Oct 9 07:41:48.054404 kernel: Linux version 6.6.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Tue Oct 8 18:19:34 -00 2024 Oct 9 07:41:48.054429 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=1839da262570fb938be558d95db7fc3d986a0d71e1b77d40d35a3e2a1bac7dcd Oct 9 07:41:48.054442 kernel: BIOS-provided physical RAM map: Oct 9 07:41:48.054449 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Oct 9 07:41:48.054456 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Oct 9 07:41:48.054463 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Oct 9 07:41:48.054472 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable Oct 9 07:41:48.054479 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved Oct 9 07:41:48.054486 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Oct 9 07:41:48.054496 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Oct 9 07:41:48.054503 kernel: NX (Execute Disable) protection: active Oct 9 07:41:48.054511 kernel: APIC: Static calls initialized Oct 9 07:41:48.054518 kernel: SMBIOS 2.8 present. Oct 9 07:41:48.054546 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014 Oct 9 07:41:48.054555 kernel: Hypervisor detected: KVM Oct 9 07:41:48.054566 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Oct 9 07:41:48.054574 kernel: kvm-clock: using sched offset of 4900702494 cycles Oct 9 07:41:48.054582 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Oct 9 07:41:48.054590 kernel: tsc: Detected 1996.249 MHz processor Oct 9 07:41:48.054598 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Oct 9 07:41:48.054606 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Oct 9 07:41:48.054614 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000 Oct 9 07:41:48.054622 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Oct 9 07:41:48.054630 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Oct 9 07:41:48.054640 kernel: ACPI: Early table checksum verification disabled Oct 9 07:41:48.054648 kernel: ACPI: RSDP 0x00000000000F5930 000014 (v00 BOCHS ) Oct 9 07:41:48.054656 kernel: ACPI: RSDT 0x000000007FFE1848 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 9 07:41:48.054664 kernel: ACPI: FACP 0x000000007FFE172C 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 9 07:41:48.054672 kernel: ACPI: DSDT 0x000000007FFE0040 0016EC (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 9 07:41:48.054680 kernel: ACPI: FACS 0x000000007FFE0000 000040 Oct 9 07:41:48.054688 kernel: ACPI: APIC 0x000000007FFE17A0 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 9 07:41:48.054695 kernel: ACPI: WAET 0x000000007FFE1820 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 9 07:41:48.054703 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe172c-0x7ffe179f] Oct 9 07:41:48.054713 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe172b] Oct 9 07:41:48.054721 kernel: ACPI: Reserving FACS table memory at [mem 
0x7ffe0000-0x7ffe003f] Oct 9 07:41:48.054729 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17a0-0x7ffe181f] Oct 9 07:41:48.054737 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe1820-0x7ffe1847] Oct 9 07:41:48.054744 kernel: No NUMA configuration found Oct 9 07:41:48.054752 kernel: Faking a node at [mem 0x0000000000000000-0x000000007ffdcfff] Oct 9 07:41:48.054760 kernel: NODE_DATA(0) allocated [mem 0x7ffd7000-0x7ffdcfff] Oct 9 07:41:48.054771 kernel: Zone ranges: Oct 9 07:41:48.054782 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Oct 9 07:41:48.054791 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdcfff] Oct 9 07:41:48.054799 kernel: Normal empty Oct 9 07:41:48.054807 kernel: Movable zone start for each node Oct 9 07:41:48.054815 kernel: Early memory node ranges Oct 9 07:41:48.054823 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Oct 9 07:41:48.054834 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff] Oct 9 07:41:48.054842 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdcfff] Oct 9 07:41:48.054851 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 9 07:41:48.054859 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Oct 9 07:41:48.054867 kernel: On node 0, zone DMA32: 35 pages in unavailable ranges Oct 9 07:41:48.054875 kernel: ACPI: PM-Timer IO Port: 0x608 Oct 9 07:41:48.054883 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Oct 9 07:41:48.054891 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Oct 9 07:41:48.054899 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Oct 9 07:41:48.054910 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Oct 9 07:41:48.054918 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Oct 9 07:41:48.054926 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Oct 9 07:41:48.054934 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Oct 9 07:41:48.054942 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Oct 9 07:41:48.054951 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Oct 9 07:41:48.054959 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Oct 9 07:41:48.054967 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Oct 9 07:41:48.054975 kernel: Booting paravirtualized kernel on KVM Oct 9 07:41:48.054983 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Oct 9 07:41:48.054994 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Oct 9 07:41:48.055002 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576 Oct 9 07:41:48.055010 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152 Oct 9 07:41:48.055018 kernel: pcpu-alloc: [0] 0 1 Oct 9 07:41:48.055026 kernel: kvm-guest: PV spinlocks disabled, no host support Oct 9 07:41:48.055036 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=1839da262570fb938be558d95db7fc3d986a0d71e1b77d40d35a3e2a1bac7dcd Oct 9 07:41:48.055045 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Oct 9 07:41:48.055055 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 9 07:41:48.055063 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Oct 9 07:41:48.055071 kernel: Fallback order for Node 0: 0 Oct 9 07:41:48.055079 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515805 Oct 9 07:41:48.055088 kernel: Policy zone: DMA32 Oct 9 07:41:48.055096 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 9 07:41:48.055104 kernel: Memory: 1965068K/2096620K available (12288K kernel code, 2304K rwdata, 22648K rodata, 49452K init, 1888K bss, 131292K reserved, 0K cma-reserved) Oct 9 07:41:48.055113 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Oct 9 07:41:48.055121 kernel: ftrace: allocating 37706 entries in 148 pages Oct 9 07:41:48.055131 kernel: ftrace: allocated 148 pages with 3 groups Oct 9 07:41:48.055139 kernel: Dynamic Preempt: voluntary Oct 9 07:41:48.055147 kernel: rcu: Preemptible hierarchical RCU implementation. Oct 9 07:41:48.055156 kernel: rcu: RCU event tracing is enabled. Oct 9 07:41:48.055165 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Oct 9 07:41:48.055173 kernel: Trampoline variant of Tasks RCU enabled. Oct 9 07:41:48.055181 kernel: Rude variant of Tasks RCU enabled. Oct 9 07:41:48.055190 kernel: Tracing variant of Tasks RCU enabled. Oct 9 07:41:48.055198 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Oct 9 07:41:48.055208 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Oct 9 07:41:48.055216 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Oct 9 07:41:48.055224 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Oct 9 07:41:48.055232 kernel: Console: colour VGA+ 80x25 Oct 9 07:41:48.055240 kernel: printk: console [tty0] enabled Oct 9 07:41:48.055248 kernel: printk: console [ttyS0] enabled Oct 9 07:41:48.055257 kernel: ACPI: Core revision 20230628 Oct 9 07:41:48.055265 kernel: APIC: Switch to symmetric I/O mode setup Oct 9 07:41:48.055273 kernel: x2apic enabled Oct 9 07:41:48.055281 kernel: APIC: Switched APIC routing to: physical x2apic Oct 9 07:41:48.055292 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Oct 9 07:41:48.055300 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Oct 9 07:41:48.055309 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249) Oct 9 07:41:48.055317 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Oct 9 07:41:48.055325 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Oct 9 07:41:48.055333 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Oct 9 07:41:48.055341 kernel: Spectre V2 : Mitigation: Retpolines Oct 9 07:41:48.055350 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Oct 9 07:41:48.055358 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Oct 9 07:41:48.055368 kernel: Speculative Store Bypass: Vulnerable Oct 9 07:41:48.055376 kernel: x86/fpu: x87 FPU will use FXSAVE Oct 9 07:41:48.055384 kernel: Freeing SMP alternatives memory: 32K Oct 9 07:41:48.055392 kernel: pid_max: default: 32768 minimum: 301 Oct 9 07:41:48.055400 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity Oct 9 07:41:48.055409 kernel: SELinux: Initializing. 
Oct 9 07:41:48.055417 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Oct 9 07:41:48.055426 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Oct 9 07:41:48.055442 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3) Oct 9 07:41:48.055450 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Oct 9 07:41:48.055459 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Oct 9 07:41:48.055469 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Oct 9 07:41:48.055478 kernel: Performance Events: AMD PMU driver. Oct 9 07:41:48.055486 kernel: ... version: 0 Oct 9 07:41:48.055495 kernel: ... bit width: 48 Oct 9 07:41:48.055504 kernel: ... generic registers: 4 Oct 9 07:41:48.055514 kernel: ... value mask: 0000ffffffffffff Oct 9 07:41:48.058053 kernel: ... max period: 00007fffffffffff Oct 9 07:41:48.058066 kernel: ... fixed-purpose events: 0 Oct 9 07:41:48.058076 kernel: ... event mask: 000000000000000f Oct 9 07:41:48.058085 kernel: signal: max sigframe size: 1440 Oct 9 07:41:48.058093 kernel: rcu: Hierarchical SRCU implementation. Oct 9 07:41:48.058103 kernel: rcu: Max phase no-delay instances is 400. Oct 9 07:41:48.058111 kernel: smp: Bringing up secondary CPUs ... Oct 9 07:41:48.058120 kernel: smpboot: x86: Booting SMP configuration: Oct 9 07:41:48.058134 kernel: .... node #0, CPUs: #1 Oct 9 07:41:48.058142 kernel: smp: Brought up 1 node, 2 CPUs Oct 9 07:41:48.058151 kernel: smpboot: Max logical packages: 2 Oct 9 07:41:48.058159 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS) Oct 9 07:41:48.058168 kernel: devtmpfs: initialized Oct 9 07:41:48.058176 kernel: x86/mm: Memory block size: 128MB Oct 9 07:41:48.058185 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 9 07:41:48.058194 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Oct 9 07:41:48.058203 kernel: pinctrl core: initialized pinctrl subsystem Oct 9 07:41:48.058211 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 9 07:41:48.058222 kernel: audit: initializing netlink subsys (disabled) Oct 9 07:41:48.058231 kernel: audit: type=2000 audit(1728459706.681:1): state=initialized audit_enabled=0 res=1 Oct 9 07:41:48.058239 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 9 07:41:48.058248 kernel: thermal_sys: Registered thermal governor 'user_space' Oct 9 07:41:48.058256 kernel: cpuidle: using governor menu Oct 9 07:41:48.058265 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 9 07:41:48.058274 kernel: dca service started, version 1.12.1 Oct 9 07:41:48.058282 kernel: PCI: Using configuration type 1 for base access Oct 9 07:41:48.058291 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Oct 9 07:41:48.058302 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Oct 9 07:41:48.058311 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Oct 9 07:41:48.058319 kernel: ACPI: Added _OSI(Module Device) Oct 9 07:41:48.058328 kernel: ACPI: Added _OSI(Processor Device) Oct 9 07:41:48.058337 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Oct 9 07:41:48.058345 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 9 07:41:48.058354 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 9 07:41:48.058362 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Oct 9 07:41:48.058371 kernel: ACPI: Interpreter enabled Oct 9 07:41:48.058381 kernel: ACPI: PM: (supports S0 S3 S5) Oct 9 07:41:48.058390 kernel: ACPI: Using IOAPIC for interrupt routing Oct 9 07:41:48.058399 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Oct 9 07:41:48.058407 kernel: PCI: Using E820 reservations for host bridge windows Oct 9 07:41:48.058416 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Oct 9 07:41:48.058424 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Oct 9 07:41:48.058581 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Oct 9 07:41:48.058680 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Oct 9 07:41:48.058774 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Oct 9 07:41:48.058787 kernel: acpiphp: Slot [3] registered Oct 9 07:41:48.058796 kernel: acpiphp: Slot [4] registered Oct 9 07:41:48.058805 kernel: acpiphp: Slot [5] registered Oct 9 07:41:48.058813 kernel: acpiphp: Slot [6] registered Oct 9 07:41:48.058822 kernel: acpiphp: Slot [7] registered Oct 9 07:41:48.058831 kernel: acpiphp: Slot [8] registered Oct 9 07:41:48.058839 kernel: acpiphp: Slot [9] registered Oct 9 07:41:48.058851 kernel: acpiphp: Slot [10] registered Oct 9 07:41:48.058859 kernel: acpiphp: Slot [11] registered Oct 9 07:41:48.058868 kernel: acpiphp: Slot [12] registered Oct 9 07:41:48.058876 kernel: acpiphp: Slot [13] registered Oct 9 07:41:48.058885 kernel: acpiphp: Slot [14] registered Oct 9 07:41:48.058893 kernel: acpiphp: Slot [15] registered Oct 9 07:41:48.058902 kernel: acpiphp: Slot [16] registered Oct 9 07:41:48.058910 kernel: acpiphp: Slot [17] registered Oct 9 07:41:48.058919 kernel: acpiphp: Slot [18] registered Oct 9 07:41:48.058927 kernel: acpiphp: Slot [19] registered Oct 9 07:41:48.058937 kernel: acpiphp: Slot [20] registered Oct 9 07:41:48.058946 kernel: acpiphp: Slot [21] registered Oct 9 07:41:48.058954 kernel: acpiphp: Slot [22] registered Oct 9 07:41:48.058963 kernel: acpiphp: Slot [23] registered Oct 9 07:41:48.058971 kernel: acpiphp: Slot [24] registered Oct 9 07:41:48.058980 kernel: acpiphp: Slot [25] registered Oct 9 07:41:48.058988 kernel: acpiphp: Slot [26] registered Oct 9 07:41:48.058997 kernel: acpiphp: Slot [27] registered Oct 9 07:41:48.059005 kernel: acpiphp: Slot [28] registered Oct 9 07:41:48.059016 kernel: acpiphp: Slot [29] registered Oct 9 07:41:48.059024 kernel: acpiphp: Slot [30] registered Oct 9 07:41:48.059033 kernel: acpiphp: Slot [31] registered Oct 9 07:41:48.059042 kernel: PCI host bridge to bus 0000:00 Oct 9 07:41:48.059134 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Oct 9 07:41:48.059225 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Oct 9 07:41:48.059308 kernel: pci_bus 0000:00: 
root bus resource [mem 0x000a0000-0x000bffff window] Oct 9 07:41:48.059387 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Oct 9 07:41:48.059470 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Oct 9 07:41:48.059576 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Oct 9 07:41:48.059680 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Oct 9 07:41:48.059785 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Oct 9 07:41:48.059882 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Oct 9 07:41:48.059971 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f] Oct 9 07:41:48.060065 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Oct 9 07:41:48.060155 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Oct 9 07:41:48.060245 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Oct 9 07:41:48.060333 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Oct 9 07:41:48.060429 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Oct 9 07:41:48.060519 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Oct 9 07:41:48.066690 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Oct 9 07:41:48.066797 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Oct 9 07:41:48.066889 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Oct 9 07:41:48.066980 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref] Oct 9 07:41:48.067069 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff] Oct 9 07:41:48.067157 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref] Oct 9 07:41:48.067246 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Oct 9 07:41:48.067348 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Oct 9 07:41:48.067439 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf] Oct 9 07:41:48.067544 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff] Oct 9 07:41:48.067639 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref] Oct 9 07:41:48.067738 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref] Oct 9 07:41:48.067837 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Oct 9 07:41:48.067926 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Oct 9 07:41:48.068021 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff] Oct 9 07:41:48.068109 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref] Oct 9 07:41:48.068205 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 Oct 9 07:41:48.068295 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff] Oct 9 07:41:48.068383 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref] Oct 9 07:41:48.068481 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 Oct 9 07:41:48.073281 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f] Oct 9 07:41:48.073382 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref] Oct 9 07:41:48.073395 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Oct 9 07:41:48.073405 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Oct 9 07:41:48.073414 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Oct 9 07:41:48.073423 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Oct 9 07:41:48.073431 kernel: ACPI: PCI: Interrupt link LNKS configured for 
IRQ 9 Oct 9 07:41:48.073440 kernel: iommu: Default domain type: Translated Oct 9 07:41:48.073449 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Oct 9 07:41:48.073458 kernel: PCI: Using ACPI for IRQ routing Oct 9 07:41:48.073470 kernel: PCI: pci_cache_line_size set to 64 bytes Oct 9 07:41:48.073479 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Oct 9 07:41:48.073487 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff] Oct 9 07:41:48.073594 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Oct 9 07:41:48.073683 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Oct 9 07:41:48.073769 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Oct 9 07:41:48.073782 kernel: vgaarb: loaded Oct 9 07:41:48.073791 kernel: clocksource: Switched to clocksource kvm-clock Oct 9 07:41:48.073800 kernel: VFS: Disk quotas dquot_6.6.0 Oct 9 07:41:48.073812 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 9 07:41:48.073821 kernel: pnp: PnP ACPI init Oct 9 07:41:48.073909 kernel: pnp 00:03: [dma 2] Oct 9 07:41:48.073923 kernel: pnp: PnP ACPI: found 5 devices Oct 9 07:41:48.073933 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Oct 9 07:41:48.073941 kernel: NET: Registered PF_INET protocol family Oct 9 07:41:48.073950 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 9 07:41:48.073960 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Oct 9 07:41:48.073972 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 9 07:41:48.073982 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Oct 9 07:41:48.073990 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Oct 9 07:41:48.073999 kernel: TCP: Hash tables configured (established 16384 bind 16384) Oct 9 07:41:48.074008 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Oct 9 07:41:48.074017 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Oct 9 07:41:48.074026 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 9 07:41:48.074034 kernel: NET: Registered PF_XDP protocol family Oct 9 07:41:48.074113 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Oct 9 07:41:48.074195 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Oct 9 07:41:48.074271 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Oct 9 07:41:48.074347 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Oct 9 07:41:48.074422 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Oct 9 07:41:48.074510 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Oct 9 07:41:48.074640 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Oct 9 07:41:48.074654 kernel: PCI: CLS 0 bytes, default 64 Oct 9 07:41:48.074667 kernel: Initialise system trusted keyrings Oct 9 07:41:48.074676 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Oct 9 07:41:48.074684 kernel: Key type asymmetric registered Oct 9 07:41:48.074693 kernel: Asymmetric key parser 'x509' registered Oct 9 07:41:48.074702 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Oct 9 07:41:48.074710 kernel: io scheduler mq-deadline registered Oct 9 07:41:48.074719 kernel: io scheduler kyber registered Oct 9 07:41:48.074728 kernel: io scheduler bfq registered Oct 9 07:41:48.074736 
kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Oct 9 07:41:48.074747 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Oct 9 07:41:48.074756 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Oct 9 07:41:48.074765 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Oct 9 07:41:48.074774 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Oct 9 07:41:48.074783 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 9 07:41:48.074792 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Oct 9 07:41:48.074801 kernel: random: crng init done Oct 9 07:41:48.074809 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Oct 9 07:41:48.074818 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Oct 9 07:41:48.074827 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Oct 9 07:41:48.074929 kernel: rtc_cmos 00:04: RTC can wake from S4 Oct 9 07:41:48.074945 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Oct 9 07:41:48.075023 kernel: rtc_cmos 00:04: registered as rtc0 Oct 9 07:41:48.075104 kernel: rtc_cmos 00:04: setting system clock to 2024-10-09T07:41:47 UTC (1728459707) Oct 9 07:41:48.075183 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Oct 9 07:41:48.075196 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Oct 9 07:41:48.075205 kernel: NET: Registered PF_INET6 protocol family Oct 9 07:41:48.075217 kernel: Segment Routing with IPv6 Oct 9 07:41:48.075226 kernel: In-situ OAM (IOAM) with IPv6 Oct 9 07:41:48.075234 kernel: NET: Registered PF_PACKET protocol family Oct 9 07:41:48.075243 kernel: Key type dns_resolver registered Oct 9 07:41:48.075252 kernel: IPI shorthand broadcast: enabled Oct 9 07:41:48.075260 kernel: sched_clock: Marking stable (979008176, 139671717)->(1123646168, -4966275) Oct 9 07:41:48.075269 kernel: registered taskstats version 1 Oct 9 07:41:48.075278 kernel: Loading compiled-in X.509 certificates Oct 9 07:41:48.075286 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.54-flatcar: 0b7ba59a46acf969bcd97270f441857501641c76' Oct 9 07:41:48.075297 kernel: Key type .fscrypt registered Oct 9 07:41:48.075305 kernel: Key type fscrypt-provisioning registered Oct 9 07:41:48.075314 kernel: ima: No TPM chip found, activating TPM-bypass! Oct 9 07:41:48.075323 kernel: ima: Allocated hash algorithm: sha1 Oct 9 07:41:48.075331 kernel: ima: No architecture policies found Oct 9 07:41:48.075340 kernel: clk: Disabling unused clocks Oct 9 07:41:48.075349 kernel: Freeing unused kernel image (initmem) memory: 49452K Oct 9 07:41:48.075357 kernel: Write protecting the kernel read-only data: 36864k Oct 9 07:41:48.075366 kernel: Freeing unused kernel image (rodata/data gap) memory: 1928K Oct 9 07:41:48.075377 kernel: Run /init as init process Oct 9 07:41:48.075385 kernel: with arguments: Oct 9 07:41:48.075394 kernel: /init Oct 9 07:41:48.075402 kernel: with environment: Oct 9 07:41:48.075411 kernel: HOME=/ Oct 9 07:41:48.075419 kernel: TERM=linux Oct 9 07:41:48.075427 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 9 07:41:48.075438 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Oct 9 07:41:48.075452 systemd[1]: Detected virtualization kvm. 
Oct 9 07:41:48.075461 systemd[1]: Detected architecture x86-64. Oct 9 07:41:48.075471 systemd[1]: Running in initrd. Oct 9 07:41:48.075480 systemd[1]: No hostname configured, using default hostname. Oct 9 07:41:48.075489 systemd[1]: Hostname set to . Oct 9 07:41:48.075499 systemd[1]: Initializing machine ID from VM UUID. Oct 9 07:41:48.075509 systemd[1]: Queued start job for default target initrd.target. Oct 9 07:41:48.075519 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 9 07:41:48.077217 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 9 07:41:48.077228 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Oct 9 07:41:48.077238 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 9 07:41:48.077248 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Oct 9 07:41:48.077258 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Oct 9 07:41:48.077269 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Oct 9 07:41:48.077283 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Oct 9 07:41:48.077293 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 9 07:41:48.077302 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 9 07:41:48.077312 systemd[1]: Reached target paths.target - Path Units. Oct 9 07:41:48.077322 systemd[1]: Reached target slices.target - Slice Units. Oct 9 07:41:48.077341 systemd[1]: Reached target swap.target - Swaps. Oct 9 07:41:48.077352 systemd[1]: Reached target timers.target - Timer Units. Oct 9 07:41:48.077364 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Oct 9 07:41:48.077373 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 9 07:41:48.077385 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 9 07:41:48.077395 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Oct 9 07:41:48.077404 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 9 07:41:48.077414 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 9 07:41:48.077424 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 9 07:41:48.077434 systemd[1]: Reached target sockets.target - Socket Units. Oct 9 07:41:48.077445 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Oct 9 07:41:48.077455 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 9 07:41:48.077464 systemd[1]: Finished network-cleanup.service - Network Cleanup. Oct 9 07:41:48.077474 systemd[1]: Starting systemd-fsck-usr.service... Oct 9 07:41:48.077484 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 9 07:41:48.077493 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 9 07:41:48.077571 systemd-journald[183]: Collecting audit messages is disabled. Oct 9 07:41:48.077600 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 9 07:41:48.077610 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. 
Oct 9 07:41:48.077620 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 9 07:41:48.077629 systemd[1]: Finished systemd-fsck-usr.service. Oct 9 07:41:48.077642 systemd-journald[183]: Journal started Oct 9 07:41:48.077663 systemd-journald[183]: Runtime Journal (/run/log/journal/ca56d9d69132450b99071028228fd241) is 4.9M, max 39.3M, 34.4M free. Oct 9 07:41:48.086070 systemd-modules-load[184]: Inserted module 'overlay' Oct 9 07:41:48.127744 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 9 07:41:48.127772 systemd[1]: Started systemd-journald.service - Journal Service. Oct 9 07:41:48.127787 kernel: Bridge firewalling registered Oct 9 07:41:48.126630 systemd-modules-load[184]: Inserted module 'br_netfilter' Oct 9 07:41:48.128822 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 9 07:41:48.129516 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 07:41:48.136693 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 9 07:41:48.146716 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 9 07:41:48.149672 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 9 07:41:48.157391 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Oct 9 07:41:48.160453 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 9 07:41:48.162569 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 9 07:41:48.169819 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 9 07:41:48.171991 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 9 07:41:48.180694 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Oct 9 07:41:48.181451 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Oct 9 07:41:48.182116 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 9 07:41:48.186472 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 9 07:41:48.195442 dracut-cmdline[217]: dracut-dracut-053 Oct 9 07:41:48.199393 dracut-cmdline[217]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=1839da262570fb938be558d95db7fc3d986a0d71e1b77d40d35a3e2a1bac7dcd Oct 9 07:41:48.236120 systemd-resolved[220]: Positive Trust Anchors: Oct 9 07:41:48.236945 systemd-resolved[220]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 9 07:41:48.237729 systemd-resolved[220]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Oct 9 07:41:48.242749 systemd-resolved[220]: Defaulting to hostname 'linux'. Oct 9 07:41:48.244712 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 9 07:41:48.245269 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 9 07:41:48.278568 kernel: SCSI subsystem initialized Oct 9 07:41:48.290586 kernel: Loading iSCSI transport class v2.0-870. Oct 9 07:41:48.304635 kernel: iscsi: registered transport (tcp) Oct 9 07:41:48.332660 kernel: iscsi: registered transport (qla4xxx) Oct 9 07:41:48.332787 kernel: QLogic iSCSI HBA Driver Oct 9 07:41:48.394754 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Oct 9 07:41:48.400680 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Oct 9 07:41:48.484923 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 9 07:41:48.485044 kernel: device-mapper: uevent: version 1.0.3 Oct 9 07:41:48.485080 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Oct 9 07:41:48.567624 kernel: raid6: sse2x4 gen() 5201 MB/s Oct 9 07:41:48.584680 kernel: raid6: sse2x2 gen() 9632 MB/s Oct 9 07:41:48.601713 kernel: raid6: sse2x1 gen() 10152 MB/s Oct 9 07:41:48.601811 kernel: raid6: using algorithm sse2x1 gen() 10152 MB/s Oct 9 07:41:48.619861 kernel: raid6: .... xor() 7409 MB/s, rmw enabled Oct 9 07:41:48.619928 kernel: raid6: using ssse3x2 recovery algorithm Oct 9 07:41:48.648790 kernel: xor: measuring software checksum speed Oct 9 07:41:48.648856 kernel: prefetch64-sse : 17236 MB/sec Oct 9 07:41:48.649787 kernel: generic_sse : 15712 MB/sec Oct 9 07:41:48.650653 kernel: xor: using function: prefetch64-sse (17236 MB/sec) Oct 9 07:41:48.854649 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 9 07:41:48.873456 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 9 07:41:48.889852 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 9 07:41:48.901486 systemd-udevd[403]: Using default interface naming scheme 'v255'. Oct 9 07:41:48.905933 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 9 07:41:48.918754 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Oct 9 07:41:48.954554 dracut-pre-trigger[416]: rd.md=0: removing MD RAID activation Oct 9 07:41:48.999631 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Oct 9 07:41:49.005884 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 9 07:41:49.047199 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 9 07:41:49.061226 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Oct 9 07:41:49.113450 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 9 07:41:49.115966 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 9 07:41:49.116476 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 9 07:41:49.117072 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 9 07:41:49.124704 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 9 07:41:49.144557 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues Oct 9 07:41:49.146345 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 9 07:41:49.156271 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 9 07:41:49.157837 kernel: virtio_blk virtio2: [vda] 41943040 512-byte logical blocks (21.5 GB/20.0 GiB) Oct 9 07:41:49.156418 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 9 07:41:49.159284 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 9 07:41:49.159880 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 9 07:41:49.160010 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 07:41:49.176989 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 9 07:41:49.177034 kernel: GPT:17805311 != 41943039 Oct 9 07:41:49.177057 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 9 07:41:49.177070 kernel: GPT:17805311 != 41943039 Oct 9 07:41:49.177082 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 9 07:41:49.177094 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 9 07:41:49.160508 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 9 07:41:49.169904 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 9 07:41:49.198554 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (455) Oct 9 07:41:49.212547 kernel: BTRFS: device fsid a442e753-4749-4732-ba27-ea845965fe4a devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (466) Oct 9 07:41:49.228556 kernel: libata version 3.00 loaded. Oct 9 07:41:49.232004 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Oct 9 07:41:49.260480 kernel: ata_piix 0000:00:01.1: version 2.13 Oct 9 07:41:49.260720 kernel: scsi host0: ata_piix Oct 9 07:41:49.260851 kernel: scsi host1: ata_piix Oct 9 07:41:49.260970 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 Oct 9 07:41:49.260983 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 Oct 9 07:41:49.266552 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Oct 9 07:41:49.267443 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 07:41:49.273954 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 9 07:41:49.278544 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Oct 9 07:41:49.279095 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Oct 9 07:41:49.291716 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
Oct 9 07:41:49.295659 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 9 07:41:49.305110 disk-uuid[501]: Primary Header is updated. Oct 9 07:41:49.305110 disk-uuid[501]: Secondary Entries is updated. Oct 9 07:41:49.305110 disk-uuid[501]: Secondary Header is updated. Oct 9 07:41:49.317596 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 9 07:41:49.325400 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 9 07:41:49.325161 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 9 07:41:50.340626 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 9 07:41:50.342457 disk-uuid[502]: The operation has completed successfully. Oct 9 07:41:50.419633 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 9 07:41:50.420727 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 9 07:41:50.466641 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Oct 9 07:41:50.472050 sh[523]: Success Oct 9 07:41:50.490678 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3" Oct 9 07:41:50.558355 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Oct 9 07:41:50.567955 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Oct 9 07:41:50.579896 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Oct 9 07:41:50.599584 kernel: BTRFS info (device dm-0): first mount of filesystem a442e753-4749-4732-ba27-ea845965fe4a Oct 9 07:41:50.599644 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Oct 9 07:41:50.604728 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Oct 9 07:41:50.606978 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 9 07:41:50.607025 kernel: BTRFS info (device dm-0): using free space tree Oct 9 07:41:50.619159 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Oct 9 07:41:50.620275 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Oct 9 07:41:50.629743 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Oct 9 07:41:50.634823 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Oct 9 07:41:50.644724 kernel: BTRFS info (device vda6): first mount of filesystem aa256cb8-f25c-41d0-8582-dc8cedfde7ce Oct 9 07:41:50.644767 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 9 07:41:50.649602 kernel: BTRFS info (device vda6): using free space tree Oct 9 07:41:50.661573 kernel: BTRFS info (device vda6): auto enabling async discard Oct 9 07:41:50.678039 systemd[1]: mnt-oem.mount: Deactivated successfully. Oct 9 07:41:50.679731 kernel: BTRFS info (device vda6): last unmount of filesystem aa256cb8-f25c-41d0-8582-dc8cedfde7ce Oct 9 07:41:50.701125 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 9 07:41:50.710902 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Oct 9 07:41:50.752374 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 9 07:41:50.761343 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Oct 9 07:41:50.790425 systemd-networkd[707]: lo: Link UP Oct 9 07:41:50.790434 systemd-networkd[707]: lo: Gained carrier Oct 9 07:41:50.792205 systemd-networkd[707]: Enumeration completed Oct 9 07:41:50.792347 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 9 07:41:50.792805 systemd-networkd[707]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 9 07:41:50.792809 systemd-networkd[707]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 9 07:41:50.793708 systemd-networkd[707]: eth0: Link UP Oct 9 07:41:50.793712 systemd-networkd[707]: eth0: Gained carrier Oct 9 07:41:50.793719 systemd-networkd[707]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 9 07:41:50.794791 systemd[1]: Reached target network.target - Network. Oct 9 07:41:50.807596 systemd-networkd[707]: eth0: DHCPv4 address 172.24.4.70/24, gateway 172.24.4.1 acquired from 172.24.4.1 Oct 9 07:41:50.868896 ignition[650]: Ignition 2.18.0 Oct 9 07:41:50.868920 ignition[650]: Stage: fetch-offline Oct 9 07:41:50.869014 ignition[650]: no configs at "/usr/lib/ignition/base.d" Oct 9 07:41:50.869039 ignition[650]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Oct 9 07:41:50.871313 ignition[650]: parsed url from cmdline: "" Oct 9 07:41:50.871324 ignition[650]: no config URL provided Oct 9 07:41:50.871339 ignition[650]: reading system config file "/usr/lib/ignition/user.ign" Oct 9 07:41:50.874232 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 9 07:41:50.871360 ignition[650]: no config at "/usr/lib/ignition/user.ign" Oct 9 07:41:50.871372 ignition[650]: failed to fetch config: resource requires networking Oct 9 07:41:50.871843 ignition[650]: Ignition finished successfully Oct 9 07:41:50.883731 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Oct 9 07:41:50.919831 ignition[716]: Ignition 2.18.0 Oct 9 07:41:50.919859 ignition[716]: Stage: fetch Oct 9 07:41:50.920234 ignition[716]: no configs at "/usr/lib/ignition/base.d" Oct 9 07:41:50.920261 ignition[716]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Oct 9 07:41:50.920468 ignition[716]: parsed url from cmdline: "" Oct 9 07:41:50.920478 ignition[716]: no config URL provided Oct 9 07:41:50.920491 ignition[716]: reading system config file "/usr/lib/ignition/user.ign" Oct 9 07:41:50.920518 ignition[716]: no config at "/usr/lib/ignition/user.ign" Oct 9 07:41:50.920938 ignition[716]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Oct 9 07:41:50.920976 ignition[716]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Oct 9 07:41:50.921093 ignition[716]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Oct 9 07:41:51.235462 ignition[716]: GET result: OK Oct 9 07:41:51.237421 ignition[716]: parsing config with SHA512: 43fa89f94d3ea528f48ed15b2fc5c0ad2ceadfe24fb6cf365fd40bcea07642f3aa0df812c5d0e86821d0cff7050cf36bb3ed9c02689b72deeeeb189856fe44cb Oct 9 07:41:51.251033 unknown[716]: fetched base config from "system" Oct 9 07:41:51.251073 unknown[716]: fetched base config from "system" Oct 9 07:41:51.251097 unknown[716]: fetched user config from "openstack" Oct 9 07:41:51.254273 ignition[716]: fetch: fetch complete Oct 9 07:41:51.254295 ignition[716]: fetch: fetch passed Oct 9 07:41:51.257748 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). 
Oct 9 07:41:51.254420 ignition[716]: Ignition finished successfully Oct 9 07:41:51.268890 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Oct 9 07:41:51.302803 ignition[723]: Ignition 2.18.0 Oct 9 07:41:51.302830 ignition[723]: Stage: kargs Oct 9 07:41:51.303222 ignition[723]: no configs at "/usr/lib/ignition/base.d" Oct 9 07:41:51.303249 ignition[723]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Oct 9 07:41:51.305814 ignition[723]: kargs: kargs passed Oct 9 07:41:51.308011 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Oct 9 07:41:51.305915 ignition[723]: Ignition finished successfully Oct 9 07:41:51.323344 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Oct 9 07:41:51.351803 ignition[731]: Ignition 2.18.0 Oct 9 07:41:51.351832 ignition[731]: Stage: disks Oct 9 07:41:51.352256 ignition[731]: no configs at "/usr/lib/ignition/base.d" Oct 9 07:41:51.352284 ignition[731]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Oct 9 07:41:51.354790 ignition[731]: disks: disks passed Oct 9 07:41:51.356956 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 9 07:41:51.354895 ignition[731]: Ignition finished successfully Oct 9 07:41:51.359908 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 9 07:41:51.362455 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 9 07:41:51.364926 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 9 07:41:51.367662 systemd[1]: Reached target sysinit.target - System Initialization. Oct 9 07:41:51.370447 systemd[1]: Reached target basic.target - Basic System. Oct 9 07:41:51.380833 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Oct 9 07:41:51.412225 systemd-fsck[740]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Oct 9 07:41:51.424925 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 9 07:41:51.433790 systemd[1]: Mounting sysroot.mount - /sysroot... Oct 9 07:41:51.595544 kernel: EXT4-fs (vda9): mounted filesystem ef891253-2811-499a-a9aa-02f0764c1b95 r/w with ordered data mode. Quota mode: none. Oct 9 07:41:51.597040 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 9 07:41:51.599613 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 9 07:41:51.610704 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 9 07:41:51.613899 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 9 07:41:51.615599 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Oct 9 07:41:51.617846 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Oct 9 07:41:51.620899 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 9 07:41:51.620956 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 9 07:41:51.626725 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (748) Oct 9 07:41:51.632386 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Oct 9 07:41:51.634668 kernel: BTRFS info (device vda6): first mount of filesystem aa256cb8-f25c-41d0-8582-dc8cedfde7ce Oct 9 07:41:51.635878 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 9 07:41:51.635935 kernel: BTRFS info (device vda6): using free space tree Oct 9 07:41:51.642601 kernel: BTRFS info (device vda6): auto enabling async discard Oct 9 07:41:51.647009 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 9 07:41:51.656917 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Oct 9 07:41:51.792245 initrd-setup-root[776]: cut: /sysroot/etc/passwd: No such file or directory Oct 9 07:41:51.801387 initrd-setup-root[783]: cut: /sysroot/etc/group: No such file or directory Oct 9 07:41:51.806780 initrd-setup-root[790]: cut: /sysroot/etc/shadow: No such file or directory Oct 9 07:41:51.813263 initrd-setup-root[797]: cut: /sysroot/etc/gshadow: No such file or directory Oct 9 07:41:51.933230 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 9 07:41:51.940719 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 9 07:41:51.943847 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 9 07:41:51.975344 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 9 07:41:51.980308 kernel: BTRFS info (device vda6): last unmount of filesystem aa256cb8-f25c-41d0-8582-dc8cedfde7ce Oct 9 07:41:52.019839 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Oct 9 07:41:52.021628 ignition[871]: INFO : Ignition 2.18.0 Oct 9 07:41:52.021628 ignition[871]: INFO : Stage: mount Oct 9 07:41:52.022759 ignition[871]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 9 07:41:52.022759 ignition[871]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Oct 9 07:41:52.022759 ignition[871]: INFO : mount: mount passed Oct 9 07:41:52.022759 ignition[871]: INFO : Ignition finished successfully Oct 9 07:41:52.023612 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 9 07:41:52.056750 systemd-networkd[707]: eth0: Gained IPv6LL Oct 9 07:41:58.870473 coreos-metadata[750]: Oct 09 07:41:58.870 WARN failed to locate config-drive, using the metadata service API instead Oct 9 07:41:58.911231 coreos-metadata[750]: Oct 09 07:41:58.911 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Oct 9 07:41:58.927603 coreos-metadata[750]: Oct 09 07:41:58.927 INFO Fetch successful Oct 9 07:41:58.929244 coreos-metadata[750]: Oct 09 07:41:58.929 INFO wrote hostname ci-3975-2-2-3-e7db599e29.novalocal to /sysroot/etc/hostname Oct 9 07:41:58.931463 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Oct 9 07:41:58.931790 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Oct 9 07:41:58.943797 systemd[1]: Starting ignition-files.service - Ignition (files)... Oct 9 07:41:58.981922 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 9 07:41:58.997581 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (888) Oct 9 07:41:59.003607 kernel: BTRFS info (device vda6): first mount of filesystem aa256cb8-f25c-41d0-8582-dc8cedfde7ce Oct 9 07:41:59.003701 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 9 07:41:59.006337 kernel: BTRFS info (device vda6): using free space tree Oct 9 07:41:59.015605 kernel: BTRFS info (device vda6): auto enabling async discard Oct 9 07:41:59.021494 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Oct 9 07:41:59.062996 ignition[906]: INFO : Ignition 2.18.0 Oct 9 07:41:59.062996 ignition[906]: INFO : Stage: files Oct 9 07:41:59.065874 ignition[906]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 9 07:41:59.065874 ignition[906]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Oct 9 07:41:59.065874 ignition[906]: DEBUG : files: compiled without relabeling support, skipping Oct 9 07:41:59.071342 ignition[906]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 9 07:41:59.071342 ignition[906]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 9 07:41:59.075732 ignition[906]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 9 07:41:59.075732 ignition[906]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 9 07:41:59.080953 ignition[906]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 9 07:41:59.077037 unknown[906]: wrote ssh authorized keys file for user: core Oct 9 07:41:59.084952 ignition[906]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Oct 9 07:41:59.084952 ignition[906]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Oct 9 07:41:59.084952 ignition[906]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Oct 9 07:41:59.084952 ignition[906]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Oct 9 07:41:59.165571 ignition[906]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Oct 9 07:41:59.476173 ignition[906]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Oct 9 07:41:59.476173 ignition[906]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Oct 9 07:41:59.480984 ignition[906]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Oct 9 07:41:59.480984 ignition[906]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 9 07:41:59.480984 ignition[906]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 9 07:41:59.480984 ignition[906]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 9 07:41:59.480984 ignition[906]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 9 07:41:59.480984 ignition[906]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 9 07:41:59.480984 ignition[906]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 9 07:41:59.480984 ignition[906]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 9 07:41:59.480984 ignition[906]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 9 07:41:59.480984 ignition[906]: INFO : files: 
createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Oct 9 07:41:59.480984 ignition[906]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Oct 9 07:41:59.480984 ignition[906]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Oct 9 07:41:59.480984 ignition[906]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Oct 9 07:41:59.892600 ignition[906]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Oct 9 07:42:01.641861 ignition[906]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Oct 9 07:42:01.641861 ignition[906]: INFO : files: op(c): [started] processing unit "containerd.service" Oct 9 07:42:01.647574 ignition[906]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Oct 9 07:42:01.647574 ignition[906]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Oct 9 07:42:01.647574 ignition[906]: INFO : files: op(c): [finished] processing unit "containerd.service" Oct 9 07:42:01.647574 ignition[906]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Oct 9 07:42:01.647574 ignition[906]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 9 07:42:01.647574 ignition[906]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 9 07:42:01.647574 ignition[906]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Oct 9 07:42:01.647574 ignition[906]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Oct 9 07:42:01.647574 ignition[906]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Oct 9 07:42:01.647574 ignition[906]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 9 07:42:01.647574 ignition[906]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 9 07:42:01.647574 ignition[906]: INFO : files: files passed Oct 9 07:42:01.647574 ignition[906]: INFO : Ignition finished successfully Oct 9 07:42:01.646590 systemd[1]: Finished ignition-files.service - Ignition (files). Oct 9 07:42:01.659318 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Oct 9 07:42:01.663682 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Oct 9 07:42:01.665982 systemd[1]: ignition-quench.service: Deactivated successfully. 
Oct 9 07:42:01.689452 initrd-setup-root-after-ignition[934]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 9 07:42:01.689452 initrd-setup-root-after-ignition[934]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Oct 9 07:42:01.666085 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Oct 9 07:42:01.696172 initrd-setup-root-after-ignition[938]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 9 07:42:01.692366 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 9 07:42:01.694971 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Oct 9 07:42:01.703706 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Oct 9 07:42:01.735109 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 9 07:42:01.735331 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Oct 9 07:42:01.737573 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Oct 9 07:42:01.742838 systemd[1]: Reached target initrd.target - Initrd Default Target. Oct 9 07:42:01.744728 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Oct 9 07:42:01.749795 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Oct 9 07:42:01.766022 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 9 07:42:01.776810 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Oct 9 07:42:01.786702 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Oct 9 07:42:01.787367 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 9 07:42:01.788078 systemd[1]: Stopped target timers.target - Timer Units. Oct 9 07:42:01.790095 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 9 07:42:01.790213 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 9 07:42:01.792893 systemd[1]: Stopped target initrd.target - Initrd Default Target. Oct 9 07:42:01.793904 systemd[1]: Stopped target basic.target - Basic System. Oct 9 07:42:01.795630 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Oct 9 07:42:01.797837 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Oct 9 07:42:01.799594 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Oct 9 07:42:01.801418 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Oct 9 07:42:01.803474 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Oct 9 07:42:01.805590 systemd[1]: Stopped target sysinit.target - System Initialization. Oct 9 07:42:01.807611 systemd[1]: Stopped target local-fs.target - Local File Systems. Oct 9 07:42:01.809546 systemd[1]: Stopped target swap.target - Swaps. Oct 9 07:42:01.811555 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 9 07:42:01.811687 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Oct 9 07:42:01.814344 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Oct 9 07:42:01.815375 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 9 07:42:01.817000 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Oct 9 07:42:01.819201 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 9 07:42:01.820494 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 9 07:42:01.820662 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Oct 9 07:42:01.823813 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 9 07:42:01.823954 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 9 07:42:01.824947 systemd[1]: ignition-files.service: Deactivated successfully. Oct 9 07:42:01.825071 systemd[1]: Stopped ignition-files.service - Ignition (files). Oct 9 07:42:01.843058 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Oct 9 07:42:01.843678 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 9 07:42:01.843862 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Oct 9 07:42:01.847893 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Oct 9 07:42:01.849072 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 9 07:42:01.849332 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Oct 9 07:42:01.850182 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 9 07:42:01.850392 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Oct 9 07:42:01.860972 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 9 07:42:01.862411 ignition[959]: INFO : Ignition 2.18.0 Oct 9 07:42:01.862411 ignition[959]: INFO : Stage: umount Oct 9 07:42:01.862411 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 9 07:42:01.862411 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Oct 9 07:42:01.861073 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Oct 9 07:42:01.870482 ignition[959]: INFO : umount: umount passed Oct 9 07:42:01.870482 ignition[959]: INFO : Ignition finished successfully Oct 9 07:42:01.866745 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 9 07:42:01.866863 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Oct 9 07:42:01.869831 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 9 07:42:01.869917 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Oct 9 07:42:01.872791 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 9 07:42:01.872839 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Oct 9 07:42:01.873844 systemd[1]: ignition-fetch.service: Deactivated successfully. Oct 9 07:42:01.873888 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Oct 9 07:42:01.874421 systemd[1]: Stopped target network.target - Network. Oct 9 07:42:01.874975 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 9 07:42:01.875022 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Oct 9 07:42:01.879648 systemd[1]: Stopped target paths.target - Path Units. Oct 9 07:42:01.880210 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 9 07:42:01.883970 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 9 07:42:01.884921 systemd[1]: Stopped target slices.target - Slice Units. Oct 9 07:42:01.885441 systemd[1]: Stopped target sockets.target - Socket Units. Oct 9 07:42:01.886022 systemd[1]: iscsid.socket: Deactivated successfully. 
Oct 9 07:42:01.886070 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Oct 9 07:42:01.887738 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 9 07:42:01.887783 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 9 07:42:01.888839 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 9 07:42:01.888889 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Oct 9 07:42:01.889881 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Oct 9 07:42:01.889922 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Oct 9 07:42:01.890972 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Oct 9 07:42:01.892248 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Oct 9 07:42:01.894260 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 9 07:42:01.895567 systemd-networkd[707]: eth0: DHCPv6 lease lost Oct 9 07:42:01.898168 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 9 07:42:01.898276 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Oct 9 07:42:01.900605 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 9 07:42:01.900700 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Oct 9 07:42:01.903039 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 9 07:42:01.903209 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Oct 9 07:42:01.918660 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Oct 9 07:42:01.919437 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 9 07:42:01.919492 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 9 07:42:01.920088 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 9 07:42:01.920130 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 9 07:42:01.920713 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 9 07:42:01.920754 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Oct 9 07:42:01.921845 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Oct 9 07:42:01.921886 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Oct 9 07:42:01.927373 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 9 07:42:01.936395 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 9 07:42:01.936511 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Oct 9 07:42:01.937852 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 9 07:42:01.937982 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 9 07:42:01.939317 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 9 07:42:01.939369 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Oct 9 07:42:01.940715 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 9 07:42:01.940748 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Oct 9 07:42:01.941901 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 9 07:42:01.941944 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Oct 9 07:42:01.943602 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 9 07:42:01.943646 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. 
Oct 9 07:42:01.944803 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 9 07:42:01.944858 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 9 07:42:01.950710 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Oct 9 07:42:01.951277 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 9 07:42:01.951327 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 9 07:42:01.953478 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 9 07:42:01.953567 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 07:42:01.957010 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 9 07:42:01.957099 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Oct 9 07:42:01.990798 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 9 07:42:01.990919 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Oct 9 07:42:01.993008 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Oct 9 07:42:01.994892 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 9 07:42:01.994946 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Oct 9 07:42:02.002706 systemd[1]: Starting initrd-switch-root.service - Switch Root... Oct 9 07:42:02.017318 systemd[1]: Switching root. Oct 9 07:42:02.046667 systemd-journald[183]: Journal stopped Oct 9 07:42:03.901737 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Oct 9 07:42:03.901801 kernel: SELinux: policy capability network_peer_controls=1 Oct 9 07:42:03.901821 kernel: SELinux: policy capability open_perms=1 Oct 9 07:42:03.901834 kernel: SELinux: policy capability extended_socket_class=1 Oct 9 07:42:03.901849 kernel: SELinux: policy capability always_check_network=0 Oct 9 07:42:03.901861 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 9 07:42:03.901877 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 9 07:42:03.901888 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 9 07:42:03.901900 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 9 07:42:03.901912 kernel: audit: type=1403 audit(1728459722.972:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 9 07:42:03.901930 systemd[1]: Successfully loaded SELinux policy in 68.800ms. Oct 9 07:42:03.901945 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 25.166ms. Oct 9 07:42:03.901959 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Oct 9 07:42:03.901972 systemd[1]: Detected virtualization kvm. Oct 9 07:42:03.901987 systemd[1]: Detected architecture x86-64. Oct 9 07:42:03.902004 systemd[1]: Detected first boot. Oct 9 07:42:03.902016 systemd[1]: Hostname set to . Oct 9 07:42:03.902027 systemd[1]: Initializing machine ID from VM UUID. Oct 9 07:42:03.902039 zram_generator::config[1018]: No configuration found. Oct 9 07:42:03.902056 systemd[1]: Populated /etc with preset unit settings. Oct 9 07:42:03.902068 systemd[1]: Queued start job for default target multi-user.target. 
Oct 9 07:42:03.902084 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Oct 9 07:42:03.902097 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Oct 9 07:42:03.902110 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Oct 9 07:42:03.902123 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Oct 9 07:42:03.902134 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Oct 9 07:42:03.902146 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Oct 9 07:42:03.902158 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Oct 9 07:42:03.902170 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Oct 9 07:42:03.902182 systemd[1]: Created slice user.slice - User and Session Slice. Oct 9 07:42:03.902194 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 9 07:42:03.902206 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 9 07:42:03.902220 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Oct 9 07:42:03.902232 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Oct 9 07:42:03.902244 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Oct 9 07:42:03.902257 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 9 07:42:03.902269 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Oct 9 07:42:03.902281 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 9 07:42:03.902293 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Oct 9 07:42:03.902305 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 9 07:42:03.902318 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 9 07:42:03.902331 systemd[1]: Reached target slices.target - Slice Units. Oct 9 07:42:03.902342 systemd[1]: Reached target swap.target - Swaps. Oct 9 07:42:03.902354 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Oct 9 07:42:03.902366 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Oct 9 07:42:03.902378 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 9 07:42:03.902390 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Oct 9 07:42:03.902404 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 9 07:42:03.902416 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 9 07:42:03.902427 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 9 07:42:03.902439 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Oct 9 07:42:03.902452 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Oct 9 07:42:03.902464 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Oct 9 07:42:03.902475 systemd[1]: Mounting media.mount - External Media Directory... Oct 9 07:42:03.902488 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Oct 9 07:42:03.902500 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Oct 9 07:42:03.902519 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Oct 9 07:42:03.902557 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Oct 9 07:42:03.902569 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Oct 9 07:42:03.902581 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 9 07:42:03.902593 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 9 07:42:03.902605 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Oct 9 07:42:03.902617 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 9 07:42:03.902628 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 9 07:42:03.902641 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 9 07:42:03.902656 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Oct 9 07:42:03.902667 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 9 07:42:03.902679 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 9 07:42:03.902691 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Oct 9 07:42:03.902704 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Oct 9 07:42:03.902716 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 9 07:42:03.902727 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 9 07:42:03.902739 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 9 07:42:03.902753 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Oct 9 07:42:03.902781 systemd-journald[1128]: Collecting audit messages is disabled. Oct 9 07:42:03.902805 systemd-journald[1128]: Journal started Oct 9 07:42:03.902828 systemd-journald[1128]: Runtime Journal (/run/log/journal/ca56d9d69132450b99071028228fd241) is 4.9M, max 39.3M, 34.4M free. Oct 9 07:42:03.909559 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 9 07:42:03.916576 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 9 07:42:03.931320 systemd[1]: Started systemd-journald.service - Journal Service. Oct 9 07:42:03.925927 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Oct 9 07:42:03.927689 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Oct 9 07:42:03.928258 systemd[1]: Mounted media.mount - External Media Directory. Oct 9 07:42:03.929806 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Oct 9 07:42:03.930369 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Oct 9 07:42:03.931730 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Oct 9 07:42:03.934149 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Oct 9 07:42:03.935046 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Oct 9 07:42:03.935797 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 9 07:42:03.935956 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Oct 9 07:42:03.939869 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 9 07:42:03.940041 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 9 07:42:03.940905 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 9 07:42:03.941063 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 9 07:42:03.944757 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 9 07:42:03.946177 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 9 07:42:03.947905 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Oct 9 07:42:03.961016 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 9 07:42:03.985564 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Oct 9 07:42:03.986170 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 9 07:42:03.991640 kernel: fuse: init (API version 7.39) Oct 9 07:42:03.989654 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Oct 9 07:42:03.997551 kernel: ACPI: bus type drm_connector registered Oct 9 07:42:04.000587 kernel: loop: module loaded Oct 9 07:42:04.008152 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Oct 9 07:42:04.008860 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 9 07:42:04.011387 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Oct 9 07:42:04.020736 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 9 07:42:04.021021 systemd-journald[1128]: Time spent on flushing to /var/log/journal/ca56d9d69132450b99071028228fd241 is 21.299ms for 912 entries. Oct 9 07:42:04.021021 systemd-journald[1128]: System Journal (/var/log/journal/ca56d9d69132450b99071028228fd241) is 8.0M, max 584.8M, 576.8M free. Oct 9 07:42:04.047816 systemd-journald[1128]: Received client request to flush runtime journal. Oct 9 07:42:04.038051 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 9 07:42:04.041775 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 9 07:42:04.041959 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 9 07:42:04.044512 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 9 07:42:04.044735 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Oct 9 07:42:04.045456 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 9 07:42:04.047742 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 9 07:42:04.050043 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 9 07:42:04.051960 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Oct 9 07:42:04.053103 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Oct 9 07:42:04.059908 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. 
Oct 9 07:42:04.067223 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Oct 9 07:42:04.077318 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Oct 9 07:42:04.079720 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 9 07:42:04.088061 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Oct 9 07:42:04.093370 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 9 07:42:04.095845 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Oct 9 07:42:04.104696 systemd-tmpfiles[1167]: ACLs are not supported, ignoring. Oct 9 07:42:04.104947 systemd-tmpfiles[1167]: ACLs are not supported, ignoring. Oct 9 07:42:04.114361 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 9 07:42:04.125666 systemd[1]: Starting systemd-sysusers.service - Create System Users... Oct 9 07:42:04.128690 udevadm[1184]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Oct 9 07:42:04.157332 systemd[1]: Finished systemd-sysusers.service - Create System Users. Oct 9 07:42:04.163817 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 9 07:42:04.180283 systemd-tmpfiles[1199]: ACLs are not supported, ignoring. Oct 9 07:42:04.180306 systemd-tmpfiles[1199]: ACLs are not supported, ignoring. Oct 9 07:42:04.185345 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 9 07:42:05.203758 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Oct 9 07:42:05.209742 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 9 07:42:05.247786 systemd-udevd[1205]: Using default interface naming scheme 'v255'. Oct 9 07:42:05.293935 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 9 07:42:05.305829 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 9 07:42:05.347804 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Oct 9 07:42:05.373405 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Oct 9 07:42:05.395929 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1215) Oct 9 07:42:05.413869 systemd[1]: Started systemd-userdbd.service - User Database Manager. Oct 9 07:42:05.414597 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1218) Oct 9 07:42:05.496007 systemd-networkd[1211]: lo: Link UP Oct 9 07:42:05.496018 systemd-networkd[1211]: lo: Gained carrier Oct 9 07:42:05.497363 systemd-networkd[1211]: Enumeration completed Oct 9 07:42:05.497492 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 9 07:42:05.498241 systemd-networkd[1211]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 9 07:42:05.498246 systemd-networkd[1211]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Oct 9 07:42:05.498993 systemd-networkd[1211]: eth0: Link UP Oct 9 07:42:05.498997 systemd-networkd[1211]: eth0: Gained carrier Oct 9 07:42:05.499009 systemd-networkd[1211]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 9 07:42:05.503653 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 9 07:42:05.507651 systemd-networkd[1211]: eth0: DHCPv4 address 172.24.4.70/24, gateway 172.24.4.1 acquired from 172.24.4.1 Oct 9 07:42:05.509683 systemd-networkd[1211]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 9 07:42:05.524691 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Oct 9 07:42:05.543559 kernel: ACPI: button: Power Button [PWRF] Oct 9 07:42:05.548028 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 9 07:42:05.559566 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Oct 9 07:42:05.571300 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Oct 9 07:42:05.575634 kernel: mousedev: PS/2 mouse device common for all mice Oct 9 07:42:05.590854 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 9 07:42:05.607479 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Oct 9 07:42:05.607546 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Oct 9 07:42:05.609826 kernel: Console: switching to colour dummy device 80x25 Oct 9 07:42:05.610695 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Oct 9 07:42:05.610722 kernel: [drm] features: -context_init Oct 9 07:42:05.612634 kernel: [drm] number of scanouts: 1 Oct 9 07:42:05.612705 kernel: [drm] number of cap sets: 0 Oct 9 07:42:05.621600 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Oct 9 07:42:05.617318 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 9 07:42:05.617657 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 07:42:05.626646 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Oct 9 07:42:05.626705 kernel: Console: switching to colour frame buffer device 128x48 Oct 9 07:42:05.630572 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Oct 9 07:42:05.631920 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 9 07:42:05.639374 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 9 07:42:05.640459 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 07:42:05.646684 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 9 07:42:05.649750 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Oct 9 07:42:05.653654 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Oct 9 07:42:05.677437 lvm[1249]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 9 07:42:05.704335 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Oct 9 07:42:05.707184 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 9 07:42:05.718866 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Oct 9 07:42:05.724932 lvm[1254]: WARNING: Failed to connect to lvmetad. 
Falling back to device scanning. Oct 9 07:42:05.747506 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 07:42:05.751512 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Oct 9 07:42:05.755593 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 9 07:42:05.755765 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 9 07:42:05.755790 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 9 07:42:05.755917 systemd[1]: Reached target machines.target - Containers. Oct 9 07:42:05.758340 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Oct 9 07:42:05.766788 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 9 07:42:05.770515 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Oct 9 07:42:05.773647 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 9 07:42:05.781844 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 9 07:42:05.788860 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Oct 9 07:42:05.800882 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Oct 9 07:42:05.806467 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 9 07:42:05.818700 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Oct 9 07:42:05.835559 kernel: loop0: detected capacity change from 0 to 80568 Oct 9 07:42:05.841973 kernel: block loop0: the capability attribute has been deprecated. Oct 9 07:42:05.873456 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 9 07:42:05.874187 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Oct 9 07:42:05.932762 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 9 07:42:05.966589 kernel: loop1: detected capacity change from 0 to 211296 Oct 9 07:42:06.019517 kernel: loop2: detected capacity change from 0 to 8 Oct 9 07:42:06.049241 kernel: loop3: detected capacity change from 0 to 139904 Oct 9 07:42:06.134658 kernel: loop4: detected capacity change from 0 to 80568 Oct 9 07:42:06.175987 kernel: loop5: detected capacity change from 0 to 211296 Oct 9 07:42:06.228658 kernel: loop6: detected capacity change from 0 to 8 Oct 9 07:42:06.235371 kernel: loop7: detected capacity change from 0 to 139904 Oct 9 07:42:06.298010 (sd-merge)[1280]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Oct 9 07:42:06.299833 (sd-merge)[1280]: Merged extensions into '/usr'. Oct 9 07:42:06.317192 systemd[1]: Reloading requested from client PID 1266 ('systemd-sysext') (unit systemd-sysext.service)... Oct 9 07:42:06.317235 systemd[1]: Reloading... Oct 9 07:42:06.386798 zram_generator::config[1309]: No configuration found. Oct 9 07:42:06.561386 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Oct 9 07:42:06.587603 systemd-networkd[1211]: eth0: Gained IPv6LL Oct 9 07:42:06.634197 systemd[1]: Reloading finished in 315 ms. Oct 9 07:42:06.646322 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 9 07:42:06.650909 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Oct 9 07:42:06.662772 systemd[1]: Starting ensure-sysext.service... Oct 9 07:42:06.677823 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Oct 9 07:42:06.686694 systemd[1]: Reloading requested from client PID 1369 ('systemctl') (unit ensure-sysext.service)... Oct 9 07:42:06.686712 systemd[1]: Reloading... Oct 9 07:42:06.706340 systemd-tmpfiles[1370]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 9 07:42:06.706685 systemd-tmpfiles[1370]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Oct 9 07:42:06.707498 systemd-tmpfiles[1370]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 9 07:42:06.708896 systemd-tmpfiles[1370]: ACLs are not supported, ignoring. Oct 9 07:42:06.709037 systemd-tmpfiles[1370]: ACLs are not supported, ignoring. Oct 9 07:42:06.711295 ldconfig[1263]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 9 07:42:06.712974 systemd-tmpfiles[1370]: Detected autofs mount point /boot during canonicalization of boot. Oct 9 07:42:06.712988 systemd-tmpfiles[1370]: Skipping /boot Oct 9 07:42:06.723055 systemd-tmpfiles[1370]: Detected autofs mount point /boot during canonicalization of boot. Oct 9 07:42:06.723073 systemd-tmpfiles[1370]: Skipping /boot Oct 9 07:42:06.756599 zram_generator::config[1398]: No configuration found. Oct 9 07:42:06.899729 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 9 07:42:06.965540 systemd[1]: Reloading finished in 278 ms. Oct 9 07:42:06.980407 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Oct 9 07:42:06.983230 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Oct 9 07:42:06.995834 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Oct 9 07:42:07.000730 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Oct 9 07:42:07.009690 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Oct 9 07:42:07.025742 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 9 07:42:07.031704 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Oct 9 07:42:07.044426 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 9 07:42:07.044826 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 9 07:42:07.050926 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 9 07:42:07.062832 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 9 07:42:07.073067 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Oct 9 07:42:07.075343 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 9 07:42:07.075488 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 9 07:42:07.081905 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 9 07:42:07.082098 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 9 07:42:07.082251 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 9 07:42:07.082350 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 9 07:42:07.087149 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 9 07:42:07.087367 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 9 07:42:07.090724 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 9 07:42:07.093417 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 9 07:42:07.093607 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 9 07:42:07.094308 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 9 07:42:07.094466 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 9 07:42:07.099909 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 9 07:42:07.100093 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 9 07:42:07.106421 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 9 07:42:07.106913 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 9 07:42:07.118728 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 9 07:42:07.122231 systemd[1]: Finished ensure-sysext.service. Oct 9 07:42:07.131948 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 9 07:42:07.132115 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 9 07:42:07.137589 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 9 07:42:07.137839 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 9 07:42:07.145518 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Oct 9 07:42:07.158679 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 9 07:42:07.159682 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 9 07:42:07.173570 augenrules[1505]: No rules Oct 9 07:42:07.175945 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Oct 9 07:42:07.195480 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
Oct 9 07:42:07.199456 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 9 07:42:07.201705 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 9 07:42:07.219675 systemd-resolved[1467]: Positive Trust Anchors: Oct 9 07:42:07.219691 systemd-resolved[1467]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 9 07:42:07.219732 systemd-resolved[1467]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Oct 9 07:42:07.224770 systemd-resolved[1467]: Using system hostname 'ci-3975-2-2-3-e7db599e29.novalocal'. Oct 9 07:42:07.226314 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 9 07:42:07.228279 systemd[1]: Reached target network.target - Network. Oct 9 07:42:07.228721 systemd[1]: Reached target network-online.target - Network is Online. Oct 9 07:42:07.229089 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 9 07:42:07.247919 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Oct 9 07:42:07.248465 systemd[1]: Reached target sysinit.target - System Initialization. Oct 9 07:42:07.248948 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Oct 9 07:42:07.249380 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 9 07:42:07.252174 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 9 07:42:07.252659 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 9 07:42:07.252696 systemd[1]: Reached target paths.target - Path Units. Oct 9 07:42:07.253090 systemd[1]: Reached target time-set.target - System Time Set. Oct 9 07:42:07.254725 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 9 07:42:07.255264 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 9 07:42:07.255654 systemd[1]: Reached target timers.target - Timer Units. Oct 9 07:42:07.257817 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 9 07:42:07.261205 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 9 07:42:07.264826 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 9 07:42:07.270950 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 9 07:42:07.271633 systemd[1]: Reached target sockets.target - Socket Units. Oct 9 07:42:07.272084 systemd[1]: Reached target basic.target - Basic System. Oct 9 07:42:07.273758 systemd[1]: System is tainted: cgroupsv1 Oct 9 07:42:07.273796 systemd-timesyncd[1499]: Contacted time server 162.159.200.123:123 (0.flatcar.pool.ntp.org). 
Oct 9 07:42:07.273827 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 9 07:42:07.273848 systemd-timesyncd[1499]: Initial clock synchronization to Wed 2024-10-09 07:42:07.326033 UTC. Oct 9 07:42:07.273852 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 9 07:42:07.280614 systemd[1]: Starting containerd.service - containerd container runtime... Oct 9 07:42:07.284683 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Oct 9 07:42:07.293759 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 9 07:42:07.297137 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 9 07:42:07.306736 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 9 07:42:07.309567 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 9 07:42:07.311723 jq[1523]: false Oct 9 07:42:07.321733 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 07:42:07.331676 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 9 07:42:07.335732 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 9 07:42:07.337494 dbus-daemon[1522]: [system] SELinux support is enabled Oct 9 07:42:07.348718 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 9 07:42:07.356715 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 9 07:42:07.366790 extend-filesystems[1526]: Found loop4 Oct 9 07:42:07.366790 extend-filesystems[1526]: Found loop5 Oct 9 07:42:07.366790 extend-filesystems[1526]: Found loop6 Oct 9 07:42:07.366790 extend-filesystems[1526]: Found loop7 Oct 9 07:42:07.366790 extend-filesystems[1526]: Found vda Oct 9 07:42:07.366790 extend-filesystems[1526]: Found vda1 Oct 9 07:42:07.381487 extend-filesystems[1526]: Found vda2 Oct 9 07:42:07.381487 extend-filesystems[1526]: Found vda3 Oct 9 07:42:07.381487 extend-filesystems[1526]: Found usr Oct 9 07:42:07.381487 extend-filesystems[1526]: Found vda4 Oct 9 07:42:07.381487 extend-filesystems[1526]: Found vda6 Oct 9 07:42:07.381487 extend-filesystems[1526]: Found vda7 Oct 9 07:42:07.381487 extend-filesystems[1526]: Found vda9 Oct 9 07:42:07.381487 extend-filesystems[1526]: Checking size of /dev/vda9 Oct 9 07:42:07.368706 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 9 07:42:07.390632 systemd[1]: Starting systemd-logind.service - User Login Management... Oct 9 07:42:07.392047 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 9 07:42:07.403667 systemd[1]: Starting update-engine.service - Update Engine... Oct 9 07:42:07.414661 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 9 07:42:07.422968 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 9 07:42:07.428601 extend-filesystems[1526]: Resized partition /dev/vda9 Oct 9 07:42:07.441890 jq[1554]: true Oct 9 07:42:07.452792 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 4635643 blocks Oct 9 07:42:07.436747 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Oct 9 07:42:07.452957 extend-filesystems[1558]: resize2fs 1.47.0 (5-Feb-2023) Oct 9 07:42:07.437039 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 9 07:42:07.453794 systemd[1]: motdgen.service: Deactivated successfully. Oct 9 07:42:07.454092 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 9 07:42:07.461825 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 9 07:42:07.467960 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 9 07:42:07.468191 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 9 07:42:07.479545 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1218) Oct 9 07:42:07.480987 update_engine[1545]: I1009 07:42:07.480650 1545 main.cc:92] Flatcar Update Engine starting Oct 9 07:42:07.482264 update_engine[1545]: I1009 07:42:07.482241 1545 update_check_scheduler.cc:74] Next update check in 4m41s Oct 9 07:42:07.501867 (ntainerd)[1575]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Oct 9 07:42:07.510142 jq[1574]: true Oct 9 07:42:07.545212 systemd[1]: Started update-engine.service - Update Engine. Oct 9 07:42:07.554470 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 9 07:42:07.554879 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Oct 9 07:42:07.559669 tar[1572]: linux-amd64/helm Oct 9 07:42:07.560479 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 9 07:42:07.560510 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 9 07:42:07.561724 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 9 07:42:07.571302 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 9 07:42:07.674940 systemd-logind[1543]: New seat seat0. Oct 9 07:42:07.699024 systemd-logind[1543]: Watching system buttons on /dev/input/event1 (Power Button) Oct 9 07:42:07.699050 systemd-logind[1543]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Oct 9 07:42:07.699300 systemd[1]: Started systemd-logind.service - User Login Management. Oct 9 07:42:07.729028 kernel: EXT4-fs (vda9): resized filesystem to 4635643 Oct 9 07:42:07.740027 locksmithd[1594]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 9 07:42:07.813968 extend-filesystems[1558]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Oct 9 07:42:07.813968 extend-filesystems[1558]: old_desc_blocks = 1, new_desc_blocks = 3 Oct 9 07:42:07.813968 extend-filesystems[1558]: The filesystem on /dev/vda9 is now 4635643 (4k) blocks long. Oct 9 07:42:07.817177 extend-filesystems[1526]: Resized filesystem in /dev/vda9 Oct 9 07:42:07.817927 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 9 07:42:07.818262 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Oct 9 07:42:07.828007 bash[1604]: Updated "/home/core/.ssh/authorized_keys" Oct 9 07:42:07.829336 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 9 07:42:07.843768 systemd[1]: Starting sshkeys.service... Oct 9 07:42:07.886822 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Oct 9 07:42:07.896341 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Oct 9 07:42:07.951149 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 9 07:42:08.103932 containerd[1575]: time="2024-10-09T07:42:08.103817297Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17 Oct 9 07:42:08.160865 containerd[1575]: time="2024-10-09T07:42:08.160805043Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Oct 9 07:42:08.161318 containerd[1575]: time="2024-10-09T07:42:08.161032347Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Oct 9 07:42:08.164587 containerd[1575]: time="2024-10-09T07:42:08.164472254Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.54-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 9 07:42:08.164823 containerd[1575]: time="2024-10-09T07:42:08.164681507Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Oct 9 07:42:08.165908 containerd[1575]: time="2024-10-09T07:42:08.165884743Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 9 07:42:08.166488 containerd[1575]: time="2024-10-09T07:42:08.165970371Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Oct 9 07:42:08.166488 containerd[1575]: time="2024-10-09T07:42:08.166066516Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Oct 9 07:42:08.166488 containerd[1575]: time="2024-10-09T07:42:08.166126629Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Oct 9 07:42:08.166488 containerd[1575]: time="2024-10-09T07:42:08.166141957Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 9 07:42:08.166488 containerd[1575]: time="2024-10-09T07:42:08.166212246Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Oct 9 07:42:08.166488 containerd[1575]: time="2024-10-09T07:42:08.166399827Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Oct 9 07:42:08.166488 containerd[1575]: time="2024-10-09T07:42:08.166418836Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Oct 9 07:42:08.166488 containerd[1575]: time="2024-10-09T07:42:08.166430231Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Oct 9 07:42:08.166937 containerd[1575]: time="2024-10-09T07:42:08.166916947Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 9 07:42:08.167044 containerd[1575]: time="2024-10-09T07:42:08.167028936Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Oct 9 07:42:08.167152 containerd[1575]: time="2024-10-09T07:42:08.167134802Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Oct 9 07:42:08.168140 containerd[1575]: time="2024-10-09T07:42:08.167988330Z" level=info msg="metadata content store policy set" policy=shared Oct 9 07:42:08.189647 containerd[1575]: time="2024-10-09T07:42:08.189608257Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Oct 9 07:42:08.189790 containerd[1575]: time="2024-10-09T07:42:08.189759776Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Oct 9 07:42:08.192550 containerd[1575]: time="2024-10-09T07:42:08.190920416Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 9 07:42:08.192550 containerd[1575]: time="2024-10-09T07:42:08.190973823Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Oct 9 07:42:08.192550 containerd[1575]: time="2024-10-09T07:42:08.190994859Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Oct 9 07:42:08.192550 containerd[1575]: time="2024-10-09T07:42:08.191007656Z" level=info msg="NRI interface is disabled by configuration." Oct 9 07:42:08.192550 containerd[1575]: time="2024-10-09T07:42:08.191021855Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Oct 9 07:42:08.192550 containerd[1575]: time="2024-10-09T07:42:08.191167283Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Oct 9 07:42:08.192550 containerd[1575]: time="2024-10-09T07:42:08.191187240Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Oct 9 07:42:08.192550 containerd[1575]: time="2024-10-09T07:42:08.191202457Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Oct 9 07:42:08.192550 containerd[1575]: time="2024-10-09T07:42:08.191218845Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Oct 9 07:42:08.192550 containerd[1575]: time="2024-10-09T07:42:08.191236291Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Oct 9 07:42:08.192550 containerd[1575]: time="2024-10-09T07:42:08.191255925Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Oct 9 07:42:08.192550 containerd[1575]: time="2024-10-09T07:42:08.191271667Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Oct 9 07:42:08.192550 containerd[1575]: time="2024-10-09T07:42:08.191287177Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Oct 9 07:42:08.192550 containerd[1575]: time="2024-10-09T07:42:08.191304250Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Oct 9 07:42:08.192865 containerd[1575]: time="2024-10-09T07:42:08.191320385Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Oct 9 07:42:08.192865 containerd[1575]: time="2024-10-09T07:42:08.191337276Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Oct 9 07:42:08.192865 containerd[1575]: time="2024-10-09T07:42:08.191350326Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Oct 9 07:42:08.192865 containerd[1575]: time="2024-10-09T07:42:08.191457292Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Oct 9 07:42:08.192865 containerd[1575]: time="2024-10-09T07:42:08.191835953Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Oct 9 07:42:08.192865 containerd[1575]: time="2024-10-09T07:42:08.191863857Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Oct 9 07:42:08.192865 containerd[1575]: time="2024-10-09T07:42:08.191878751Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Oct 9 07:42:08.192865 containerd[1575]: time="2024-10-09T07:42:08.191902167Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Oct 9 07:42:08.192865 containerd[1575]: time="2024-10-09T07:42:08.191975804Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Oct 9 07:42:08.192865 containerd[1575]: time="2024-10-09T07:42:08.191996779Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Oct 9 07:42:08.192865 containerd[1575]: time="2024-10-09T07:42:08.192010111Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 9 07:42:08.192865 containerd[1575]: time="2024-10-09T07:42:08.192023796Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Oct 9 07:42:08.192865 containerd[1575]: time="2024-10-09T07:42:08.192037551Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 9 07:42:08.192865 containerd[1575]: time="2024-10-09T07:42:08.192051669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Oct 9 07:42:08.193136 containerd[1575]: time="2024-10-09T07:42:08.192064648Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Oct 9 07:42:08.193136 containerd[1575]: time="2024-10-09T07:42:08.192077717Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Oct 9 07:42:08.193136 containerd[1575]: time="2024-10-09T07:42:08.192091835Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Oct 9 07:42:08.193136 containerd[1575]: time="2024-10-09T07:42:08.192223256Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Oct 9 07:42:08.193136 containerd[1575]: time="2024-10-09T07:42:08.192243980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Oct 9 07:42:08.193136 containerd[1575]: time="2024-10-09T07:42:08.192259348Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Oct 9 07:42:08.193136 containerd[1575]: time="2024-10-09T07:42:08.192274223Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Oct 9 07:42:08.193136 containerd[1575]: time="2024-10-09T07:42:08.192288754Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Oct 9 07:42:08.193136 containerd[1575]: time="2024-10-09T07:42:08.192302944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Oct 9 07:42:08.193136 containerd[1575]: time="2024-10-09T07:42:08.192316003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Oct 9 07:42:08.193136 containerd[1575]: time="2024-10-09T07:42:08.192329032Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Oct 9 07:42:08.194170 containerd[1575]: time="2024-10-09T07:42:08.194093065Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false 
MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 9 07:42:08.195342 containerd[1575]: time="2024-10-09T07:42:08.194334628Z" level=info msg="Connect containerd service" Oct 9 07:42:08.195342 containerd[1575]: time="2024-10-09T07:42:08.194370508Z" level=info msg="using legacy CRI server" Oct 9 07:42:08.195342 containerd[1575]: time="2024-10-09T07:42:08.194379312Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 9 07:42:08.195467 containerd[1575]: time="2024-10-09T07:42:08.195447437Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 9 07:42:08.196381 containerd[1575]: time="2024-10-09T07:42:08.196113616Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 9 07:42:08.196381 containerd[1575]: time="2024-10-09T07:42:08.196153913Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Oct 9 07:42:08.196381 containerd[1575]: time="2024-10-09T07:42:08.196174375Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Oct 9 07:42:08.196381 containerd[1575]: time="2024-10-09T07:42:08.196189008Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 9 07:42:08.196381 containerd[1575]: time="2024-10-09T07:42:08.196203620Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Oct 9 07:42:08.197306 containerd[1575]: time="2024-10-09T07:42:08.197286125Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 9 07:42:08.197417 containerd[1575]: time="2024-10-09T07:42:08.197402298Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Oct 9 07:42:08.197548 containerd[1575]: time="2024-10-09T07:42:08.197501166Z" level=info msg="Start subscribing containerd event" Oct 9 07:42:08.197619 containerd[1575]: time="2024-10-09T07:42:08.197605822Z" level=info msg="Start recovering state" Oct 9 07:42:08.197729 containerd[1575]: time="2024-10-09T07:42:08.197715905Z" level=info msg="Start event monitor" Oct 9 07:42:08.198134 containerd[1575]: time="2024-10-09T07:42:08.197798718Z" level=info msg="Start snapshots syncer" Oct 9 07:42:08.198134 containerd[1575]: time="2024-10-09T07:42:08.197814228Z" level=info msg="Start cni network conf syncer for default" Oct 9 07:42:08.198134 containerd[1575]: time="2024-10-09T07:42:08.197835304Z" level=info msg="Start streaming server" Oct 9 07:42:08.197980 systemd[1]: Started containerd.service - containerd container runtime. Oct 9 07:42:08.201976 containerd[1575]: time="2024-10-09T07:42:08.201945022Z" level=info msg="containerd successfully booted in 0.102413s" Oct 9 07:42:08.451907 sshd_keygen[1573]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 9 07:42:08.490632 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 9 07:42:08.499986 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 9 07:42:08.504897 systemd[1]: Started sshd@0-172.24.4.70:22-172.24.4.1:51434.service - OpenSSH per-connection server daemon (172.24.4.1:51434). Oct 9 07:42:08.523762 tar[1572]: linux-amd64/LICENSE Oct 9 07:42:08.523762 tar[1572]: linux-amd64/README.md Oct 9 07:42:08.534598 systemd[1]: issuegen.service: Deactivated successfully. Oct 9 07:42:08.534822 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 9 07:42:08.539416 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 9 07:42:08.553860 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 9 07:42:08.564590 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 9 07:42:08.574622 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 9 07:42:08.578843 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Oct 9 07:42:08.583959 systemd[1]: Reached target getty.target - Login Prompts. Oct 9 07:42:09.451849 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 07:42:09.468443 (kubelet)[1663]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 9 07:42:10.043700 sshd[1638]: Accepted publickey for core from 172.24.4.1 port 51434 ssh2: RSA SHA256:iTqmmSA9RcIkWmF7myyFAqWL8kdaKMdVpBUk8UaNQPM Oct 9 07:42:10.048484 sshd[1638]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:42:10.073004 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 9 07:42:10.083261 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 9 07:42:10.090448 systemd-logind[1543]: New session 1 of user core. Oct 9 07:42:10.107701 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 9 07:42:10.125810 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 9 07:42:10.142382 (systemd)[1673]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:42:10.289356 systemd[1673]: Queued start job for default target default.target. Oct 9 07:42:10.289759 systemd[1673]: Created slice app.slice - User Application Slice. 
Oct 9 07:42:10.289781 systemd[1673]: Reached target paths.target - Paths. Oct 9 07:42:10.289798 systemd[1673]: Reached target timers.target - Timers. Oct 9 07:42:10.293609 systemd[1673]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 9 07:42:10.317039 systemd[1673]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 9 07:42:10.317181 systemd[1673]: Reached target sockets.target - Sockets. Oct 9 07:42:10.318803 systemd[1673]: Reached target basic.target - Basic System. Oct 9 07:42:10.318856 systemd[1673]: Reached target default.target - Main User Target. Oct 9 07:42:10.318899 systemd[1673]: Startup finished in 162ms. Oct 9 07:42:10.318957 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 9 07:42:10.327889 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 9 07:42:10.806566 kubelet[1663]: E1009 07:42:10.806284 1663 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 9 07:42:10.812103 systemd[1]: Started sshd@1-172.24.4.70:22-172.24.4.1:51450.service - OpenSSH per-connection server daemon (172.24.4.1:51450). Oct 9 07:42:10.813262 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 9 07:42:10.813398 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 9 07:42:13.521002 sshd[1687]: Accepted publickey for core from 172.24.4.1 port 51450 ssh2: RSA SHA256:iTqmmSA9RcIkWmF7myyFAqWL8kdaKMdVpBUk8UaNQPM Oct 9 07:42:13.523654 sshd[1687]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:42:13.536073 systemd-logind[1543]: New session 2 of user core. Oct 9 07:42:13.547170 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 9 07:42:13.628113 login[1652]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Oct 9 07:42:13.629693 login[1653]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Oct 9 07:42:13.640641 systemd-logind[1543]: New session 4 of user core. Oct 9 07:42:13.650977 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 9 07:42:13.658825 systemd-logind[1543]: New session 3 of user core. Oct 9 07:42:13.671292 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 9 07:42:14.372096 coreos-metadata[1521]: Oct 09 07:42:14.371 WARN failed to locate config-drive, using the metadata service API instead Oct 9 07:42:14.421303 coreos-metadata[1521]: Oct 09 07:42:14.421 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Oct 9 07:42:14.443912 sshd[1687]: pam_unix(sshd:session): session closed for user core Oct 9 07:42:14.451780 systemd[1]: sshd@1-172.24.4.70:22-172.24.4.1:51450.service: Deactivated successfully. Oct 9 07:42:14.457861 systemd[1]: session-2.scope: Deactivated successfully. Oct 9 07:42:14.460374 systemd-logind[1543]: Session 2 logged out. Waiting for processes to exit. Oct 9 07:42:14.472246 systemd[1]: Started sshd@2-172.24.4.70:22-172.24.4.1:51452.service - OpenSSH per-connection server daemon (172.24.4.1:51452). Oct 9 07:42:14.475779 systemd-logind[1543]: Removed session 2. 
Oct 9 07:42:14.643809 coreos-metadata[1521]: Oct 09 07:42:14.643 INFO Fetch successful Oct 9 07:42:14.643809 coreos-metadata[1521]: Oct 09 07:42:14.643 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Oct 9 07:42:14.659020 coreos-metadata[1521]: Oct 09 07:42:14.658 INFO Fetch successful Oct 9 07:42:14.659020 coreos-metadata[1521]: Oct 09 07:42:14.658 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Oct 9 07:42:14.677495 coreos-metadata[1521]: Oct 09 07:42:14.677 INFO Fetch successful Oct 9 07:42:14.677495 coreos-metadata[1521]: Oct 09 07:42:14.677 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Oct 9 07:42:14.696144 coreos-metadata[1521]: Oct 09 07:42:14.695 INFO Fetch successful Oct 9 07:42:14.696144 coreos-metadata[1521]: Oct 09 07:42:14.695 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Oct 9 07:42:14.711501 coreos-metadata[1521]: Oct 09 07:42:14.711 INFO Fetch successful Oct 9 07:42:14.711501 coreos-metadata[1521]: Oct 09 07:42:14.711 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Oct 9 07:42:14.722463 coreos-metadata[1521]: Oct 09 07:42:14.722 INFO Fetch successful Oct 9 07:42:14.771402 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Oct 9 07:42:14.774437 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 9 07:42:14.991058 coreos-metadata[1620]: Oct 09 07:42:14.990 WARN failed to locate config-drive, using the metadata service API instead Oct 9 07:42:15.034824 coreos-metadata[1620]: Oct 09 07:42:15.034 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Oct 9 07:42:15.052878 coreos-metadata[1620]: Oct 09 07:42:15.052 INFO Fetch successful Oct 9 07:42:15.052878 coreos-metadata[1620]: Oct 09 07:42:15.052 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Oct 9 07:42:15.067628 coreos-metadata[1620]: Oct 09 07:42:15.067 INFO Fetch successful Oct 9 07:42:15.072921 unknown[1620]: wrote ssh authorized keys file for user: core Oct 9 07:42:15.105246 update-ssh-keys[1732]: Updated "/home/core/.ssh/authorized_keys" Oct 9 07:42:15.110681 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Oct 9 07:42:15.116269 systemd[1]: Finished sshkeys.service. Oct 9 07:42:15.127458 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 9 07:42:15.128117 systemd[1]: Startup finished in 16.352s (kernel) + 12.226s (userspace) = 28.579s. Oct 9 07:42:15.694103 sshd[1720]: Accepted publickey for core from 172.24.4.1 port 51452 ssh2: RSA SHA256:iTqmmSA9RcIkWmF7myyFAqWL8kdaKMdVpBUk8UaNQPM Oct 9 07:42:15.696603 sshd[1720]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:42:15.707914 systemd-logind[1543]: New session 5 of user core. Oct 9 07:42:15.711034 systemd[1]: Started session-5.scope - Session 5 of User core. Oct 9 07:42:16.336054 sshd[1720]: pam_unix(sshd:session): session closed for user core Oct 9 07:42:16.342740 systemd-logind[1543]: Session 5 logged out. Waiting for processes to exit. Oct 9 07:42:16.343677 systemd[1]: sshd@2-172.24.4.70:22-172.24.4.1:51452.service: Deactivated successfully. Oct 9 07:42:16.351062 systemd[1]: session-5.scope: Deactivated successfully. Oct 9 07:42:16.354237 systemd-logind[1543]: Removed session 5. 
Oct 9 07:42:21.064499 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 9 07:42:21.070884 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 07:42:21.506765 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 07:42:21.512978 (kubelet)[1757]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 9 07:42:22.215776 kubelet[1757]: E1009 07:42:22.215631 1757 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 9 07:42:22.226916 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 9 07:42:22.227336 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 9 07:42:26.362113 systemd[1]: Started sshd@3-172.24.4.70:22-172.24.4.1:44454.service - OpenSSH per-connection server daemon (172.24.4.1:44454). Oct 9 07:42:27.700653 sshd[1768]: Accepted publickey for core from 172.24.4.1 port 44454 ssh2: RSA SHA256:iTqmmSA9RcIkWmF7myyFAqWL8kdaKMdVpBUk8UaNQPM Oct 9 07:42:27.702753 sshd[1768]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:42:27.709142 systemd-logind[1543]: New session 6 of user core. Oct 9 07:42:27.717110 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 9 07:42:28.356413 sshd[1768]: pam_unix(sshd:session): session closed for user core Oct 9 07:42:28.367044 systemd[1]: Started sshd@4-172.24.4.70:22-172.24.4.1:44470.service - OpenSSH per-connection server daemon (172.24.4.1:44470). Oct 9 07:42:28.367973 systemd[1]: sshd@3-172.24.4.70:22-172.24.4.1:44454.service: Deactivated successfully. Oct 9 07:42:28.378277 systemd[1]: session-6.scope: Deactivated successfully. Oct 9 07:42:28.379972 systemd-logind[1543]: Session 6 logged out. Waiting for processes to exit. Oct 9 07:42:28.383660 systemd-logind[1543]: Removed session 6. Oct 9 07:42:29.932611 sshd[1773]: Accepted publickey for core from 172.24.4.1 port 44470 ssh2: RSA SHA256:iTqmmSA9RcIkWmF7myyFAqWL8kdaKMdVpBUk8UaNQPM Oct 9 07:42:29.934661 sshd[1773]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:42:29.942109 systemd-logind[1543]: New session 7 of user core. Oct 9 07:42:29.951960 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 9 07:42:30.645992 sshd[1773]: pam_unix(sshd:session): session closed for user core Oct 9 07:42:30.659108 systemd[1]: Started sshd@5-172.24.4.70:22-172.24.4.1:44482.service - OpenSSH per-connection server daemon (172.24.4.1:44482). Oct 9 07:42:30.660160 systemd[1]: sshd@4-172.24.4.70:22-172.24.4.1:44470.service: Deactivated successfully. Oct 9 07:42:30.674379 systemd[1]: session-7.scope: Deactivated successfully. Oct 9 07:42:30.676734 systemd-logind[1543]: Session 7 logged out. Waiting for processes to exit. Oct 9 07:42:30.680786 systemd-logind[1543]: Removed session 7. Oct 9 07:42:32.184156 sshd[1781]: Accepted publickey for core from 172.24.4.1 port 44482 ssh2: RSA SHA256:iTqmmSA9RcIkWmF7myyFAqWL8kdaKMdVpBUk8UaNQPM Oct 9 07:42:32.187202 sshd[1781]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:42:32.197949 systemd-logind[1543]: New session 8 of user core. 
Oct 9 07:42:32.212586 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 9 07:42:32.261369 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Oct 9 07:42:32.270815 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 07:42:32.634816 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 07:42:32.653080 (kubelet)[1800]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 9 07:42:32.825914 kubelet[1800]: E1009 07:42:32.825815 1800 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 9 07:42:32.829979 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 9 07:42:32.830298 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 9 07:42:32.912220 sshd[1781]: pam_unix(sshd:session): session closed for user core Oct 9 07:42:32.924267 systemd[1]: Started sshd@6-172.24.4.70:22-172.24.4.1:44484.service - OpenSSH per-connection server daemon (172.24.4.1:44484). Oct 9 07:42:32.927224 systemd[1]: sshd@5-172.24.4.70:22-172.24.4.1:44482.service: Deactivated successfully. Oct 9 07:42:32.939029 systemd[1]: session-8.scope: Deactivated successfully. Oct 9 07:42:32.941085 systemd-logind[1543]: Session 8 logged out. Waiting for processes to exit. Oct 9 07:42:32.946259 systemd-logind[1543]: Removed session 8. Oct 9 07:42:34.305514 sshd[1811]: Accepted publickey for core from 172.24.4.1 port 44484 ssh2: RSA SHA256:iTqmmSA9RcIkWmF7myyFAqWL8kdaKMdVpBUk8UaNQPM Oct 9 07:42:34.308281 sshd[1811]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:42:34.319261 systemd-logind[1543]: New session 9 of user core. Oct 9 07:42:34.326137 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 9 07:42:34.781064 sudo[1817]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 9 07:42:34.781756 sudo[1817]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 9 07:42:34.808075 sudo[1817]: pam_unix(sudo:session): session closed for user root Oct 9 07:42:35.037941 sshd[1811]: pam_unix(sshd:session): session closed for user core Oct 9 07:42:35.051103 systemd[1]: Started sshd@7-172.24.4.70:22-172.24.4.1:39276.service - OpenSSH per-connection server daemon (172.24.4.1:39276). Oct 9 07:42:35.052166 systemd[1]: sshd@6-172.24.4.70:22-172.24.4.1:44484.service: Deactivated successfully. Oct 9 07:42:35.068265 systemd[1]: session-9.scope: Deactivated successfully. Oct 9 07:42:35.069651 systemd-logind[1543]: Session 9 logged out. Waiting for processes to exit. Oct 9 07:42:35.073791 systemd-logind[1543]: Removed session 9. Oct 9 07:42:36.403034 sshd[1819]: Accepted publickey for core from 172.24.4.1 port 39276 ssh2: RSA SHA256:iTqmmSA9RcIkWmF7myyFAqWL8kdaKMdVpBUk8UaNQPM Oct 9 07:42:36.405737 sshd[1819]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:42:36.415288 systemd-logind[1543]: New session 10 of user core. Oct 9 07:42:36.427142 systemd[1]: Started session-10.scope - Session 10 of User core. 
Oct 9 07:42:36.924813 sudo[1827]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 9 07:42:36.925423 sudo[1827]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 9 07:42:36.932118 sudo[1827]: pam_unix(sudo:session): session closed for user root Oct 9 07:42:36.942389 sudo[1826]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Oct 9 07:42:36.942954 sudo[1826]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 9 07:42:36.967026 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Oct 9 07:42:36.983901 auditctl[1830]: No rules Oct 9 07:42:36.984667 systemd[1]: audit-rules.service: Deactivated successfully. Oct 9 07:42:36.985169 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Oct 9 07:42:37.002364 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Oct 9 07:42:37.049169 augenrules[1849]: No rules Oct 9 07:42:37.051193 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Oct 9 07:42:37.054804 sudo[1826]: pam_unix(sudo:session): session closed for user root Oct 9 07:42:37.280873 sshd[1819]: pam_unix(sshd:session): session closed for user core Oct 9 07:42:37.292132 systemd[1]: Started sshd@8-172.24.4.70:22-172.24.4.1:39278.service - OpenSSH per-connection server daemon (172.24.4.1:39278). Oct 9 07:42:37.293883 systemd[1]: sshd@7-172.24.4.70:22-172.24.4.1:39276.service: Deactivated successfully. Oct 9 07:42:37.298993 systemd[1]: session-10.scope: Deactivated successfully. Oct 9 07:42:37.301241 systemd-logind[1543]: Session 10 logged out. Waiting for processes to exit. Oct 9 07:42:37.307257 systemd-logind[1543]: Removed session 10. Oct 9 07:42:38.705730 sshd[1855]: Accepted publickey for core from 172.24.4.1 port 39278 ssh2: RSA SHA256:iTqmmSA9RcIkWmF7myyFAqWL8kdaKMdVpBUk8UaNQPM Oct 9 07:42:38.708232 sshd[1855]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:42:38.717223 systemd-logind[1543]: New session 11 of user core. Oct 9 07:42:38.726113 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 9 07:42:39.155651 sudo[1862]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 9 07:42:39.156260 sudo[1862]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 9 07:42:39.393999 (dockerd)[1871]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 9 07:42:39.394047 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 9 07:42:39.914584 dockerd[1871]: time="2024-10-09T07:42:39.913829571Z" level=info msg="Starting up" Oct 9 07:42:39.959889 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2637726341-merged.mount: Deactivated successfully. Oct 9 07:42:40.319782 dockerd[1871]: time="2024-10-09T07:42:40.319725285Z" level=info msg="Loading containers: start." Oct 9 07:42:40.509559 kernel: Initializing XFRM netlink socket Oct 9 07:42:40.613518 systemd-networkd[1211]: docker0: Link UP Oct 9 07:42:40.634303 dockerd[1871]: time="2024-10-09T07:42:40.633950339Z" level=info msg="Loading containers: done." Oct 9 07:42:40.762173 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3578070682-merged.mount: Deactivated successfully. 
Oct 9 07:42:40.767571 dockerd[1871]: time="2024-10-09T07:42:40.766519798Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 9 07:42:40.767571 dockerd[1871]: time="2024-10-09T07:42:40.766930019Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Oct 9 07:42:40.767571 dockerd[1871]: time="2024-10-09T07:42:40.767173630Z" level=info msg="Daemon has completed initialization" Oct 9 07:42:40.819026 dockerd[1871]: time="2024-10-09T07:42:40.818830090Z" level=info msg="API listen on /run/docker.sock" Oct 9 07:42:40.818931 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 9 07:42:42.405984 containerd[1575]: time="2024-10-09T07:42:42.405822762Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.9\"" Oct 9 07:42:43.012384 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Oct 9 07:42:43.021924 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 07:42:43.159801 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 07:42:43.162340 (kubelet)[2018]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 9 07:42:43.215388 kubelet[2018]: E1009 07:42:43.215333 2018 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 9 07:42:43.217263 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 9 07:42:43.217409 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 9 07:42:44.087215 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount157311779.mount: Deactivated successfully. 
Oct 9 07:42:46.371824 containerd[1575]: time="2024-10-09T07:42:46.371766823Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:42:46.373164 containerd[1575]: time="2024-10-09T07:42:46.372998071Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.9: active requests=0, bytes read=35213849" Oct 9 07:42:46.374217 containerd[1575]: time="2024-10-09T07:42:46.374164785Z" level=info msg="ImageCreate event name:\"sha256:bc1ec5c2b6c60a3b18e7f54a99f0452c038400ecaaa2576931fd5342a0586abb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:42:46.377410 containerd[1575]: time="2024-10-09T07:42:46.377368850Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b88538e7fdf73583c8670540eec5b3620af75c9ec200434a5815ee7fba5021f3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:42:46.378713 containerd[1575]: time="2024-10-09T07:42:46.378502511Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.9\" with image id \"sha256:bc1ec5c2b6c60a3b18e7f54a99f0452c038400ecaaa2576931fd5342a0586abb\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b88538e7fdf73583c8670540eec5b3620af75c9ec200434a5815ee7fba5021f3\", size \"35210641\" in 3.972627418s" Oct 9 07:42:46.378713 containerd[1575]: time="2024-10-09T07:42:46.378558468Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.9\" returns image reference \"sha256:bc1ec5c2b6c60a3b18e7f54a99f0452c038400ecaaa2576931fd5342a0586abb\"" Oct 9 07:42:46.399953 containerd[1575]: time="2024-10-09T07:42:46.399907827Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.9\"" Oct 9 07:42:48.884190 containerd[1575]: time="2024-10-09T07:42:48.884016004Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:42:48.887148 containerd[1575]: time="2024-10-09T07:42:48.887039552Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.9: active requests=0, bytes read=32208681" Oct 9 07:42:48.888807 containerd[1575]: time="2024-10-09T07:42:48.888671941Z" level=info msg="ImageCreate event name:\"sha256:5abda0d0a9153cd1f90fd828be379f7a16a6c814e6efbbbf31e247e13c3843e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:42:48.896947 containerd[1575]: time="2024-10-09T07:42:48.896846912Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f2f18973ccb6996687d10ba5bd1b8f303e3dd2fed80f831a44d2ac8191e5bb9b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:42:48.900168 containerd[1575]: time="2024-10-09T07:42:48.900098193Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.9\" with image id \"sha256:5abda0d0a9153cd1f90fd828be379f7a16a6c814e6efbbbf31e247e13c3843e5\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f2f18973ccb6996687d10ba5bd1b8f303e3dd2fed80f831a44d2ac8191e5bb9b\", size \"33739229\" in 2.500123969s" Oct 9 07:42:48.900574 containerd[1575]: time="2024-10-09T07:42:48.900323684Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.9\" returns image reference \"sha256:5abda0d0a9153cd1f90fd828be379f7a16a6c814e6efbbbf31e247e13c3843e5\"" Oct 9 07:42:48.951246 containerd[1575]: 
time="2024-10-09T07:42:48.951068355Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.9\"" Oct 9 07:42:50.953663 containerd[1575]: time="2024-10-09T07:42:50.953433288Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:42:50.956699 containerd[1575]: time="2024-10-09T07:42:50.956589326Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.9: active requests=0, bytes read=17320464" Oct 9 07:42:50.958497 containerd[1575]: time="2024-10-09T07:42:50.958394039Z" level=info msg="ImageCreate event name:\"sha256:059957505b3370d4c57d793e79cc70f9063d7ab75767f7040f5cc85572fe7e8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:42:50.967157 containerd[1575]: time="2024-10-09T07:42:50.967026804Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c164076eebaefdaebad46a5ccd550e9f38c63588c02d35163c6a09e164ab8a8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:42:50.970646 containerd[1575]: time="2024-10-09T07:42:50.970309966Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.9\" with image id \"sha256:059957505b3370d4c57d793e79cc70f9063d7ab75767f7040f5cc85572fe7e8d\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9c164076eebaefdaebad46a5ccd550e9f38c63588c02d35163c6a09e164ab8a8\", size \"18851030\" in 2.019164493s" Oct 9 07:42:50.970646 containerd[1575]: time="2024-10-09T07:42:50.970402202Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.9\" returns image reference \"sha256:059957505b3370d4c57d793e79cc70f9063d7ab75767f7040f5cc85572fe7e8d\"" Oct 9 07:42:51.025322 containerd[1575]: time="2024-10-09T07:42:51.025224561Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.9\"" Oct 9 07:42:52.519085 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount946419831.mount: Deactivated successfully. Oct 9 07:42:52.561700 update_engine[1545]: I1009 07:42:52.561618 1545 update_attempter.cc:509] Updating boot flags... Oct 9 07:42:52.621290 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2119) Oct 9 07:42:52.681016 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2120) Oct 9 07:42:52.734551 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2120) Oct 9 07:42:53.262079 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Oct 9 07:42:53.270919 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 07:42:54.077900 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 9 07:42:54.096333 (kubelet)[2139]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 9 07:42:54.183305 containerd[1575]: time="2024-10-09T07:42:54.182953291Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:42:54.185604 containerd[1575]: time="2024-10-09T07:42:54.185415653Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.9: active requests=0, bytes read=28601758" Oct 9 07:42:54.189878 containerd[1575]: time="2024-10-09T07:42:54.189465654Z" level=info msg="ImageCreate event name:\"sha256:dd650d127e51776919ec1622a4469a8b141b2dfee5a33fbc5cb9729372e0dcfa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:42:54.190618 kubelet[2139]: E1009 07:42:54.190516 2139 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 9 07:42:54.193343 containerd[1575]: time="2024-10-09T07:42:54.192950339Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:124040dbe6b5294352355f5d34c692ecbc940cdc57a8fd06d0f38f76b6138906\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:42:54.193617 containerd[1575]: time="2024-10-09T07:42:54.193577221Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.9\" with image id \"sha256:dd650d127e51776919ec1622a4469a8b141b2dfee5a33fbc5cb9729372e0dcfa\", repo tag \"registry.k8s.io/kube-proxy:v1.29.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:124040dbe6b5294352355f5d34c692ecbc940cdc57a8fd06d0f38f76b6138906\", size \"28600769\" in 3.168281665s" Oct 9 07:42:54.193617 containerd[1575]: time="2024-10-09T07:42:54.193613500Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.9\" returns image reference \"sha256:dd650d127e51776919ec1622a4469a8b141b2dfee5a33fbc5cb9729372e0dcfa\"" Oct 9 07:42:54.195048 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 9 07:42:54.195205 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 9 07:42:54.219604 containerd[1575]: time="2024-10-09T07:42:54.219476937Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Oct 9 07:42:54.804920 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1970034412.mount: Deactivated successfully. 
Oct 9 07:42:55.987137 containerd[1575]: time="2024-10-09T07:42:55.987095724Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:42:55.989511 containerd[1575]: time="2024-10-09T07:42:55.989468053Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" Oct 9 07:42:55.991552 containerd[1575]: time="2024-10-09T07:42:55.990950559Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:42:55.993914 containerd[1575]: time="2024-10-09T07:42:55.993889263Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:42:55.995260 containerd[1575]: time="2024-10-09T07:42:55.995219662Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.775700715s" Oct 9 07:42:55.995314 containerd[1575]: time="2024-10-09T07:42:55.995263365Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Oct 9 07:42:56.018655 containerd[1575]: time="2024-10-09T07:42:56.018600297Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Oct 9 07:42:56.544927 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3617062036.mount: Deactivated successfully. 
Oct 9 07:42:56.554811 containerd[1575]: time="2024-10-09T07:42:56.554686082Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:42:56.557239 containerd[1575]: time="2024-10-09T07:42:56.557170921Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298" Oct 9 07:42:56.558809 containerd[1575]: time="2024-10-09T07:42:56.558759808Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:42:56.565979 containerd[1575]: time="2024-10-09T07:42:56.565827099Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:42:56.568213 containerd[1575]: time="2024-10-09T07:42:56.567967143Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 549.296864ms" Oct 9 07:42:56.568213 containerd[1575]: time="2024-10-09T07:42:56.568047927Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Oct 9 07:42:56.617878 containerd[1575]: time="2024-10-09T07:42:56.617602331Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Oct 9 07:42:57.277246 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4224611325.mount: Deactivated successfully. Oct 9 07:43:01.322621 containerd[1575]: time="2024-10-09T07:43:01.322439264Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:43:01.326360 containerd[1575]: time="2024-10-09T07:43:01.326251326Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651633" Oct 9 07:43:01.328763 containerd[1575]: time="2024-10-09T07:43:01.328501317Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:43:01.338771 containerd[1575]: time="2024-10-09T07:43:01.338658246Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:43:01.342295 containerd[1575]: time="2024-10-09T07:43:01.342053237Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 4.724381866s" Oct 9 07:43:01.342295 containerd[1575]: time="2024-10-09T07:43:01.342125855Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Oct 9 07:43:04.262828 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. 
Oct 9 07:43:04.274285 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 07:43:04.753001 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 9 07:43:04.753168 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 9 07:43:04.753745 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 07:43:04.765188 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 07:43:04.791744 systemd[1]: Reloading requested from client PID 2327 ('systemctl') (unit session-11.scope)... Oct 9 07:43:04.791761 systemd[1]: Reloading... Oct 9 07:43:04.877590 zram_generator::config[2361]: No configuration found. Oct 9 07:43:05.304578 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 9 07:43:05.388263 systemd[1]: Reloading finished in 596 ms. Oct 9 07:43:05.448917 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 07:43:05.452770 (kubelet)[2430]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 9 07:43:05.466290 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 07:43:05.467027 systemd[1]: kubelet.service: Deactivated successfully. Oct 9 07:43:05.467494 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 07:43:05.484571 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 07:43:06.050861 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 07:43:06.073185 (kubelet)[2451]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 9 07:43:06.251198 kubelet[2451]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 9 07:43:06.251198 kubelet[2451]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 9 07:43:06.251198 kubelet[2451]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 9 07:43:06.255855 kubelet[2451]: I1009 07:43:06.254602 2451 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 9 07:43:07.189822 kubelet[2451]: I1009 07:43:07.189766 2451 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Oct 9 07:43:07.189822 kubelet[2451]: I1009 07:43:07.189813 2451 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 9 07:43:07.190192 kubelet[2451]: I1009 07:43:07.190162 2451 server.go:919] "Client rotation is on, will bootstrap in background" Oct 9 07:43:07.228421 kubelet[2451]: E1009 07:43:07.228360 2451 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.70:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.70:6443: connect: connection refused Oct 9 07:43:07.230651 kubelet[2451]: I1009 07:43:07.230406 2451 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 9 07:43:07.255705 kubelet[2451]: I1009 07:43:07.255665 2451 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 9 07:43:07.257503 kubelet[2451]: I1009 07:43:07.257117 2451 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 9 07:43:07.259693 kubelet[2451]: I1009 07:43:07.259490 2451 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Oct 9 07:43:07.261099 kubelet[2451]: I1009 07:43:07.260689 2451 topology_manager.go:138] "Creating topology manager with none policy" Oct 9 07:43:07.261099 kubelet[2451]: I1009 07:43:07.260736 2451 container_manager_linux.go:301] "Creating device plugin manager" Oct 9 07:43:07.261099 kubelet[2451]: I1009 07:43:07.260951 2451 state_mem.go:36] "Initialized new in-memory state store" Oct 9 07:43:07.261424 kubelet[2451]: I1009 07:43:07.261374 2451 kubelet.go:396] "Attempting to sync node with API server" Oct 9 07:43:07.261789 kubelet[2451]: I1009 
07:43:07.261764 2451 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 9 07:43:07.261976 kubelet[2451]: I1009 07:43:07.261954 2451 kubelet.go:312] "Adding apiserver pod source" Oct 9 07:43:07.262284 kubelet[2451]: I1009 07:43:07.262097 2451 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 9 07:43:07.266397 kubelet[2451]: W1009 07:43:07.266310 2451 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.24.4.70:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975-2-2-3-e7db599e29.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.70:6443: connect: connection refused Oct 9 07:43:07.267244 kubelet[2451]: E1009 07:43:07.266419 2451 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.70:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975-2-2-3-e7db599e29.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.70:6443: connect: connection refused Oct 9 07:43:07.267244 kubelet[2451]: I1009 07:43:07.266597 2451 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Oct 9 07:43:07.275785 kubelet[2451]: W1009 07:43:07.275391 2451 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.24.4.70:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.70:6443: connect: connection refused Oct 9 07:43:07.275785 kubelet[2451]: E1009 07:43:07.275490 2451 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.70:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.70:6443: connect: connection refused Oct 9 07:43:07.276308 kubelet[2451]: I1009 07:43:07.276248 2451 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 9 07:43:07.278765 kubelet[2451]: W1009 07:43:07.278727 2451 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Oct 9 07:43:07.280485 kubelet[2451]: I1009 07:43:07.280160 2451 server.go:1256] "Started kubelet" Oct 9 07:43:07.280879 kubelet[2451]: I1009 07:43:07.280799 2451 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Oct 9 07:43:07.284430 kubelet[2451]: I1009 07:43:07.283909 2451 server.go:461] "Adding debug handlers to kubelet server" Oct 9 07:43:07.288194 kubelet[2451]: I1009 07:43:07.288162 2451 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 9 07:43:07.288872 kubelet[2451]: I1009 07:43:07.288573 2451 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 9 07:43:07.290571 kubelet[2451]: E1009 07:43:07.290448 2451 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.70:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.70:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3975-2-2-3-e7db599e29.novalocal.17fcb904cd613914 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3975-2-2-3-e7db599e29.novalocal,UID:ci-3975-2-2-3-e7db599e29.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3975-2-2-3-e7db599e29.novalocal,},FirstTimestamp:2024-10-09 07:43:07.280136468 +0000 UTC m=+1.200403647,LastTimestamp:2024-10-09 07:43:07.280136468 +0000 UTC m=+1.200403647,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3975-2-2-3-e7db599e29.novalocal,}" Oct 9 07:43:07.292069 kubelet[2451]: I1009 07:43:07.291668 2451 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 9 07:43:07.293618 kubelet[2451]: I1009 07:43:07.293503 2451 volume_manager.go:291] "Starting Kubelet Volume Manager" Oct 9 07:43:07.295572 kubelet[2451]: I1009 07:43:07.294850 2451 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Oct 9 07:43:07.296868 kubelet[2451]: E1009 07:43:07.296851 2451 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.70:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975-2-2-3-e7db599e29.novalocal?timeout=10s\": dial tcp 172.24.4.70:6443: connect: connection refused" interval="200ms" Oct 9 07:43:07.301669 kubelet[2451]: I1009 07:43:07.301644 2451 factory.go:221] Registration of the containerd container factory successfully Oct 9 07:43:07.301801 kubelet[2451]: I1009 07:43:07.301792 2451 factory.go:221] Registration of the systemd container factory successfully Oct 9 07:43:07.301932 kubelet[2451]: I1009 07:43:07.301914 2451 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 9 07:43:07.307011 kubelet[2451]: I1009 07:43:07.306966 2451 reconciler_new.go:29] "Reconciler: start to sync state" Oct 9 07:43:07.326138 kubelet[2451]: E1009 07:43:07.326118 2451 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 9 07:43:07.326766 kubelet[2451]: W1009 07:43:07.326711 2451 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.24.4.70:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.70:6443: connect: connection refused Oct 9 07:43:07.326870 kubelet[2451]: E1009 07:43:07.326860 2451 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.70:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.70:6443: connect: connection refused Oct 9 07:43:07.336107 kubelet[2451]: I1009 07:43:07.335672 2451 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 9 07:43:07.338266 kubelet[2451]: I1009 07:43:07.338246 2451 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 9 07:43:07.339061 kubelet[2451]: I1009 07:43:07.339048 2451 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 9 07:43:07.340389 kubelet[2451]: I1009 07:43:07.340358 2451 kubelet.go:2329] "Starting kubelet main sync loop" Oct 9 07:43:07.340574 kubelet[2451]: E1009 07:43:07.340563 2451 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 9 07:43:07.343873 kubelet[2451]: W1009 07:43:07.343823 2451 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.24.4.70:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.70:6443: connect: connection refused Oct 9 07:43:07.344035 kubelet[2451]: E1009 07:43:07.344022 2451 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.70:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.70:6443: connect: connection refused Oct 9 07:43:07.347491 kubelet[2451]: I1009 07:43:07.347472 2451 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 9 07:43:07.347656 kubelet[2451]: I1009 07:43:07.347648 2451 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 9 07:43:07.347760 kubelet[2451]: I1009 07:43:07.347750 2451 state_mem.go:36] "Initialized new in-memory state store" Oct 9 07:43:07.351770 kubelet[2451]: I1009 07:43:07.351754 2451 policy_none.go:49] "None policy: Start" Oct 9 07:43:07.352683 kubelet[2451]: I1009 07:43:07.352643 2451 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 9 07:43:07.352683 kubelet[2451]: I1009 07:43:07.352668 2451 state_mem.go:35] "Initializing new in-memory state store" Oct 9 07:43:07.357587 kubelet[2451]: I1009 07:43:07.357559 2451 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 9 07:43:07.357830 kubelet[2451]: I1009 07:43:07.357797 2451 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 9 07:43:07.362200 kubelet[2451]: E1009 07:43:07.362150 2451 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3975-2-2-3-e7db599e29.novalocal\" not found" Oct 9 07:43:07.397125 kubelet[2451]: I1009 07:43:07.397082 2451 kubelet_node_status.go:73] "Attempting to register node" 
node="ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:43:07.397682 kubelet[2451]: E1009 07:43:07.397656 2451 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.70:6443/api/v1/nodes\": dial tcp 172.24.4.70:6443: connect: connection refused" node="ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:43:07.441475 kubelet[2451]: I1009 07:43:07.441143 2451 topology_manager.go:215] "Topology Admit Handler" podUID="1375f8298899fdb628c3b571d7ae5072" podNamespace="kube-system" podName="kube-apiserver-ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:43:07.450242 kubelet[2451]: I1009 07:43:07.450175 2451 topology_manager.go:215] "Topology Admit Handler" podUID="ec59bd404d93358c4dcde4fde27fb74d" podNamespace="kube-system" podName="kube-controller-manager-ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:43:07.453248 kubelet[2451]: I1009 07:43:07.453205 2451 topology_manager.go:215] "Topology Admit Handler" podUID="e018aa3382ee84ff49f34558f2cb2be2" podNamespace="kube-system" podName="kube-scheduler-ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:43:07.498471 kubelet[2451]: E1009 07:43:07.498429 2451 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.70:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975-2-2-3-e7db599e29.novalocal?timeout=10s\": dial tcp 172.24.4.70:6443: connect: connection refused" interval="400ms" Oct 9 07:43:07.600915 kubelet[2451]: I1009 07:43:07.600854 2451 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:43:07.601513 kubelet[2451]: E1009 07:43:07.601380 2451 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.70:6443/api/v1/nodes\": dial tcp 172.24.4.70:6443: connect: connection refused" node="ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:43:07.609087 kubelet[2451]: I1009 07:43:07.609027 2451 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1375f8298899fdb628c3b571d7ae5072-k8s-certs\") pod \"kube-apiserver-ci-3975-2-2-3-e7db599e29.novalocal\" (UID: \"1375f8298899fdb628c3b571d7ae5072\") " pod="kube-system/kube-apiserver-ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:43:07.609276 kubelet[2451]: I1009 07:43:07.609110 2451 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1375f8298899fdb628c3b571d7ae5072-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3975-2-2-3-e7db599e29.novalocal\" (UID: \"1375f8298899fdb628c3b571d7ae5072\") " pod="kube-system/kube-apiserver-ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:43:07.609276 kubelet[2451]: I1009 07:43:07.609170 2451 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ec59bd404d93358c4dcde4fde27fb74d-ca-certs\") pod \"kube-controller-manager-ci-3975-2-2-3-e7db599e29.novalocal\" (UID: \"ec59bd404d93358c4dcde4fde27fb74d\") " pod="kube-system/kube-controller-manager-ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:43:07.609276 kubelet[2451]: I1009 07:43:07.609229 2451 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ec59bd404d93358c4dcde4fde27fb74d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3975-2-2-3-e7db599e29.novalocal\" 
(UID: \"ec59bd404d93358c4dcde4fde27fb74d\") " pod="kube-system/kube-controller-manager-ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:43:07.609470 kubelet[2451]: I1009 07:43:07.609284 2451 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e018aa3382ee84ff49f34558f2cb2be2-kubeconfig\") pod \"kube-scheduler-ci-3975-2-2-3-e7db599e29.novalocal\" (UID: \"e018aa3382ee84ff49f34558f2cb2be2\") " pod="kube-system/kube-scheduler-ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:43:07.609470 kubelet[2451]: I1009 07:43:07.609336 2451 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1375f8298899fdb628c3b571d7ae5072-ca-certs\") pod \"kube-apiserver-ci-3975-2-2-3-e7db599e29.novalocal\" (UID: \"1375f8298899fdb628c3b571d7ae5072\") " pod="kube-system/kube-apiserver-ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:43:07.609470 kubelet[2451]: I1009 07:43:07.609389 2451 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ec59bd404d93358c4dcde4fde27fb74d-flexvolume-dir\") pod \"kube-controller-manager-ci-3975-2-2-3-e7db599e29.novalocal\" (UID: \"ec59bd404d93358c4dcde4fde27fb74d\") " pod="kube-system/kube-controller-manager-ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:43:07.609470 kubelet[2451]: I1009 07:43:07.609467 2451 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ec59bd404d93358c4dcde4fde27fb74d-k8s-certs\") pod \"kube-controller-manager-ci-3975-2-2-3-e7db599e29.novalocal\" (UID: \"ec59bd404d93358c4dcde4fde27fb74d\") " pod="kube-system/kube-controller-manager-ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:43:07.609745 kubelet[2451]: I1009 07:43:07.609560 2451 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ec59bd404d93358c4dcde4fde27fb74d-kubeconfig\") pod \"kube-controller-manager-ci-3975-2-2-3-e7db599e29.novalocal\" (UID: \"ec59bd404d93358c4dcde4fde27fb74d\") " pod="kube-system/kube-controller-manager-ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:43:07.763460 containerd[1575]: time="2024-10-09T07:43:07.763317756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3975-2-2-3-e7db599e29.novalocal,Uid:ec59bd404d93358c4dcde4fde27fb74d,Namespace:kube-system,Attempt:0,}" Oct 9 07:43:07.773475 containerd[1575]: time="2024-10-09T07:43:07.772913554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3975-2-2-3-e7db599e29.novalocal,Uid:1375f8298899fdb628c3b571d7ae5072,Namespace:kube-system,Attempt:0,}" Oct 9 07:43:07.773475 containerd[1575]: time="2024-10-09T07:43:07.772917381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3975-2-2-3-e7db599e29.novalocal,Uid:e018aa3382ee84ff49f34558f2cb2be2,Namespace:kube-system,Attempt:0,}" Oct 9 07:43:07.899956 kubelet[2451]: E1009 07:43:07.899886 2451 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.70:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975-2-2-3-e7db599e29.novalocal?timeout=10s\": dial tcp 172.24.4.70:6443: connect: connection refused" interval="800ms" Oct 9 07:43:08.005280 kubelet[2451]: I1009 07:43:08.005192 2451 
kubelet_node_status.go:73] "Attempting to register node" node="ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:43:08.005886 kubelet[2451]: E1009 07:43:08.005850 2451 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.70:6443/api/v1/nodes\": dial tcp 172.24.4.70:6443: connect: connection refused" node="ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:43:08.204611 kubelet[2451]: W1009 07:43:08.204362 2451 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.24.4.70:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.70:6443: connect: connection refused Oct 9 07:43:08.204611 kubelet[2451]: E1009 07:43:08.204435 2451 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.70:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.70:6443: connect: connection refused Oct 9 07:43:08.302232 kubelet[2451]: W1009 07:43:08.302116 2451 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.24.4.70:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.70:6443: connect: connection refused Oct 9 07:43:08.302232 kubelet[2451]: E1009 07:43:08.302194 2451 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.70:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.70:6443: connect: connection refused Oct 9 07:43:08.475800 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3188224256.mount: Deactivated successfully. 
Oct 9 07:43:08.486403 containerd[1575]: time="2024-10-09T07:43:08.486125406Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 07:43:08.490559 containerd[1575]: time="2024-10-09T07:43:08.490454066Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Oct 9 07:43:08.491799 containerd[1575]: time="2024-10-09T07:43:08.491698207Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 07:43:08.494300 containerd[1575]: time="2024-10-09T07:43:08.494216174Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 07:43:08.495596 containerd[1575]: time="2024-10-09T07:43:08.495408477Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 9 07:43:08.497366 containerd[1575]: time="2024-10-09T07:43:08.497310522Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 07:43:08.499022 containerd[1575]: time="2024-10-09T07:43:08.498869628Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 9 07:43:08.509088 containerd[1575]: time="2024-10-09T07:43:08.508941861Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 07:43:08.511927 containerd[1575]: time="2024-10-09T07:43:08.511450190Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 747.905886ms" Oct 9 07:43:08.516220 containerd[1575]: time="2024-10-09T07:43:08.516007573Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 742.403423ms" Oct 9 07:43:08.531997 containerd[1575]: time="2024-10-09T07:43:08.531734490Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 758.620048ms" Oct 9 07:43:08.615515 kubelet[2451]: W1009 07:43:08.615324 2451 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.24.4.70:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975-2-2-3-e7db599e29.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.70:6443: connect: connection refused 
Oct 9 07:43:08.615515 kubelet[2451]: E1009 07:43:08.615389 2451 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.70:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975-2-2-3-e7db599e29.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.70:6443: connect: connection refused Oct 9 07:43:08.701333 kubelet[2451]: E1009 07:43:08.701287 2451 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.70:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975-2-2-3-e7db599e29.novalocal?timeout=10s\": dial tcp 172.24.4.70:6443: connect: connection refused" interval="1.6s" Oct 9 07:43:08.741710 containerd[1575]: time="2024-10-09T07:43:08.741071107Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:43:08.741710 containerd[1575]: time="2024-10-09T07:43:08.741326390Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:43:08.741710 containerd[1575]: time="2024-10-09T07:43:08.741487103Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:43:08.741710 containerd[1575]: time="2024-10-09T07:43:08.741579818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:43:08.748471 containerd[1575]: time="2024-10-09T07:43:08.748191432Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:43:08.748639 containerd[1575]: time="2024-10-09T07:43:08.748471782Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:43:08.748639 containerd[1575]: time="2024-10-09T07:43:08.748510024Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:43:08.748639 containerd[1575]: time="2024-10-09T07:43:08.748558956Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:43:08.754681 containerd[1575]: time="2024-10-09T07:43:08.753772598Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:43:08.754681 containerd[1575]: time="2024-10-09T07:43:08.753840196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:43:08.754681 containerd[1575]: time="2024-10-09T07:43:08.753865263Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:43:08.754681 containerd[1575]: time="2024-10-09T07:43:08.753883598Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:43:08.812160 kubelet[2451]: I1009 07:43:08.812105 2451 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:43:08.812726 kubelet[2451]: E1009 07:43:08.812707 2451 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.70:6443/api/v1/nodes\": dial tcp 172.24.4.70:6443: connect: connection refused" node="ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:43:08.858203 containerd[1575]: time="2024-10-09T07:43:08.858159330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3975-2-2-3-e7db599e29.novalocal,Uid:1375f8298899fdb628c3b571d7ae5072,Namespace:kube-system,Attempt:0,} returns sandbox id \"427e3af45c166c0981398f217845df65a74ac01029b06d956665550c1edc2832\"" Oct 9 07:43:08.861227 containerd[1575]: time="2024-10-09T07:43:08.861199755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3975-2-2-3-e7db599e29.novalocal,Uid:ec59bd404d93358c4dcde4fde27fb74d,Namespace:kube-system,Attempt:0,} returns sandbox id \"449ec4110718f7d134b702bf4dde195d50295f061df615a901b8c39f0b436465\"" Oct 9 07:43:08.870409 containerd[1575]: time="2024-10-09T07:43:08.870375163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3975-2-2-3-e7db599e29.novalocal,Uid:e018aa3382ee84ff49f34558f2cb2be2,Namespace:kube-system,Attempt:0,} returns sandbox id \"a4de62a35e0923823670fe59f86af0c0a09608737a79d2730f2cfcfd9e747dff\"" Oct 9 07:43:08.872172 containerd[1575]: time="2024-10-09T07:43:08.871985215Z" level=info msg="CreateContainer within sandbox \"449ec4110718f7d134b702bf4dde195d50295f061df615a901b8c39f0b436465\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 9 07:43:08.872172 containerd[1575]: time="2024-10-09T07:43:08.872077180Z" level=info msg="CreateContainer within sandbox \"427e3af45c166c0981398f217845df65a74ac01029b06d956665550c1edc2832\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 9 07:43:08.874909 kubelet[2451]: W1009 07:43:08.874863 2451 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.24.4.70:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.70:6443: connect: connection refused Oct 9 07:43:08.875027 kubelet[2451]: E1009 07:43:08.875015 2451 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.70:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.70:6443: connect: connection refused Oct 9 07:43:08.875981 containerd[1575]: time="2024-10-09T07:43:08.875960878Z" level=info msg="CreateContainer within sandbox \"a4de62a35e0923823670fe59f86af0c0a09608737a79d2730f2cfcfd9e747dff\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 9 07:43:08.912570 containerd[1575]: time="2024-10-09T07:43:08.912495465Z" level=info msg="CreateContainer within sandbox \"449ec4110718f7d134b702bf4dde195d50295f061df615a901b8c39f0b436465\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a8bdf5c05eb61162a4d63f25a8d2ba1425877c3cdad31f27aa2c1085317e6f5a\"" Oct 9 07:43:08.913486 containerd[1575]: time="2024-10-09T07:43:08.913382951Z" level=info msg="StartContainer for \"a8bdf5c05eb61162a4d63f25a8d2ba1425877c3cdad31f27aa2c1085317e6f5a\"" Oct 9 07:43:08.914396 containerd[1575]: 
time="2024-10-09T07:43:08.914364667Z" level=info msg="CreateContainer within sandbox \"427e3af45c166c0981398f217845df65a74ac01029b06d956665550c1edc2832\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"493b5834e9e6ae9033d614d3c27f16473ff8e89954dc3ab7d6aefa33028e7f0a\"" Oct 9 07:43:08.915057 containerd[1575]: time="2024-10-09T07:43:08.914878487Z" level=info msg="StartContainer for \"493b5834e9e6ae9033d614d3c27f16473ff8e89954dc3ab7d6aefa33028e7f0a\"" Oct 9 07:43:08.922073 containerd[1575]: time="2024-10-09T07:43:08.922033908Z" level=info msg="CreateContainer within sandbox \"a4de62a35e0923823670fe59f86af0c0a09608737a79d2730f2cfcfd9e747dff\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"90e3659c0cdabdfad184ef5e67e8fe84b214611f89ac97cc933fee44196dd372\"" Oct 9 07:43:08.923691 containerd[1575]: time="2024-10-09T07:43:08.923390442Z" level=info msg="StartContainer for \"90e3659c0cdabdfad184ef5e67e8fe84b214611f89ac97cc933fee44196dd372\"" Oct 9 07:43:09.050320 containerd[1575]: time="2024-10-09T07:43:09.050280241Z" level=info msg="StartContainer for \"a8bdf5c05eb61162a4d63f25a8d2ba1425877c3cdad31f27aa2c1085317e6f5a\" returns successfully" Oct 9 07:43:09.051399 containerd[1575]: time="2024-10-09T07:43:09.050604995Z" level=info msg="StartContainer for \"493b5834e9e6ae9033d614d3c27f16473ff8e89954dc3ab7d6aefa33028e7f0a\" returns successfully" Oct 9 07:43:09.051399 containerd[1575]: time="2024-10-09T07:43:09.050609193Z" level=info msg="StartContainer for \"90e3659c0cdabdfad184ef5e67e8fe84b214611f89ac97cc933fee44196dd372\" returns successfully" Oct 9 07:43:09.292174 kubelet[2451]: E1009 07:43:09.292134 2451 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.70:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.70:6443: connect: connection refused Oct 9 07:43:10.416876 kubelet[2451]: I1009 07:43:10.416830 2451 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:43:11.325789 kubelet[2451]: E1009 07:43:11.325449 2451 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3975-2-2-3-e7db599e29.novalocal\" not found" node="ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:43:11.351875 kubelet[2451]: I1009 07:43:11.351020 2451 kubelet_node_status.go:76] "Successfully registered node" node="ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:43:12.278925 kubelet[2451]: I1009 07:43:12.278499 2451 apiserver.go:52] "Watching apiserver" Oct 9 07:43:12.308305 kubelet[2451]: I1009 07:43:12.308168 2451 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Oct 9 07:43:14.499509 systemd[1]: Reloading requested from client PID 2725 ('systemctl') (unit session-11.scope)... Oct 9 07:43:14.499581 systemd[1]: Reloading... Oct 9 07:43:14.599552 zram_generator::config[2766]: No configuration found. Oct 9 07:43:14.646831 kubelet[2451]: W1009 07:43:14.646087 2451 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Oct 9 07:43:14.741335 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Oct 9 07:43:14.830496 systemd[1]: Reloading finished in 330 ms. Oct 9 07:43:14.867383 kubelet[2451]: I1009 07:43:14.867200 2451 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 9 07:43:14.867398 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 07:43:14.879799 systemd[1]: kubelet.service: Deactivated successfully. Oct 9 07:43:14.880116 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 07:43:14.887993 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 07:43:15.187756 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 07:43:15.198857 (kubelet)[2836]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 9 07:43:15.458117 kubelet[2836]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 9 07:43:15.458117 kubelet[2836]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 9 07:43:15.458117 kubelet[2836]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 9 07:43:15.459785 kubelet[2836]: I1009 07:43:15.457681 2836 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 9 07:43:15.464643 kubelet[2836]: I1009 07:43:15.464598 2836 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Oct 9 07:43:15.464643 kubelet[2836]: I1009 07:43:15.464623 2836 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 9 07:43:15.465020 kubelet[2836]: I1009 07:43:15.464866 2836 server.go:919] "Client rotation is on, will bootstrap in background" Oct 9 07:43:15.468303 kubelet[2836]: I1009 07:43:15.467388 2836 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Oct 9 07:43:15.471643 kubelet[2836]: I1009 07:43:15.470412 2836 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 9 07:43:15.488754 kubelet[2836]: I1009 07:43:15.488717 2836 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 9 07:43:15.489601 kubelet[2836]: I1009 07:43:15.489231 2836 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 9 07:43:15.489601 kubelet[2836]: I1009 07:43:15.489567 2836 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Oct 9 07:43:15.489601 kubelet[2836]: I1009 07:43:15.489594 2836 topology_manager.go:138] "Creating topology manager with none policy" Oct 9 07:43:15.489601 kubelet[2836]: I1009 07:43:15.489606 2836 container_manager_linux.go:301] "Creating device plugin manager" Oct 9 07:43:15.490169 kubelet[2836]: I1009 07:43:15.489638 2836 state_mem.go:36] "Initialized new in-memory state store" Oct 9 07:43:15.490169 kubelet[2836]: I1009 07:43:15.489714 2836 kubelet.go:396] "Attempting to sync node with API server" Oct 9 07:43:15.490169 kubelet[2836]: I1009 07:43:15.489731 2836 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 9 07:43:15.490169 kubelet[2836]: I1009 07:43:15.489755 2836 kubelet.go:312] "Adding apiserver pod source" Oct 9 07:43:15.490169 kubelet[2836]: I1009 07:43:15.489771 2836 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 9 07:43:15.500548 kubelet[2836]: I1009 07:43:15.497683 2836 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Oct 9 07:43:15.500548 kubelet[2836]: I1009 07:43:15.497873 2836 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 9 07:43:15.500548 kubelet[2836]: I1009 07:43:15.498306 2836 server.go:1256] "Started kubelet" Oct 9 07:43:15.501808 kubelet[2836]: I1009 07:43:15.501790 2836 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 9 07:43:15.519286 kubelet[2836]: I1009 07:43:15.519264 2836 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Oct 9 07:43:15.520812 kubelet[2836]: I1009 07:43:15.520796 2836 server.go:461] "Adding debug handlers to kubelet server" Oct 9 07:43:15.523360 kubelet[2836]: I1009 07:43:15.523347 2836 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 
burstTokens=10 Oct 9 07:43:15.523637 kubelet[2836]: I1009 07:43:15.523625 2836 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 9 07:43:15.525031 kubelet[2836]: I1009 07:43:15.525018 2836 volume_manager.go:291] "Starting Kubelet Volume Manager" Oct 9 07:43:15.528644 kubelet[2836]: I1009 07:43:15.525352 2836 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Oct 9 07:43:15.528891 kubelet[2836]: I1009 07:43:15.528855 2836 reconciler_new.go:29] "Reconciler: start to sync state" Oct 9 07:43:15.529902 kubelet[2836]: I1009 07:43:15.529274 2836 factory.go:221] Registration of the systemd container factory successfully Oct 9 07:43:15.530057 kubelet[2836]: I1009 07:43:15.530029 2836 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 9 07:43:15.539272 kubelet[2836]: I1009 07:43:15.539246 2836 factory.go:221] Registration of the containerd container factory successfully Oct 9 07:43:15.547501 kubelet[2836]: I1009 07:43:15.547481 2836 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 9 07:43:15.548429 kubelet[2836]: I1009 07:43:15.548416 2836 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 9 07:43:15.548504 kubelet[2836]: I1009 07:43:15.548495 2836 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 9 07:43:15.548599 kubelet[2836]: I1009 07:43:15.548589 2836 kubelet.go:2329] "Starting kubelet main sync loop" Oct 9 07:43:15.548692 kubelet[2836]: E1009 07:43:15.548683 2836 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 9 07:43:15.603966 kubelet[2836]: I1009 07:43:15.603933 2836 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 9 07:43:15.603966 kubelet[2836]: I1009 07:43:15.603969 2836 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 9 07:43:15.603966 kubelet[2836]: I1009 07:43:15.603985 2836 state_mem.go:36] "Initialized new in-memory state store" Oct 9 07:43:15.604149 kubelet[2836]: I1009 07:43:15.604122 2836 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 9 07:43:15.604181 kubelet[2836]: I1009 07:43:15.604162 2836 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 9 07:43:15.604181 kubelet[2836]: I1009 07:43:15.604170 2836 policy_none.go:49] "None policy: Start" Oct 9 07:43:15.606923 kubelet[2836]: I1009 07:43:15.605636 2836 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 9 07:43:15.606923 kubelet[2836]: I1009 07:43:15.605663 2836 state_mem.go:35] "Initializing new in-memory state store" Oct 9 07:43:15.606923 kubelet[2836]: I1009 07:43:15.605962 2836 state_mem.go:75] "Updated machine memory state" Oct 9 07:43:15.607403 kubelet[2836]: I1009 07:43:15.607374 2836 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 9 07:43:15.609822 kubelet[2836]: I1009 07:43:15.608675 2836 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 9 07:43:15.631055 kubelet[2836]: I1009 07:43:15.630207 2836 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:43:15.642905 kubelet[2836]: I1009 07:43:15.642796 2836 kubelet_node_status.go:112] "Node was previously registered" 
node="ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:43:15.643195 kubelet[2836]: I1009 07:43:15.643063 2836 kubelet_node_status.go:76] "Successfully registered node" node="ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:43:15.653191 kubelet[2836]: I1009 07:43:15.652068 2836 topology_manager.go:215] "Topology Admit Handler" podUID="1375f8298899fdb628c3b571d7ae5072" podNamespace="kube-system" podName="kube-apiserver-ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:43:15.653191 kubelet[2836]: I1009 07:43:15.652166 2836 topology_manager.go:215] "Topology Admit Handler" podUID="ec59bd404d93358c4dcde4fde27fb74d" podNamespace="kube-system" podName="kube-controller-manager-ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:43:15.653191 kubelet[2836]: I1009 07:43:15.652228 2836 topology_manager.go:215] "Topology Admit Handler" podUID="e018aa3382ee84ff49f34558f2cb2be2" podNamespace="kube-system" podName="kube-scheduler-ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:43:15.670995 kubelet[2836]: W1009 07:43:15.670976 2836 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Oct 9 07:43:15.671180 kubelet[2836]: E1009 07:43:15.671167 2836 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3975-2-2-3-e7db599e29.novalocal\" already exists" pod="kube-system/kube-scheduler-ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:43:15.674781 kubelet[2836]: W1009 07:43:15.674766 2836 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Oct 9 07:43:15.677840 kubelet[2836]: W1009 07:43:15.677825 2836 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Oct 9 07:43:15.830601 kubelet[2836]: I1009 07:43:15.830295 2836 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ec59bd404d93358c4dcde4fde27fb74d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3975-2-2-3-e7db599e29.novalocal\" (UID: \"ec59bd404d93358c4dcde4fde27fb74d\") " pod="kube-system/kube-controller-manager-ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:43:15.830601 kubelet[2836]: I1009 07:43:15.830338 2836 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1375f8298899fdb628c3b571d7ae5072-ca-certs\") pod \"kube-apiserver-ci-3975-2-2-3-e7db599e29.novalocal\" (UID: \"1375f8298899fdb628c3b571d7ae5072\") " pod="kube-system/kube-apiserver-ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:43:15.830601 kubelet[2836]: I1009 07:43:15.830365 2836 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1375f8298899fdb628c3b571d7ae5072-k8s-certs\") pod \"kube-apiserver-ci-3975-2-2-3-e7db599e29.novalocal\" (UID: \"1375f8298899fdb628c3b571d7ae5072\") " pod="kube-system/kube-apiserver-ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:43:15.830601 kubelet[2836]: I1009 07:43:15.830392 2836 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1375f8298899fdb628c3b571d7ae5072-usr-share-ca-certificates\") pod 
\"kube-apiserver-ci-3975-2-2-3-e7db599e29.novalocal\" (UID: \"1375f8298899fdb628c3b571d7ae5072\") " pod="kube-system/kube-apiserver-ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:43:15.830792 kubelet[2836]: I1009 07:43:15.830418 2836 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ec59bd404d93358c4dcde4fde27fb74d-ca-certs\") pod \"kube-controller-manager-ci-3975-2-2-3-e7db599e29.novalocal\" (UID: \"ec59bd404d93358c4dcde4fde27fb74d\") " pod="kube-system/kube-controller-manager-ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:43:15.830792 kubelet[2836]: I1009 07:43:15.830441 2836 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ec59bd404d93358c4dcde4fde27fb74d-kubeconfig\") pod \"kube-controller-manager-ci-3975-2-2-3-e7db599e29.novalocal\" (UID: \"ec59bd404d93358c4dcde4fde27fb74d\") " pod="kube-system/kube-controller-manager-ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:43:15.830792 kubelet[2836]: I1009 07:43:15.830464 2836 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ec59bd404d93358c4dcde4fde27fb74d-flexvolume-dir\") pod \"kube-controller-manager-ci-3975-2-2-3-e7db599e29.novalocal\" (UID: \"ec59bd404d93358c4dcde4fde27fb74d\") " pod="kube-system/kube-controller-manager-ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:43:15.830792 kubelet[2836]: I1009 07:43:15.830487 2836 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ec59bd404d93358c4dcde4fde27fb74d-k8s-certs\") pod \"kube-controller-manager-ci-3975-2-2-3-e7db599e29.novalocal\" (UID: \"ec59bd404d93358c4dcde4fde27fb74d\") " pod="kube-system/kube-controller-manager-ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:43:15.830889 kubelet[2836]: I1009 07:43:15.830511 2836 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e018aa3382ee84ff49f34558f2cb2be2-kubeconfig\") pod \"kube-scheduler-ci-3975-2-2-3-e7db599e29.novalocal\" (UID: \"e018aa3382ee84ff49f34558f2cb2be2\") " pod="kube-system/kube-scheduler-ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:43:16.492696 kubelet[2836]: I1009 07:43:16.492620 2836 apiserver.go:52] "Watching apiserver" Oct 9 07:43:16.529360 kubelet[2836]: I1009 07:43:16.529165 2836 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Oct 9 07:43:16.590094 kubelet[2836]: W1009 07:43:16.590051 2836 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Oct 9 07:43:16.590390 kubelet[2836]: E1009 07:43:16.590355 2836 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3975-2-2-3-e7db599e29.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:43:16.633817 kubelet[2836]: I1009 07:43:16.633779 2836 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3975-2-2-3-e7db599e29.novalocal" podStartSLOduration=1.633734282 podStartE2EDuration="1.633734282s" podCreationTimestamp="2024-10-09 07:43:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2024-10-09 07:43:16.633255539 +0000 UTC m=+1.429348191" watchObservedRunningTime="2024-10-09 07:43:16.633734282 +0000 UTC m=+1.429826934" Oct 9 07:43:16.633990 kubelet[2836]: I1009 07:43:16.633893 2836 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3975-2-2-3-e7db599e29.novalocal" podStartSLOduration=2.633871751 podStartE2EDuration="2.633871751s" podCreationTimestamp="2024-10-09 07:43:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 07:43:16.619549353 +0000 UTC m=+1.415641995" watchObservedRunningTime="2024-10-09 07:43:16.633871751 +0000 UTC m=+1.429964394" Oct 9 07:43:16.660127 kubelet[2836]: I1009 07:43:16.659991 2836 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3975-2-2-3-e7db599e29.novalocal" podStartSLOduration=1.659952109 podStartE2EDuration="1.659952109s" podCreationTimestamp="2024-10-09 07:43:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 07:43:16.64835348 +0000 UTC m=+1.444446132" watchObservedRunningTime="2024-10-09 07:43:16.659952109 +0000 UTC m=+1.456044751" Oct 9 07:43:21.049822 sudo[1862]: pam_unix(sudo:session): session closed for user root Oct 9 07:43:21.337760 sshd[1855]: pam_unix(sshd:session): session closed for user core Oct 9 07:43:21.343229 systemd[1]: sshd@8-172.24.4.70:22-172.24.4.1:39278.service: Deactivated successfully. Oct 9 07:43:21.347991 systemd-logind[1543]: Session 11 logged out. Waiting for processes to exit. Oct 9 07:43:21.349280 systemd[1]: session-11.scope: Deactivated successfully. Oct 9 07:43:21.350730 systemd-logind[1543]: Removed session 11. Oct 9 07:43:28.754036 kubelet[2836]: I1009 07:43:28.753778 2836 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 9 07:43:28.756869 containerd[1575]: time="2024-10-09T07:43:28.756664077Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Oct 9 07:43:28.757713 kubelet[2836]: I1009 07:43:28.757673 2836 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 9 07:43:29.307002 kubelet[2836]: I1009 07:43:29.306600 2836 topology_manager.go:215] "Topology Admit Handler" podUID="fda8142c-6fa9-4cc1-82a7-785a16285b50" podNamespace="kube-system" podName="kube-proxy-6pjgm" Oct 9 07:43:29.326543 kubelet[2836]: I1009 07:43:29.325398 2836 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fda8142c-6fa9-4cc1-82a7-785a16285b50-kube-proxy\") pod \"kube-proxy-6pjgm\" (UID: \"fda8142c-6fa9-4cc1-82a7-785a16285b50\") " pod="kube-system/kube-proxy-6pjgm" Oct 9 07:43:29.328460 kubelet[2836]: I1009 07:43:29.326776 2836 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fda8142c-6fa9-4cc1-82a7-785a16285b50-lib-modules\") pod \"kube-proxy-6pjgm\" (UID: \"fda8142c-6fa9-4cc1-82a7-785a16285b50\") " pod="kube-system/kube-proxy-6pjgm" Oct 9 07:43:29.328460 kubelet[2836]: I1009 07:43:29.326814 2836 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fda8142c-6fa9-4cc1-82a7-785a16285b50-xtables-lock\") pod \"kube-proxy-6pjgm\" (UID: \"fda8142c-6fa9-4cc1-82a7-785a16285b50\") " pod="kube-system/kube-proxy-6pjgm" Oct 9 07:43:29.328460 kubelet[2836]: I1009 07:43:29.326878 2836 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kxdb\" (UniqueName: \"kubernetes.io/projected/fda8142c-6fa9-4cc1-82a7-785a16285b50-kube-api-access-7kxdb\") pod \"kube-proxy-6pjgm\" (UID: \"fda8142c-6fa9-4cc1-82a7-785a16285b50\") " pod="kube-system/kube-proxy-6pjgm" Oct 9 07:43:29.626866 containerd[1575]: time="2024-10-09T07:43:29.626733236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6pjgm,Uid:fda8142c-6fa9-4cc1-82a7-785a16285b50,Namespace:kube-system,Attempt:0,}" Oct 9 07:43:29.852858 containerd[1575]: time="2024-10-09T07:43:29.852654968Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:43:29.852858 containerd[1575]: time="2024-10-09T07:43:29.852719951Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:43:29.852858 containerd[1575]: time="2024-10-09T07:43:29.852812506Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:43:29.853465 containerd[1575]: time="2024-10-09T07:43:29.853351871Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:43:29.902501 systemd[1]: run-containerd-runc-k8s.io-8312012379088f7820f56081e60f567223cee2b1ff5b2ed52c4179ab9aee9db2-runc.rxBhLd.mount: Deactivated successfully. 
Oct 9 07:43:29.964203 kubelet[2836]: I1009 07:43:29.962791 2836 topology_manager.go:215] "Topology Admit Handler" podUID="b10291a4-0d76-4f01-bb0c-f112f02296c3" podNamespace="tigera-operator" podName="tigera-operator-5d56685c77-qwsl4" Oct 9 07:43:29.971245 containerd[1575]: time="2024-10-09T07:43:29.971200459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6pjgm,Uid:fda8142c-6fa9-4cc1-82a7-785a16285b50,Namespace:kube-system,Attempt:0,} returns sandbox id \"8312012379088f7820f56081e60f567223cee2b1ff5b2ed52c4179ab9aee9db2\"" Oct 9 07:43:29.980712 containerd[1575]: time="2024-10-09T07:43:29.980500608Z" level=info msg="CreateContainer within sandbox \"8312012379088f7820f56081e60f567223cee2b1ff5b2ed52c4179ab9aee9db2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 9 07:43:30.024457 containerd[1575]: time="2024-10-09T07:43:30.024375932Z" level=info msg="CreateContainer within sandbox \"8312012379088f7820f56081e60f567223cee2b1ff5b2ed52c4179ab9aee9db2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"54387f0e915603b4f3977c8b45fc8ab42761a828d5a41cf91b5fb1033c005671\"" Oct 9 07:43:30.026751 containerd[1575]: time="2024-10-09T07:43:30.026717251Z" level=info msg="StartContainer for \"54387f0e915603b4f3977c8b45fc8ab42761a828d5a41cf91b5fb1033c005671\"" Oct 9 07:43:30.031367 kubelet[2836]: I1009 07:43:30.031253 2836 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b10291a4-0d76-4f01-bb0c-f112f02296c3-var-lib-calico\") pod \"tigera-operator-5d56685c77-qwsl4\" (UID: \"b10291a4-0d76-4f01-bb0c-f112f02296c3\") " pod="tigera-operator/tigera-operator-5d56685c77-qwsl4" Oct 9 07:43:30.031367 kubelet[2836]: I1009 07:43:30.031309 2836 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgjnm\" (UniqueName: \"kubernetes.io/projected/b10291a4-0d76-4f01-bb0c-f112f02296c3-kube-api-access-zgjnm\") pod \"tigera-operator-5d56685c77-qwsl4\" (UID: \"b10291a4-0d76-4f01-bb0c-f112f02296c3\") " pod="tigera-operator/tigera-operator-5d56685c77-qwsl4" Oct 9 07:43:30.103633 containerd[1575]: time="2024-10-09T07:43:30.103591257Z" level=info msg="StartContainer for \"54387f0e915603b4f3977c8b45fc8ab42761a828d5a41cf91b5fb1033c005671\" returns successfully" Oct 9 07:43:30.279915 containerd[1575]: time="2024-10-09T07:43:30.279249166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-qwsl4,Uid:b10291a4-0d76-4f01-bb0c-f112f02296c3,Namespace:tigera-operator,Attempt:0,}" Oct 9 07:43:30.356279 containerd[1575]: time="2024-10-09T07:43:30.355889241Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:43:30.356279 containerd[1575]: time="2024-10-09T07:43:30.356038312Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:43:30.356279 containerd[1575]: time="2024-10-09T07:43:30.356094959Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:43:30.356279 containerd[1575]: time="2024-10-09T07:43:30.356134163Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:43:30.447795 containerd[1575]: time="2024-10-09T07:43:30.447757358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-qwsl4,Uid:b10291a4-0d76-4f01-bb0c-f112f02296c3,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"4f8d7703746b544e13333620eaff97b75336ae4e9a06a788ba46f7350948b3e5\"" Oct 9 07:43:30.460563 containerd[1575]: time="2024-10-09T07:43:30.460189755Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\"" Oct 9 07:43:31.905008 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2871215803.mount: Deactivated successfully. Oct 9 07:43:32.675370 containerd[1575]: time="2024-10-09T07:43:32.675300504Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:43:32.676831 containerd[1575]: time="2024-10-09T07:43:32.676646588Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.3: active requests=0, bytes read=22136505" Oct 9 07:43:32.678072 containerd[1575]: time="2024-10-09T07:43:32.677813807Z" level=info msg="ImageCreate event name:\"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:43:32.680296 containerd[1575]: time="2024-10-09T07:43:32.680272417Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:43:32.681079 containerd[1575]: time="2024-10-09T07:43:32.681050062Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.3\" with image id \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\", repo tag \"quay.io/tigera/operator:v1.34.3\", repo digest \"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\", size \"22130728\" in 2.220816865s" Oct 9 07:43:32.681204 containerd[1575]: time="2024-10-09T07:43:32.681186088Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\" returns image reference \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\"" Oct 9 07:43:32.687487 containerd[1575]: time="2024-10-09T07:43:32.687231443Z" level=info msg="CreateContainer within sandbox \"4f8d7703746b544e13333620eaff97b75336ae4e9a06a788ba46f7350948b3e5\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 9 07:43:32.709446 containerd[1575]: time="2024-10-09T07:43:32.709403820Z" level=info msg="CreateContainer within sandbox \"4f8d7703746b544e13333620eaff97b75336ae4e9a06a788ba46f7350948b3e5\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"917b8361757296ea3a94ca2a6bedf5805620396c8e4a3d98f7d6fc4624c2800f\"" Oct 9 07:43:32.710640 containerd[1575]: time="2024-10-09T07:43:32.710328913Z" level=info msg="StartContainer for \"917b8361757296ea3a94ca2a6bedf5805620396c8e4a3d98f7d6fc4624c2800f\"" Oct 9 07:43:32.775178 containerd[1575]: time="2024-10-09T07:43:32.775133570Z" level=info msg="StartContainer for \"917b8361757296ea3a94ca2a6bedf5805620396c8e4a3d98f7d6fc4624c2800f\" returns successfully" Oct 9 07:43:33.676475 kubelet[2836]: I1009 07:43:33.676360 2836 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-6pjgm" podStartSLOduration=4.676275506 podStartE2EDuration="4.676275506s" podCreationTimestamp="2024-10-09 07:43:29 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 07:43:30.629479721 +0000 UTC m=+15.425572403" watchObservedRunningTime="2024-10-09 07:43:33.676275506 +0000 UTC m=+18.472368198" Oct 9 07:43:34.587690 systemd-journald[1128]: Under memory pressure, flushing caches. Oct 9 07:43:34.585337 systemd-resolved[1467]: Under memory pressure, flushing caches. Oct 9 07:43:34.585435 systemd-resolved[1467]: Flushed all caches. Oct 9 07:43:36.541882 kubelet[2836]: I1009 07:43:36.541249 2836 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5d56685c77-qwsl4" podStartSLOduration=5.305549529 podStartE2EDuration="7.541001559s" podCreationTimestamp="2024-10-09 07:43:29 +0000 UTC" firstStartedPulling="2024-10-09 07:43:30.449942705 +0000 UTC m=+15.246035347" lastFinishedPulling="2024-10-09 07:43:32.685394724 +0000 UTC m=+17.481487377" observedRunningTime="2024-10-09 07:43:33.677163297 +0000 UTC m=+18.473255999" watchObservedRunningTime="2024-10-09 07:43:36.541001559 +0000 UTC m=+21.337094221" Oct 9 07:43:36.545738 kubelet[2836]: I1009 07:43:36.545009 2836 topology_manager.go:215] "Topology Admit Handler" podUID="7a45ffa8-2e8a-4758-a399-abee124379ea" podNamespace="calico-system" podName="calico-typha-77f54c7d88-wkzx8" Oct 9 07:43:36.628861 kubelet[2836]: I1009 07:43:36.628232 2836 topology_manager.go:215] "Topology Admit Handler" podUID="f42b4747-40c9-48fd-b832-02514a989896" podNamespace="calico-system" podName="calico-node-hkh7c" Oct 9 07:43:36.636581 systemd-journald[1128]: Under memory pressure, flushing caches. Oct 9 07:43:36.636260 systemd-resolved[1467]: Under memory pressure, flushing caches. Oct 9 07:43:36.636292 systemd-resolved[1467]: Flushed all caches. 
Oct 9 07:43:36.649565 kubelet[2836]: I1009 07:43:36.649056 2836 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f42b4747-40c9-48fd-b832-02514a989896-cni-bin-dir\") pod \"calico-node-hkh7c\" (UID: \"f42b4747-40c9-48fd-b832-02514a989896\") " pod="calico-system/calico-node-hkh7c" Oct 9 07:43:36.649565 kubelet[2836]: I1009 07:43:36.649147 2836 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6kg7\" (UniqueName: \"kubernetes.io/projected/f42b4747-40c9-48fd-b832-02514a989896-kube-api-access-g6kg7\") pod \"calico-node-hkh7c\" (UID: \"f42b4747-40c9-48fd-b832-02514a989896\") " pod="calico-system/calico-node-hkh7c" Oct 9 07:43:36.649565 kubelet[2836]: I1009 07:43:36.649178 2836 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f42b4747-40c9-48fd-b832-02514a989896-tigera-ca-bundle\") pod \"calico-node-hkh7c\" (UID: \"f42b4747-40c9-48fd-b832-02514a989896\") " pod="calico-system/calico-node-hkh7c" Oct 9 07:43:36.649565 kubelet[2836]: I1009 07:43:36.649202 2836 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f42b4747-40c9-48fd-b832-02514a989896-cni-net-dir\") pod \"calico-node-hkh7c\" (UID: \"f42b4747-40c9-48fd-b832-02514a989896\") " pod="calico-system/calico-node-hkh7c" Oct 9 07:43:36.649565 kubelet[2836]: I1009 07:43:36.649229 2836 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f42b4747-40c9-48fd-b832-02514a989896-cni-log-dir\") pod \"calico-node-hkh7c\" (UID: \"f42b4747-40c9-48fd-b832-02514a989896\") " pod="calico-system/calico-node-hkh7c" Oct 9 07:43:36.649793 kubelet[2836]: I1009 07:43:36.649254 2836 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f42b4747-40c9-48fd-b832-02514a989896-flexvol-driver-host\") pod \"calico-node-hkh7c\" (UID: \"f42b4747-40c9-48fd-b832-02514a989896\") " pod="calico-system/calico-node-hkh7c" Oct 9 07:43:36.649793 kubelet[2836]: I1009 07:43:36.649281 2836 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a45ffa8-2e8a-4758-a399-abee124379ea-tigera-ca-bundle\") pod \"calico-typha-77f54c7d88-wkzx8\" (UID: \"7a45ffa8-2e8a-4758-a399-abee124379ea\") " pod="calico-system/calico-typha-77f54c7d88-wkzx8" Oct 9 07:43:36.649793 kubelet[2836]: I1009 07:43:36.649304 2836 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f42b4747-40c9-48fd-b832-02514a989896-var-run-calico\") pod \"calico-node-hkh7c\" (UID: \"f42b4747-40c9-48fd-b832-02514a989896\") " pod="calico-system/calico-node-hkh7c" Oct 9 07:43:36.649793 kubelet[2836]: I1009 07:43:36.649332 2836 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/7a45ffa8-2e8a-4758-a399-abee124379ea-typha-certs\") pod \"calico-typha-77f54c7d88-wkzx8\" (UID: \"7a45ffa8-2e8a-4758-a399-abee124379ea\") " pod="calico-system/calico-typha-77f54c7d88-wkzx8" Oct 9 07:43:36.649793 
kubelet[2836]: I1009 07:43:36.649356 2836 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f42b4747-40c9-48fd-b832-02514a989896-lib-modules\") pod \"calico-node-hkh7c\" (UID: \"f42b4747-40c9-48fd-b832-02514a989896\") " pod="calico-system/calico-node-hkh7c" Oct 9 07:43:36.649924 kubelet[2836]: I1009 07:43:36.649383 2836 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f42b4747-40c9-48fd-b832-02514a989896-xtables-lock\") pod \"calico-node-hkh7c\" (UID: \"f42b4747-40c9-48fd-b832-02514a989896\") " pod="calico-system/calico-node-hkh7c" Oct 9 07:43:36.649924 kubelet[2836]: I1009 07:43:36.649407 2836 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f42b4747-40c9-48fd-b832-02514a989896-var-lib-calico\") pod \"calico-node-hkh7c\" (UID: \"f42b4747-40c9-48fd-b832-02514a989896\") " pod="calico-system/calico-node-hkh7c" Oct 9 07:43:36.649924 kubelet[2836]: I1009 07:43:36.649432 2836 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f42b4747-40c9-48fd-b832-02514a989896-policysync\") pod \"calico-node-hkh7c\" (UID: \"f42b4747-40c9-48fd-b832-02514a989896\") " pod="calico-system/calico-node-hkh7c" Oct 9 07:43:36.649924 kubelet[2836]: I1009 07:43:36.649459 2836 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8n8v9\" (UniqueName: \"kubernetes.io/projected/7a45ffa8-2e8a-4758-a399-abee124379ea-kube-api-access-8n8v9\") pod \"calico-typha-77f54c7d88-wkzx8\" (UID: \"7a45ffa8-2e8a-4758-a399-abee124379ea\") " pod="calico-system/calico-typha-77f54c7d88-wkzx8" Oct 9 07:43:36.649924 kubelet[2836]: I1009 07:43:36.649483 2836 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f42b4747-40c9-48fd-b832-02514a989896-node-certs\") pod \"calico-node-hkh7c\" (UID: \"f42b4747-40c9-48fd-b832-02514a989896\") " pod="calico-system/calico-node-hkh7c" Oct 9 07:43:36.762598 kubelet[2836]: I1009 07:43:36.761091 2836 topology_manager.go:215] "Topology Admit Handler" podUID="27440826-f1e0-45d4-b3d8-3225a63f893c" podNamespace="calico-system" podName="csi-node-driver-qq7rv" Oct 9 07:43:36.762598 kubelet[2836]: E1009 07:43:36.761323 2836 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qq7rv" podUID="27440826-f1e0-45d4-b3d8-3225a63f893c" Oct 9 07:43:36.824261 kubelet[2836]: E1009 07:43:36.822803 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:36.824261 kubelet[2836]: W1009 07:43:36.822881 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:36.824261 kubelet[2836]: E1009 07:43:36.822921 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:43:36.830334 kubelet[2836]: E1009 07:43:36.827864 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:36.830334 kubelet[2836]: W1009 07:43:36.827885 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:36.830334 kubelet[2836]: E1009 07:43:36.827915 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:36.830334 kubelet[2836]: E1009 07:43:36.830138 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:36.830334 kubelet[2836]: W1009 07:43:36.830154 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:36.830334 kubelet[2836]: E1009 07:43:36.830196 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:36.832150 kubelet[2836]: E1009 07:43:36.830425 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:36.832150 kubelet[2836]: W1009 07:43:36.830436 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:36.832150 kubelet[2836]: E1009 07:43:36.830456 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:36.832504 kubelet[2836]: E1009 07:43:36.832482 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:36.832504 kubelet[2836]: W1009 07:43:36.832499 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:36.832639 kubelet[2836]: E1009 07:43:36.832516 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:36.849063 kubelet[2836]: E1009 07:43:36.849002 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:36.849063 kubelet[2836]: W1009 07:43:36.849028 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:36.849063 kubelet[2836]: E1009 07:43:36.849071 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:43:36.850449 kubelet[2836]: E1009 07:43:36.849283 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:36.850449 kubelet[2836]: W1009 07:43:36.849292 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:36.850449 kubelet[2836]: E1009 07:43:36.849305 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:36.850625 kubelet[2836]: E1009 07:43:36.850496 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:36.850625 kubelet[2836]: W1009 07:43:36.850510 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:36.850625 kubelet[2836]: E1009 07:43:36.850542 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:36.851636 kubelet[2836]: E1009 07:43:36.850789 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:36.851636 kubelet[2836]: W1009 07:43:36.850799 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:36.851636 kubelet[2836]: E1009 07:43:36.850815 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:36.851734 kubelet[2836]: E1009 07:43:36.851712 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:36.851734 kubelet[2836]: W1009 07:43:36.851723 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:36.851792 kubelet[2836]: E1009 07:43:36.851737 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:36.852905 kubelet[2836]: E1009 07:43:36.852883 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:36.852905 kubelet[2836]: W1009 07:43:36.852901 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:36.852990 kubelet[2836]: E1009 07:43:36.852917 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:43:36.853722 kubelet[2836]: E1009 07:43:36.853701 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:36.853722 kubelet[2836]: W1009 07:43:36.853717 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:36.853810 kubelet[2836]: E1009 07:43:36.853732 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:36.854687 kubelet[2836]: E1009 07:43:36.854666 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:36.854772 kubelet[2836]: W1009 07:43:36.854702 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:36.854772 kubelet[2836]: E1009 07:43:36.854716 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:36.855049 kubelet[2836]: E1009 07:43:36.854913 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:36.855049 kubelet[2836]: W1009 07:43:36.854922 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:36.855049 kubelet[2836]: E1009 07:43:36.854935 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:36.855742 kubelet[2836]: E1009 07:43:36.855669 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:36.855742 kubelet[2836]: W1009 07:43:36.855683 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:36.855742 kubelet[2836]: E1009 07:43:36.855697 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:36.856701 kubelet[2836]: E1009 07:43:36.856682 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:36.856701 kubelet[2836]: W1009 07:43:36.856697 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:36.856793 kubelet[2836]: E1009 07:43:36.856710 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:43:36.856919 kubelet[2836]: E1009 07:43:36.856899 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:36.856919 kubelet[2836]: W1009 07:43:36.856911 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:36.856984 kubelet[2836]: E1009 07:43:36.856923 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:36.858823 containerd[1575]: time="2024-10-09T07:43:36.858777756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-77f54c7d88-wkzx8,Uid:7a45ffa8-2e8a-4758-a399-abee124379ea,Namespace:calico-system,Attempt:0,}" Oct 9 07:43:36.859183 kubelet[2836]: E1009 07:43:36.859041 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:36.859183 kubelet[2836]: W1009 07:43:36.859055 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:36.859183 kubelet[2836]: E1009 07:43:36.859074 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:36.861610 kubelet[2836]: E1009 07:43:36.860330 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:36.861610 kubelet[2836]: W1009 07:43:36.861606 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:36.862036 kubelet[2836]: E1009 07:43:36.861632 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:36.862036 kubelet[2836]: E1009 07:43:36.861972 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:36.862036 kubelet[2836]: W1009 07:43:36.861982 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:36.862036 kubelet[2836]: E1009 07:43:36.862019 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:43:36.863601 kubelet[2836]: E1009 07:43:36.862226 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:36.863601 kubelet[2836]: W1009 07:43:36.862241 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:36.863601 kubelet[2836]: E1009 07:43:36.862254 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:36.863601 kubelet[2836]: E1009 07:43:36.863434 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:36.863601 kubelet[2836]: W1009 07:43:36.863444 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:36.863601 kubelet[2836]: E1009 07:43:36.863458 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:36.863820 kubelet[2836]: E1009 07:43:36.863713 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:36.863820 kubelet[2836]: W1009 07:43:36.863723 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:36.863820 kubelet[2836]: E1009 07:43:36.863737 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:36.864393 kubelet[2836]: E1009 07:43:36.864317 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:36.864393 kubelet[2836]: W1009 07:43:36.864332 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:36.864393 kubelet[2836]: E1009 07:43:36.864346 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:36.865987 kubelet[2836]: E1009 07:43:36.865803 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:36.865987 kubelet[2836]: W1009 07:43:36.865821 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:36.865987 kubelet[2836]: E1009 07:43:36.865866 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:43:36.869620 kubelet[2836]: E1009 07:43:36.869012 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:36.869620 kubelet[2836]: W1009 07:43:36.869033 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:36.869620 kubelet[2836]: E1009 07:43:36.869080 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:36.869620 kubelet[2836]: I1009 07:43:36.869133 2836 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/27440826-f1e0-45d4-b3d8-3225a63f893c-varrun\") pod \"csi-node-driver-qq7rv\" (UID: \"27440826-f1e0-45d4-b3d8-3225a63f893c\") " pod="calico-system/csi-node-driver-qq7rv" Oct 9 07:43:36.869620 kubelet[2836]: E1009 07:43:36.869551 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:36.869620 kubelet[2836]: W1009 07:43:36.869564 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:36.869620 kubelet[2836]: E1009 07:43:36.869600 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:36.869620 kubelet[2836]: I1009 07:43:36.869625 2836 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/27440826-f1e0-45d4-b3d8-3225a63f893c-kubelet-dir\") pod \"csi-node-driver-qq7rv\" (UID: \"27440826-f1e0-45d4-b3d8-3225a63f893c\") " pod="calico-system/csi-node-driver-qq7rv" Oct 9 07:43:36.872008 kubelet[2836]: E1009 07:43:36.871696 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:36.872008 kubelet[2836]: W1009 07:43:36.871715 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:36.872008 kubelet[2836]: E1009 07:43:36.871781 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:43:36.872008 kubelet[2836]: I1009 07:43:36.871946 2836 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/27440826-f1e0-45d4-b3d8-3225a63f893c-socket-dir\") pod \"csi-node-driver-qq7rv\" (UID: \"27440826-f1e0-45d4-b3d8-3225a63f893c\") " pod="calico-system/csi-node-driver-qq7rv" Oct 9 07:43:36.873566 kubelet[2836]: E1009 07:43:36.872755 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:36.873566 kubelet[2836]: W1009 07:43:36.872772 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:36.873566 kubelet[2836]: E1009 07:43:36.872793 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:36.874757 kubelet[2836]: E1009 07:43:36.874733 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:36.874757 kubelet[2836]: W1009 07:43:36.874753 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:36.876937 kubelet[2836]: E1009 07:43:36.875267 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:36.876937 kubelet[2836]: W1009 07:43:36.875281 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:36.876937 kubelet[2836]: E1009 07:43:36.875747 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:36.876937 kubelet[2836]: E1009 07:43:36.875799 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:36.876937 kubelet[2836]: E1009 07:43:36.876174 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:36.876937 kubelet[2836]: W1009 07:43:36.876186 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:36.876937 kubelet[2836]: E1009 07:43:36.876288 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:43:36.876937 kubelet[2836]: I1009 07:43:36.876323 2836 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rlz6\" (UniqueName: \"kubernetes.io/projected/27440826-f1e0-45d4-b3d8-3225a63f893c-kube-api-access-5rlz6\") pod \"csi-node-driver-qq7rv\" (UID: \"27440826-f1e0-45d4-b3d8-3225a63f893c\") " pod="calico-system/csi-node-driver-qq7rv" Oct 9 07:43:36.876937 kubelet[2836]: E1009 07:43:36.876468 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:36.877218 kubelet[2836]: W1009 07:43:36.876477 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:36.877218 kubelet[2836]: E1009 07:43:36.876577 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:36.878509 kubelet[2836]: E1009 07:43:36.878465 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:36.878509 kubelet[2836]: W1009 07:43:36.878483 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:36.878509 kubelet[2836]: E1009 07:43:36.878505 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:36.879910 kubelet[2836]: E1009 07:43:36.879715 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:36.879910 kubelet[2836]: W1009 07:43:36.879726 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:36.879910 kubelet[2836]: E1009 07:43:36.879747 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:36.880037 kubelet[2836]: E1009 07:43:36.879929 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:36.880037 kubelet[2836]: W1009 07:43:36.879937 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:36.880037 kubelet[2836]: E1009 07:43:36.879950 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:43:36.880673 kubelet[2836]: E1009 07:43:36.880545 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:36.880673 kubelet[2836]: W1009 07:43:36.880559 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:36.880673 kubelet[2836]: E1009 07:43:36.880572 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:36.881693 kubelet[2836]: E1009 07:43:36.881673 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:36.881693 kubelet[2836]: W1009 07:43:36.881690 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:36.881951 kubelet[2836]: E1009 07:43:36.881705 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:36.881951 kubelet[2836]: I1009 07:43:36.881737 2836 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/27440826-f1e0-45d4-b3d8-3225a63f893c-registration-dir\") pod \"csi-node-driver-qq7rv\" (UID: \"27440826-f1e0-45d4-b3d8-3225a63f893c\") " pod="calico-system/csi-node-driver-qq7rv" Oct 9 07:43:36.882056 kubelet[2836]: E1009 07:43:36.882001 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:36.882056 kubelet[2836]: W1009 07:43:36.882012 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:36.882056 kubelet[2836]: E1009 07:43:36.882027 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:36.884917 kubelet[2836]: E1009 07:43:36.884890 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:36.884917 kubelet[2836]: W1009 07:43:36.884911 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:36.885055 kubelet[2836]: E1009 07:43:36.884933 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:36.940509 containerd[1575]: time="2024-10-09T07:43:36.940462963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hkh7c,Uid:f42b4747-40c9-48fd-b832-02514a989896,Namespace:calico-system,Attempt:0,}" Oct 9 07:43:36.951775 containerd[1575]: time="2024-10-09T07:43:36.951632665Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:43:36.952310 containerd[1575]: time="2024-10-09T07:43:36.951757841Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:43:36.952310 containerd[1575]: time="2024-10-09T07:43:36.951970961Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:43:36.952310 containerd[1575]: time="2024-10-09T07:43:36.951992141Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:43:36.983266 kubelet[2836]: E1009 07:43:36.982795 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:36.983266 kubelet[2836]: W1009 07:43:36.982928 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:36.983266 kubelet[2836]: E1009 07:43:36.982952 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:36.984567 kubelet[2836]: E1009 07:43:36.983800 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:36.984567 kubelet[2836]: W1009 07:43:36.983814 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:36.984567 kubelet[2836]: E1009 07:43:36.983829 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:36.984567 kubelet[2836]: E1009 07:43:36.984517 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:36.984781 kubelet[2836]: W1009 07:43:36.984765 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:36.984856 kubelet[2836]: E1009 07:43:36.984846 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:36.985508 kubelet[2836]: E1009 07:43:36.985394 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:36.985508 kubelet[2836]: W1009 07:43:36.985407 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:36.985508 kubelet[2836]: E1009 07:43:36.985422 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:43:36.986764 kubelet[2836]: E1009 07:43:36.986600 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:36.986764 kubelet[2836]: W1009 07:43:36.986614 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:36.986764 kubelet[2836]: E1009 07:43:36.986630 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:36.988560 kubelet[2836]: E1009 07:43:36.987028 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:36.988560 kubelet[2836]: W1009 07:43:36.987040 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:36.988560 kubelet[2836]: E1009 07:43:36.987054 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:36.988560 kubelet[2836]: E1009 07:43:36.987192 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:36.988560 kubelet[2836]: W1009 07:43:36.987200 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:36.988560 kubelet[2836]: E1009 07:43:36.987212 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:36.988560 kubelet[2836]: E1009 07:43:36.987343 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:36.988560 kubelet[2836]: W1009 07:43:36.987352 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:36.988560 kubelet[2836]: E1009 07:43:36.987364 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:36.988560 kubelet[2836]: E1009 07:43:36.988330 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:36.988930 kubelet[2836]: W1009 07:43:36.988344 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:36.988930 kubelet[2836]: E1009 07:43:36.988364 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:43:36.989769 kubelet[2836]: E1009 07:43:36.989567 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:36.990340 kubelet[2836]: W1009 07:43:36.990004 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:36.990340 kubelet[2836]: E1009 07:43:36.990028 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:36.991782 kubelet[2836]: E1009 07:43:36.991412 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:36.991782 kubelet[2836]: W1009 07:43:36.991430 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:36.991782 kubelet[2836]: E1009 07:43:36.991451 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:36.993743 kubelet[2836]: E1009 07:43:36.992598 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:36.993743 kubelet[2836]: W1009 07:43:36.992614 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:36.993743 kubelet[2836]: E1009 07:43:36.992631 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:36.994430 kubelet[2836]: E1009 07:43:36.994138 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:36.994430 kubelet[2836]: W1009 07:43:36.994152 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:36.994430 kubelet[2836]: E1009 07:43:36.994169 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:36.995191 kubelet[2836]: E1009 07:43:36.994881 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:36.995191 kubelet[2836]: W1009 07:43:36.994893 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:36.995191 kubelet[2836]: E1009 07:43:36.994907 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:43:36.996433 kubelet[2836]: E1009 07:43:36.996021 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:36.996433 kubelet[2836]: W1009 07:43:36.996037 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:36.996433 kubelet[2836]: E1009 07:43:36.996055 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:36.998294 kubelet[2836]: E1009 07:43:36.997644 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:36.998294 kubelet[2836]: W1009 07:43:36.997661 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:36.998294 kubelet[2836]: E1009 07:43:36.997680 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:36.999556 kubelet[2836]: E1009 07:43:36.999214 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:36.999556 kubelet[2836]: W1009 07:43:36.999230 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:36.999556 kubelet[2836]: E1009 07:43:36.999247 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:37.000069 kubelet[2836]: E1009 07:43:37.000057 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:37.000363 kubelet[2836]: W1009 07:43:37.000274 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:37.000363 kubelet[2836]: E1009 07:43:37.000296 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:37.002497 kubelet[2836]: E1009 07:43:37.001848 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:37.002497 kubelet[2836]: W1009 07:43:37.001860 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:37.002497 kubelet[2836]: E1009 07:43:37.001875 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:43:37.003403 kubelet[2836]: E1009 07:43:37.003110 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:37.003403 kubelet[2836]: W1009 07:43:37.003122 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:37.003403 kubelet[2836]: E1009 07:43:37.003136 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:37.006567 kubelet[2836]: E1009 07:43:37.006497 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:37.006567 kubelet[2836]: W1009 07:43:37.006539 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:37.006567 kubelet[2836]: E1009 07:43:37.006565 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:37.009484 kubelet[2836]: E1009 07:43:37.009464 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:37.009484 kubelet[2836]: W1009 07:43:37.009482 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:37.010647 kubelet[2836]: E1009 07:43:37.009510 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:37.010647 kubelet[2836]: E1009 07:43:37.009815 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:37.010647 kubelet[2836]: W1009 07:43:37.009824 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:37.010647 kubelet[2836]: E1009 07:43:37.009868 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:37.012708 kubelet[2836]: E1009 07:43:37.012369 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:37.012708 kubelet[2836]: W1009 07:43:37.012388 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:37.013619 kubelet[2836]: E1009 07:43:37.013596 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:43:37.016162 kubelet[2836]: E1009 07:43:37.015964 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:37.016162 kubelet[2836]: W1009 07:43:37.015983 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:37.016162 kubelet[2836]: E1009 07:43:37.016008 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:37.037667 containerd[1575]: time="2024-10-09T07:43:37.037210328Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:43:37.037667 containerd[1575]: time="2024-10-09T07:43:37.037608477Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:43:37.039179 containerd[1575]: time="2024-10-09T07:43:37.039078906Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:43:37.039179 containerd[1575]: time="2024-10-09T07:43:37.039119713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:43:37.081316 containerd[1575]: time="2024-10-09T07:43:37.081197097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-77f54c7d88-wkzx8,Uid:7a45ffa8-2e8a-4758-a399-abee124379ea,Namespace:calico-system,Attempt:0,} returns sandbox id \"0d684bc0ff3edcca09a3c02c2e94c845e3ba65f0ebcb493adf49bdfdbb3e71e2\"" Oct 9 07:43:37.089078 containerd[1575]: time="2024-10-09T07:43:37.088924727Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\"" Oct 9 07:43:37.089937 kubelet[2836]: E1009 07:43:37.089244 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:37.089937 kubelet[2836]: W1009 07:43:37.089283 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:37.089937 kubelet[2836]: E1009 07:43:37.089304 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:43:37.097170 containerd[1575]: time="2024-10-09T07:43:37.097139675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hkh7c,Uid:f42b4747-40c9-48fd-b832-02514a989896,Namespace:calico-system,Attempt:0,} returns sandbox id \"690530d2e6bb5bd2e8b17fcbaf544f3561fa3ab3e11da8aac02e8376371002d7\"" Oct 9 07:43:37.190058 kubelet[2836]: E1009 07:43:37.190025 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:37.190058 kubelet[2836]: W1009 07:43:37.190045 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:37.190058 kubelet[2836]: E1009 07:43:37.190065 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:37.232697 kubelet[2836]: E1009 07:43:37.232612 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:37.232697 kubelet[2836]: W1009 07:43:37.232632 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:37.232697 kubelet[2836]: E1009 07:43:37.232654 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:38.549201 kubelet[2836]: E1009 07:43:38.549134 2836 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qq7rv" podUID="27440826-f1e0-45d4-b3d8-3225a63f893c" Oct 9 07:43:40.370561 containerd[1575]: time="2024-10-09T07:43:40.369916138Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:43:40.371657 containerd[1575]: time="2024-10-09T07:43:40.371589859Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.1: active requests=0, bytes read=29471335" Oct 9 07:43:40.374386 containerd[1575]: time="2024-10-09T07:43:40.374163545Z" level=info msg="ImageCreate event name:\"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:43:40.377768 containerd[1575]: time="2024-10-09T07:43:40.377742863Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:43:40.379434 containerd[1575]: time="2024-10-09T07:43:40.379215606Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.1\" with image id \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\", size \"30963728\" in 3.290252276s" Oct 9 07:43:40.379434 containerd[1575]: 
time="2024-10-09T07:43:40.379256563Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\" returns image reference \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\"" Oct 9 07:43:40.380609 containerd[1575]: time="2024-10-09T07:43:40.380577831Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\"" Oct 9 07:43:40.401240 containerd[1575]: time="2024-10-09T07:43:40.401000839Z" level=info msg="CreateContainer within sandbox \"0d684bc0ff3edcca09a3c02c2e94c845e3ba65f0ebcb493adf49bdfdbb3e71e2\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 9 07:43:40.429917 containerd[1575]: time="2024-10-09T07:43:40.429863647Z" level=info msg="CreateContainer within sandbox \"0d684bc0ff3edcca09a3c02c2e94c845e3ba65f0ebcb493adf49bdfdbb3e71e2\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"940a85b317635a44da5455ee2950454cf74f1fb7f419693859186de0b1568594\"" Oct 9 07:43:40.431375 containerd[1575]: time="2024-10-09T07:43:40.431333284Z" level=info msg="StartContainer for \"940a85b317635a44da5455ee2950454cf74f1fb7f419693859186de0b1568594\"" Oct 9 07:43:40.516845 containerd[1575]: time="2024-10-09T07:43:40.516785107Z" level=info msg="StartContainer for \"940a85b317635a44da5455ee2950454cf74f1fb7f419693859186de0b1568594\" returns successfully" Oct 9 07:43:40.549836 kubelet[2836]: E1009 07:43:40.549805 2836 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qq7rv" podUID="27440826-f1e0-45d4-b3d8-3225a63f893c" Oct 9 07:43:40.685843 kubelet[2836]: I1009 07:43:40.685082 2836 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-77f54c7d88-wkzx8" podStartSLOduration=1.39089311 podStartE2EDuration="4.684855701s" podCreationTimestamp="2024-10-09 07:43:36 +0000 UTC" firstStartedPulling="2024-10-09 07:43:37.086024276 +0000 UTC m=+21.882116929" lastFinishedPulling="2024-10-09 07:43:40.379986868 +0000 UTC m=+25.176079520" observedRunningTime="2024-10-09 07:43:40.683211946 +0000 UTC m=+25.479304598" watchObservedRunningTime="2024-10-09 07:43:40.684855701 +0000 UTC m=+25.480948353" Oct 9 07:43:40.694168 kubelet[2836]: E1009 07:43:40.694139 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:40.694242 kubelet[2836]: W1009 07:43:40.694178 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:40.694242 kubelet[2836]: E1009 07:43:40.694198 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:43:40.694411 kubelet[2836]: E1009 07:43:40.694392 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:40.694411 kubelet[2836]: W1009 07:43:40.694406 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:40.694475 kubelet[2836]: E1009 07:43:40.694436 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:40.694647 kubelet[2836]: E1009 07:43:40.694624 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:40.694647 kubelet[2836]: W1009 07:43:40.694638 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:40.694728 kubelet[2836]: E1009 07:43:40.694650 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:40.694861 kubelet[2836]: E1009 07:43:40.694834 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:40.694861 kubelet[2836]: W1009 07:43:40.694847 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:40.694861 kubelet[2836]: E1009 07:43:40.694859 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:40.695050 kubelet[2836]: E1009 07:43:40.695037 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:40.695050 kubelet[2836]: W1009 07:43:40.695049 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:40.695108 kubelet[2836]: E1009 07:43:40.695060 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:40.695235 kubelet[2836]: E1009 07:43:40.695222 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:40.695235 kubelet[2836]: W1009 07:43:40.695234 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:40.695293 kubelet[2836]: E1009 07:43:40.695245 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:43:40.695429 kubelet[2836]: E1009 07:43:40.695415 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:40.695429 kubelet[2836]: W1009 07:43:40.695427 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:40.695500 kubelet[2836]: E1009 07:43:40.695439 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:40.695640 kubelet[2836]: E1009 07:43:40.695617 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:40.695640 kubelet[2836]: W1009 07:43:40.695630 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:40.695640 kubelet[2836]: E1009 07:43:40.695641 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:40.695850 kubelet[2836]: E1009 07:43:40.695836 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:40.695850 kubelet[2836]: W1009 07:43:40.695849 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:40.695912 kubelet[2836]: E1009 07:43:40.695861 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:40.696043 kubelet[2836]: E1009 07:43:40.696029 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:40.696043 kubelet[2836]: W1009 07:43:40.696041 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:40.696100 kubelet[2836]: E1009 07:43:40.696053 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:40.696231 kubelet[2836]: E1009 07:43:40.696217 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:40.696231 kubelet[2836]: W1009 07:43:40.696229 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:40.696302 kubelet[2836]: E1009 07:43:40.696241 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:43:40.696435 kubelet[2836]: E1009 07:43:40.696412 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:40.696435 kubelet[2836]: W1009 07:43:40.696425 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:40.696435 kubelet[2836]: E1009 07:43:40.696436 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:40.696645 kubelet[2836]: E1009 07:43:40.696629 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:40.696645 kubelet[2836]: W1009 07:43:40.696641 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:40.696721 kubelet[2836]: E1009 07:43:40.696653 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:40.696847 kubelet[2836]: E1009 07:43:40.696818 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:40.696847 kubelet[2836]: W1009 07:43:40.696832 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:40.696924 kubelet[2836]: E1009 07:43:40.696844 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:40.697076 kubelet[2836]: E1009 07:43:40.697047 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:40.697076 kubelet[2836]: W1009 07:43:40.697074 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:40.697157 kubelet[2836]: E1009 07:43:40.697088 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:40.722671 kubelet[2836]: E1009 07:43:40.722644 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:40.722671 kubelet[2836]: W1009 07:43:40.722664 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:40.722671 kubelet[2836]: E1009 07:43:40.722684 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:43:40.723753 kubelet[2836]: E1009 07:43:40.723435 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:40.723753 kubelet[2836]: W1009 07:43:40.723451 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:40.723753 kubelet[2836]: E1009 07:43:40.723481 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:40.723876 kubelet[2836]: E1009 07:43:40.723788 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:40.723876 kubelet[2836]: W1009 07:43:40.723798 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:40.723876 kubelet[2836]: E1009 07:43:40.723827 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:40.724061 kubelet[2836]: E1009 07:43:40.724023 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:40.724061 kubelet[2836]: W1009 07:43:40.724038 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:40.724061 kubelet[2836]: E1009 07:43:40.724062 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:40.724261 kubelet[2836]: E1009 07:43:40.724244 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:40.724261 kubelet[2836]: W1009 07:43:40.724258 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:40.724328 kubelet[2836]: E1009 07:43:40.724276 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:40.724499 kubelet[2836]: E1009 07:43:40.724481 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:40.724499 kubelet[2836]: W1009 07:43:40.724496 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:40.724671 kubelet[2836]: E1009 07:43:40.724579 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:43:40.725149 kubelet[2836]: E1009 07:43:40.724813 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:40.725149 kubelet[2836]: W1009 07:43:40.724829 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:40.725149 kubelet[2836]: E1009 07:43:40.724924 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:40.725149 kubelet[2836]: E1009 07:43:40.725084 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:40.725149 kubelet[2836]: W1009 07:43:40.725092 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:40.725346 kubelet[2836]: E1009 07:43:40.725177 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:40.725346 kubelet[2836]: E1009 07:43:40.725271 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:40.725346 kubelet[2836]: W1009 07:43:40.725279 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:40.725346 kubelet[2836]: E1009 07:43:40.725306 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:40.726567 kubelet[2836]: E1009 07:43:40.725718 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:40.726567 kubelet[2836]: W1009 07:43:40.725733 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:40.726567 kubelet[2836]: E1009 07:43:40.725748 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:40.726567 kubelet[2836]: E1009 07:43:40.725932 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:40.726567 kubelet[2836]: W1009 07:43:40.725940 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:40.726567 kubelet[2836]: E1009 07:43:40.725964 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:43:40.726567 kubelet[2836]: E1009 07:43:40.726145 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:40.726567 kubelet[2836]: W1009 07:43:40.726153 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:40.726567 kubelet[2836]: E1009 07:43:40.726178 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:40.726872 kubelet[2836]: E1009 07:43:40.726602 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:40.726872 kubelet[2836]: W1009 07:43:40.726612 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:40.726872 kubelet[2836]: E1009 07:43:40.726628 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:40.726872 kubelet[2836]: E1009 07:43:40.726771 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:40.726872 kubelet[2836]: W1009 07:43:40.726779 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:40.726872 kubelet[2836]: E1009 07:43:40.726808 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:40.727047 kubelet[2836]: E1009 07:43:40.726970 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:40.727047 kubelet[2836]: W1009 07:43:40.726978 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:40.727047 kubelet[2836]: E1009 07:43:40.727003 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:40.727829 kubelet[2836]: E1009 07:43:40.727179 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:40.727829 kubelet[2836]: W1009 07:43:40.727194 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:40.727829 kubelet[2836]: E1009 07:43:40.727217 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:43:40.727829 kubelet[2836]: E1009 07:43:40.727492 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:40.727829 kubelet[2836]: W1009 07:43:40.727501 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:40.727829 kubelet[2836]: E1009 07:43:40.727516 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:40.727829 kubelet[2836]: E1009 07:43:40.727774 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:40.727829 kubelet[2836]: W1009 07:43:40.727783 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:40.727829 kubelet[2836]: E1009 07:43:40.727795 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:41.670348 kubelet[2836]: I1009 07:43:41.669889 2836 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 9 07:43:41.705915 kubelet[2836]: E1009 07:43:41.705891 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:41.706161 kubelet[2836]: W1009 07:43:41.706049 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:41.706161 kubelet[2836]: E1009 07:43:41.706075 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:41.706318 kubelet[2836]: E1009 07:43:41.706307 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:41.706386 kubelet[2836]: W1009 07:43:41.706375 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:41.706478 kubelet[2836]: E1009 07:43:41.706441 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:41.706830 kubelet[2836]: E1009 07:43:41.706732 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:41.706830 kubelet[2836]: W1009 07:43:41.706743 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:41.706830 kubelet[2836]: E1009 07:43:41.706756 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:43:41.707105 kubelet[2836]: E1009 07:43:41.707000 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:41.707105 kubelet[2836]: W1009 07:43:41.707011 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:41.707105 kubelet[2836]: E1009 07:43:41.707024 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:41.707274 kubelet[2836]: E1009 07:43:41.707262 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:41.707336 kubelet[2836]: W1009 07:43:41.707326 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:41.707405 kubelet[2836]: E1009 07:43:41.707395 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:41.707746 kubelet[2836]: E1009 07:43:41.707652 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:41.707746 kubelet[2836]: W1009 07:43:41.707663 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:41.707746 kubelet[2836]: E1009 07:43:41.707676 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:41.707918 kubelet[2836]: E1009 07:43:41.707907 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:41.707981 kubelet[2836]: W1009 07:43:41.707971 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:41.708122 kubelet[2836]: E1009 07:43:41.708040 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:41.708231 kubelet[2836]: E1009 07:43:41.708221 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:41.708295 kubelet[2836]: W1009 07:43:41.708284 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:41.708358 kubelet[2836]: E1009 07:43:41.708350 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:43:41.708666 kubelet[2836]: E1009 07:43:41.708603 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:41.708666 kubelet[2836]: W1009 07:43:41.708614 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:41.708666 kubelet[2836]: E1009 07:43:41.708627 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:41.709051 kubelet[2836]: E1009 07:43:41.708957 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:41.709051 kubelet[2836]: W1009 07:43:41.708968 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:41.709051 kubelet[2836]: E1009 07:43:41.708981 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:41.709319 kubelet[2836]: E1009 07:43:41.709220 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:41.709319 kubelet[2836]: W1009 07:43:41.709231 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:41.709319 kubelet[2836]: E1009 07:43:41.709246 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:41.709489 kubelet[2836]: E1009 07:43:41.709477 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:41.709582 kubelet[2836]: W1009 07:43:41.709571 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:41.709728 kubelet[2836]: E1009 07:43:41.709633 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:41.709855 kubelet[2836]: E1009 07:43:41.709844 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:41.709922 kubelet[2836]: W1009 07:43:41.709912 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:41.709987 kubelet[2836]: E1009 07:43:41.709979 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:43:41.710298 kubelet[2836]: E1009 07:43:41.710202 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:41.710298 kubelet[2836]: W1009 07:43:41.710213 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:41.710298 kubelet[2836]: E1009 07:43:41.710226 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:41.710475 kubelet[2836]: E1009 07:43:41.710463 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:41.710621 kubelet[2836]: W1009 07:43:41.710560 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:41.710621 kubelet[2836]: E1009 07:43:41.710579 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:41.733455 kubelet[2836]: E1009 07:43:41.733266 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:41.733455 kubelet[2836]: W1009 07:43:41.733298 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:41.733455 kubelet[2836]: E1009 07:43:41.733322 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:41.733926 kubelet[2836]: E1009 07:43:41.733902 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:41.733926 kubelet[2836]: W1009 07:43:41.733914 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:41.733926 kubelet[2836]: E1009 07:43:41.733964 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:41.734407 kubelet[2836]: E1009 07:43:41.734397 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:41.734587 kubelet[2836]: W1009 07:43:41.734478 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:41.734587 kubelet[2836]: E1009 07:43:41.734503 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:43:41.735007 kubelet[2836]: E1009 07:43:41.734928 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:41.735007 kubelet[2836]: W1009 07:43:41.734940 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:41.735007 kubelet[2836]: E1009 07:43:41.734961 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:41.735443 kubelet[2836]: E1009 07:43:41.735329 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:41.735443 kubelet[2836]: W1009 07:43:41.735365 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:41.735443 kubelet[2836]: E1009 07:43:41.735385 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:41.735907 kubelet[2836]: E1009 07:43:41.735836 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:41.735907 kubelet[2836]: W1009 07:43:41.735847 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:41.736810 kubelet[2836]: E1009 07:43:41.736779 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:41.736896 kubelet[2836]: E1009 07:43:41.736884 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:41.736896 kubelet[2836]: W1009 07:43:41.736892 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:41.737013 kubelet[2836]: E1009 07:43:41.736978 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:41.737712 kubelet[2836]: E1009 07:43:41.737694 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:41.737712 kubelet[2836]: W1009 07:43:41.737713 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:41.737813 kubelet[2836]: E1009 07:43:41.737803 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:43:41.737918 kubelet[2836]: E1009 07:43:41.737902 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:41.737918 kubelet[2836]: W1009 07:43:41.737916 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:41.737980 kubelet[2836]: E1009 07:43:41.737945 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:41.739446 kubelet[2836]: E1009 07:43:41.739423 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:41.739571 kubelet[2836]: W1009 07:43:41.739448 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:41.739571 kubelet[2836]: E1009 07:43:41.739469 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:41.740805 kubelet[2836]: E1009 07:43:41.740700 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:41.740805 kubelet[2836]: W1009 07:43:41.740722 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:41.740805 kubelet[2836]: E1009 07:43:41.740740 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:41.742063 kubelet[2836]: E1009 07:43:41.741015 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:41.742063 kubelet[2836]: W1009 07:43:41.741024 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:41.742063 kubelet[2836]: E1009 07:43:41.741045 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:41.742063 kubelet[2836]: E1009 07:43:41.741700 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:41.742063 kubelet[2836]: W1009 07:43:41.741710 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:41.742063 kubelet[2836]: E1009 07:43:41.741723 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:43:41.743297 kubelet[2836]: E1009 07:43:41.743272 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:41.743364 kubelet[2836]: W1009 07:43:41.743346 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:41.743396 kubelet[2836]: E1009 07:43:41.743374 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:41.745203 kubelet[2836]: E1009 07:43:41.743611 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:41.745203 kubelet[2836]: W1009 07:43:41.743626 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:41.745203 kubelet[2836]: E1009 07:43:41.743649 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:41.745203 kubelet[2836]: E1009 07:43:41.743864 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:41.745203 kubelet[2836]: W1009 07:43:41.743874 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:41.745203 kubelet[2836]: E1009 07:43:41.743893 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:41.745203 kubelet[2836]: E1009 07:43:41.744190 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:41.745203 kubelet[2836]: W1009 07:43:41.744199 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:41.745203 kubelet[2836]: E1009 07:43:41.744212 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:43:41.745203 kubelet[2836]: E1009 07:43:41.744365 2836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:43:41.745753 kubelet[2836]: W1009 07:43:41.744374 2836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:43:41.745753 kubelet[2836]: E1009 07:43:41.744389 2836 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:43:42.165835 containerd[1575]: time="2024-10-09T07:43:42.165765400Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1: active requests=0, bytes read=5141007" Oct 9 07:43:42.166876 containerd[1575]: time="2024-10-09T07:43:42.166324933Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:43:42.168776 containerd[1575]: time="2024-10-09T07:43:42.168741413Z" level=info msg="ImageCreate event name:\"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:43:42.169504 containerd[1575]: time="2024-10-09T07:43:42.169460416Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:43:42.170609 containerd[1575]: time="2024-10-09T07:43:42.170575856Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" with image id \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\", size \"6633368\" in 1.789960675s" Oct 9 07:43:42.170673 containerd[1575]: time="2024-10-09T07:43:42.170611783Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" returns image reference \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\"" Oct 9 07:43:42.174770 containerd[1575]: time="2024-10-09T07:43:42.174610751Z" level=info msg="CreateContainer within sandbox \"690530d2e6bb5bd2e8b17fcbaf544f3561fa3ab3e11da8aac02e8376371002d7\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 9 07:43:42.207002 containerd[1575]: time="2024-10-09T07:43:42.206924391Z" level=info msg="CreateContainer within sandbox \"690530d2e6bb5bd2e8b17fcbaf544f3561fa3ab3e11da8aac02e8376371002d7\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"cebdc0cd270ae1206fbb4460f9be40cc04f29f47af5e12a6fe4f9e9b9b7bcc03\"" Oct 9 07:43:42.209867 containerd[1575]: time="2024-10-09T07:43:42.207644307Z" level=info msg="StartContainer for \"cebdc0cd270ae1206fbb4460f9be40cc04f29f47af5e12a6fe4f9e9b9b7bcc03\"" Oct 9 07:43:42.291861 containerd[1575]: time="2024-10-09T07:43:42.291787176Z" level=info msg="StartContainer for \"cebdc0cd270ae1206fbb4460f9be40cc04f29f47af5e12a6fe4f9e9b9b7bcc03\" returns successfully" Oct 9 07:43:42.332866 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cebdc0cd270ae1206fbb4460f9be40cc04f29f47af5e12a6fe4f9e9b9b7bcc03-rootfs.mount: Deactivated successfully. 
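The repeated driver-call.go and plugins.go messages above come from the kubelet probing the FlexVolume plugin directory (/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/) before the executable exists there. A FlexVolume driver is expected to answer an "init" invocation with a JSON status object on stdout, so an executable that is missing (or writes nothing) produces exactly the "unexpected end of JSON input" error seen in this log. As a rough, hypothetical illustration of that contract (this is not the actual nodeagent~uds binary, whose behaviour the log does not show), a minimal driver could look like the sketch below; Calico's flexvol-driver container, started just above from the pod2daemon-flexvol image, is what typically installs the real binary, so these probe failures are normally transient.

    #!/usr/bin/env python3
    # Minimal, illustrative FlexVolume driver skeleton (hypothetical; not the
    # real nodeagent~uds executable). The kubelet invokes the driver with a
    # subcommand such as "init" and parses the JSON object written to stdout;
    # an empty stdout yields the "unexpected end of JSON input" errors above.
    import json
    import sys

    def main() -> int:
        cmd = sys.argv[1] if len(sys.argv) > 1 else ""
        if cmd == "init":
            # Report success and advertise that no separate attach step is needed.
            print(json.dumps({"status": "Success",
                              "capabilities": {"attach": False}}))
        else:
            # Calls the driver does not implement must still return valid JSON.
            print(json.dumps({"status": "Not supported",
                              "message": "call %r not implemented" % cmd}))
        return 0

    if __name__ == "__main__":
        sys.exit(main())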
Oct 9 07:43:42.549682 kubelet[2836]: E1009 07:43:42.549570 2836 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qq7rv" podUID="27440826-f1e0-45d4-b3d8-3225a63f893c" Oct 9 07:43:43.033488 containerd[1575]: time="2024-10-09T07:43:43.033300241Z" level=info msg="shim disconnected" id=cebdc0cd270ae1206fbb4460f9be40cc04f29f47af5e12a6fe4f9e9b9b7bcc03 namespace=k8s.io Oct 9 07:43:43.033488 containerd[1575]: time="2024-10-09T07:43:43.033399458Z" level=warning msg="cleaning up after shim disconnected" id=cebdc0cd270ae1206fbb4460f9be40cc04f29f47af5e12a6fe4f9e9b9b7bcc03 namespace=k8s.io Oct 9 07:43:43.033488 containerd[1575]: time="2024-10-09T07:43:43.033424926Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 07:43:43.066783 containerd[1575]: time="2024-10-09T07:43:43.066678423Z" level=warning msg="cleanup warnings time=\"2024-10-09T07:43:43Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Oct 9 07:43:43.684058 containerd[1575]: time="2024-10-09T07:43:43.683997332Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\"" Oct 9 07:43:44.550020 kubelet[2836]: E1009 07:43:44.549806 2836 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qq7rv" podUID="27440826-f1e0-45d4-b3d8-3225a63f893c" Oct 9 07:43:46.550540 kubelet[2836]: E1009 07:43:46.549730 2836 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qq7rv" podUID="27440826-f1e0-45d4-b3d8-3225a63f893c" Oct 9 07:43:48.548939 kubelet[2836]: E1009 07:43:48.548885 2836 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qq7rv" podUID="27440826-f1e0-45d4-b3d8-3225a63f893c" Oct 9 07:43:50.042642 containerd[1575]: time="2024-10-09T07:43:50.042503684Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:43:50.046060 containerd[1575]: time="2024-10-09T07:43:50.045494964Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.1: active requests=0, bytes read=93083736" Oct 9 07:43:50.047594 containerd[1575]: time="2024-10-09T07:43:50.047353724Z" level=info msg="ImageCreate event name:\"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:43:50.055977 containerd[1575]: time="2024-10-09T07:43:50.055897030Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:43:50.059327 containerd[1575]: time="2024-10-09T07:43:50.058953212Z" 
level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.1\" with image id \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\", size \"94576137\" in 6.374880027s" Oct 9 07:43:50.059327 containerd[1575]: time="2024-10-09T07:43:50.059019576Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\" returns image reference \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\"" Oct 9 07:43:50.064339 containerd[1575]: time="2024-10-09T07:43:50.064117401Z" level=info msg="CreateContainer within sandbox \"690530d2e6bb5bd2e8b17fcbaf544f3561fa3ab3e11da8aac02e8376371002d7\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 9 07:43:50.145233 containerd[1575]: time="2024-10-09T07:43:50.145180872Z" level=info msg="CreateContainer within sandbox \"690530d2e6bb5bd2e8b17fcbaf544f3561fa3ab3e11da8aac02e8376371002d7\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"3b434bb06d677e3c2be4304023825af56c084c48a5336f2e80a6b87ec5c093e0\"" Oct 9 07:43:50.152583 containerd[1575]: time="2024-10-09T07:43:50.152033131Z" level=info msg="StartContainer for \"3b434bb06d677e3c2be4304023825af56c084c48a5336f2e80a6b87ec5c093e0\"" Oct 9 07:43:50.329400 containerd[1575]: time="2024-10-09T07:43:50.329304658Z" level=info msg="StartContainer for \"3b434bb06d677e3c2be4304023825af56c084c48a5336f2e80a6b87ec5c093e0\" returns successfully" Oct 9 07:43:50.549221 kubelet[2836]: E1009 07:43:50.549154 2836 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qq7rv" podUID="27440826-f1e0-45d4-b3d8-3225a63f893c" Oct 9 07:43:52.340755 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3b434bb06d677e3c2be4304023825af56c084c48a5336f2e80a6b87ec5c093e0-rootfs.mount: Deactivated successfully. 
Oct 9 07:43:52.345255 containerd[1575]: time="2024-10-09T07:43:52.345099505Z" level=info msg="shim disconnected" id=3b434bb06d677e3c2be4304023825af56c084c48a5336f2e80a6b87ec5c093e0 namespace=k8s.io Oct 9 07:43:52.345255 containerd[1575]: time="2024-10-09T07:43:52.345149008Z" level=warning msg="cleaning up after shim disconnected" id=3b434bb06d677e3c2be4304023825af56c084c48a5336f2e80a6b87ec5c093e0 namespace=k8s.io Oct 9 07:43:52.345255 containerd[1575]: time="2024-10-09T07:43:52.345158826Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 07:43:52.392670 kubelet[2836]: I1009 07:43:52.391824 2836 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Oct 9 07:43:52.431861 kubelet[2836]: I1009 07:43:52.431124 2836 topology_manager.go:215] "Topology Admit Handler" podUID="9094bb4d-511f-44a7-8e94-3160d3cf1a33" podNamespace="kube-system" podName="coredns-76f75df574-wxf26" Oct 9 07:43:52.436672 kubelet[2836]: I1009 07:43:52.434450 2836 topology_manager.go:215] "Topology Admit Handler" podUID="5c1237f3-306f-4b76-aa33-787e19b1ef8a" podNamespace="kube-system" podName="coredns-76f75df574-swjxh" Oct 9 07:43:52.440486 kubelet[2836]: I1009 07:43:52.438629 2836 topology_manager.go:215] "Topology Admit Handler" podUID="02e73bd3-d833-4c42-ada0-6af82baf1ace" podNamespace="calico-system" podName="calico-kube-controllers-5794c8997c-zjcp4" Oct 9 07:43:52.520664 kubelet[2836]: I1009 07:43:52.520636 2836 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c1237f3-306f-4b76-aa33-787e19b1ef8a-config-volume\") pod \"coredns-76f75df574-swjxh\" (UID: \"5c1237f3-306f-4b76-aa33-787e19b1ef8a\") " pod="kube-system/coredns-76f75df574-swjxh" Oct 9 07:43:52.520993 kubelet[2836]: I1009 07:43:52.520980 2836 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjzsj\" (UniqueName: \"kubernetes.io/projected/9094bb4d-511f-44a7-8e94-3160d3cf1a33-kube-api-access-xjzsj\") pod \"coredns-76f75df574-wxf26\" (UID: \"9094bb4d-511f-44a7-8e94-3160d3cf1a33\") " pod="kube-system/coredns-76f75df574-wxf26" Oct 9 07:43:52.521317 kubelet[2836]: I1009 07:43:52.521117 2836 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dnm2\" (UniqueName: \"kubernetes.io/projected/02e73bd3-d833-4c42-ada0-6af82baf1ace-kube-api-access-2dnm2\") pod \"calico-kube-controllers-5794c8997c-zjcp4\" (UID: \"02e73bd3-d833-4c42-ada0-6af82baf1ace\") " pod="calico-system/calico-kube-controllers-5794c8997c-zjcp4" Oct 9 07:43:52.521317 kubelet[2836]: I1009 07:43:52.521189 2836 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nczbl\" (UniqueName: \"kubernetes.io/projected/5c1237f3-306f-4b76-aa33-787e19b1ef8a-kube-api-access-nczbl\") pod \"coredns-76f75df574-swjxh\" (UID: \"5c1237f3-306f-4b76-aa33-787e19b1ef8a\") " pod="kube-system/coredns-76f75df574-swjxh" Oct 9 07:43:52.521317 kubelet[2836]: I1009 07:43:52.521220 2836 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/02e73bd3-d833-4c42-ada0-6af82baf1ace-tigera-ca-bundle\") pod \"calico-kube-controllers-5794c8997c-zjcp4\" (UID: \"02e73bd3-d833-4c42-ada0-6af82baf1ace\") " pod="calico-system/calico-kube-controllers-5794c8997c-zjcp4" Oct 9 07:43:52.521317 kubelet[2836]: I1009 07:43:52.521260 2836 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9094bb4d-511f-44a7-8e94-3160d3cf1a33-config-volume\") pod \"coredns-76f75df574-wxf26\" (UID: \"9094bb4d-511f-44a7-8e94-3160d3cf1a33\") " pod="kube-system/coredns-76f75df574-wxf26" Oct 9 07:43:52.553931 containerd[1575]: time="2024-10-09T07:43:52.553739305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qq7rv,Uid:27440826-f1e0-45d4-b3d8-3225a63f893c,Namespace:calico-system,Attempt:0,}" Oct 9 07:43:52.716105 containerd[1575]: time="2024-10-09T07:43:52.715612254Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\"" Oct 9 07:43:52.746488 containerd[1575]: time="2024-10-09T07:43:52.746436219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-swjxh,Uid:5c1237f3-306f-4b76-aa33-787e19b1ef8a,Namespace:kube-system,Attempt:0,}" Oct 9 07:43:52.749973 containerd[1575]: time="2024-10-09T07:43:52.749923129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-wxf26,Uid:9094bb4d-511f-44a7-8e94-3160d3cf1a33,Namespace:kube-system,Attempt:0,}" Oct 9 07:43:52.753494 containerd[1575]: time="2024-10-09T07:43:52.753302859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5794c8997c-zjcp4,Uid:02e73bd3-d833-4c42-ada0-6af82baf1ace,Namespace:calico-system,Attempt:0,}" Oct 9 07:43:52.970471 containerd[1575]: time="2024-10-09T07:43:52.970279040Z" level=error msg="Failed to destroy network for sandbox \"18b3e3f3469aa95a5537c35318081ddbb84ac466da19ad19c3d15adf1062c69b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:43:52.980794 containerd[1575]: time="2024-10-09T07:43:52.980737156Z" level=error msg="encountered an error cleaning up failed sandbox \"18b3e3f3469aa95a5537c35318081ddbb84ac466da19ad19c3d15adf1062c69b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:43:52.982472 containerd[1575]: time="2024-10-09T07:43:52.982077927Z" level=error msg="Failed to destroy network for sandbox \"7fa5396b2d335ed64fc25b8f185d8f59fcbfb2aa16b20fc01aa95073b2aa8a05\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:43:52.982472 containerd[1575]: time="2024-10-09T07:43:52.982401253Z" level=error msg="encountered an error cleaning up failed sandbox \"7fa5396b2d335ed64fc25b8f185d8f59fcbfb2aa16b20fc01aa95073b2aa8a05\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:43:53.015037 containerd[1575]: time="2024-10-09T07:43:53.014682450Z" level=error msg="Failed to destroy network for sandbox \"55a9fbbdf6f49909f0a562ec22933ab6c70f25fcf9d71fb9322f315c8c117886\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:43:53.015190 containerd[1575]: 
time="2024-10-09T07:43:53.015063855Z" level=error msg="encountered an error cleaning up failed sandbox \"55a9fbbdf6f49909f0a562ec22933ab6c70f25fcf9d71fb9322f315c8c117886\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:43:53.015190 containerd[1575]: time="2024-10-09T07:43:53.015128476Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qq7rv,Uid:27440826-f1e0-45d4-b3d8-3225a63f893c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"55a9fbbdf6f49909f0a562ec22933ab6c70f25fcf9d71fb9322f315c8c117886\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:43:53.015334 containerd[1575]: time="2024-10-09T07:43:53.015233332Z" level=error msg="Failed to destroy network for sandbox \"84624294e1c71732ae60bf3020ea314b3599d3be22a97eafb851758ab98dea88\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:43:53.015882 containerd[1575]: time="2024-10-09T07:43:53.015463112Z" level=error msg="encountered an error cleaning up failed sandbox \"84624294e1c71732ae60bf3020ea314b3599d3be22a97eafb851758ab98dea88\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:43:53.015882 containerd[1575]: time="2024-10-09T07:43:53.015508047Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-wxf26,Uid:9094bb4d-511f-44a7-8e94-3160d3cf1a33,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"84624294e1c71732ae60bf3020ea314b3599d3be22a97eafb851758ab98dea88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:43:53.015882 containerd[1575]: time="2024-10-09T07:43:53.015597054Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-swjxh,Uid:5c1237f3-306f-4b76-aa33-787e19b1ef8a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"18b3e3f3469aa95a5537c35318081ddbb84ac466da19ad19c3d15adf1062c69b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:43:53.015882 containerd[1575]: time="2024-10-09T07:43:53.015658148Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5794c8997c-zjcp4,Uid:02e73bd3-d833-4c42-ada0-6af82baf1ace,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7fa5396b2d335ed64fc25b8f185d8f59fcbfb2aa16b20fc01aa95073b2aa8a05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:43:53.016453 kubelet[2836]: E1009 07:43:53.015759 2836 remote_runtime.go:193] "RunPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55a9fbbdf6f49909f0a562ec22933ab6c70f25fcf9d71fb9322f315c8c117886\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:43:53.016453 kubelet[2836]: E1009 07:43:53.016206 2836 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55a9fbbdf6f49909f0a562ec22933ab6c70f25fcf9d71fb9322f315c8c117886\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qq7rv" Oct 9 07:43:53.016453 kubelet[2836]: E1009 07:43:53.016343 2836 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18b3e3f3469aa95a5537c35318081ddbb84ac466da19ad19c3d15adf1062c69b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:43:53.016723 kubelet[2836]: E1009 07:43:53.016597 2836 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55a9fbbdf6f49909f0a562ec22933ab6c70f25fcf9d71fb9322f315c8c117886\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qq7rv" Oct 9 07:43:53.016939 kubelet[2836]: E1009 07:43:53.016792 2836 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18b3e3f3469aa95a5537c35318081ddbb84ac466da19ad19c3d15adf1062c69b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-swjxh" Oct 9 07:43:53.016939 kubelet[2836]: E1009 07:43:53.016821 2836 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18b3e3f3469aa95a5537c35318081ddbb84ac466da19ad19c3d15adf1062c69b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-swjxh" Oct 9 07:43:53.016939 kubelet[2836]: E1009 07:43:53.016902 2836 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qq7rv_calico-system(27440826-f1e0-45d4-b3d8-3225a63f893c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qq7rv_calico-system(27440826-f1e0-45d4-b3d8-3225a63f893c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"55a9fbbdf6f49909f0a562ec22933ab6c70f25fcf9d71fb9322f315c8c117886\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qq7rv" podUID="27440826-f1e0-45d4-b3d8-3225a63f893c" Oct 9 07:43:53.018026 
kubelet[2836]: E1009 07:43:53.017158 2836 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-swjxh_kube-system(5c1237f3-306f-4b76-aa33-787e19b1ef8a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-swjxh_kube-system(5c1237f3-306f-4b76-aa33-787e19b1ef8a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"18b3e3f3469aa95a5537c35318081ddbb84ac466da19ad19c3d15adf1062c69b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-swjxh" podUID="5c1237f3-306f-4b76-aa33-787e19b1ef8a" Oct 9 07:43:53.018026 kubelet[2836]: E1009 07:43:53.017708 2836 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fa5396b2d335ed64fc25b8f185d8f59fcbfb2aa16b20fc01aa95073b2aa8a05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:43:53.018026 kubelet[2836]: E1009 07:43:53.017768 2836 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fa5396b2d335ed64fc25b8f185d8f59fcbfb2aa16b20fc01aa95073b2aa8a05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5794c8997c-zjcp4" Oct 9 07:43:53.018216 kubelet[2836]: E1009 07:43:53.017796 2836 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fa5396b2d335ed64fc25b8f185d8f59fcbfb2aa16b20fc01aa95073b2aa8a05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5794c8997c-zjcp4" Oct 9 07:43:53.018216 kubelet[2836]: E1009 07:43:53.017856 2836 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5794c8997c-zjcp4_calico-system(02e73bd3-d833-4c42-ada0-6af82baf1ace)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5794c8997c-zjcp4_calico-system(02e73bd3-d833-4c42-ada0-6af82baf1ace)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7fa5396b2d335ed64fc25b8f185d8f59fcbfb2aa16b20fc01aa95073b2aa8a05\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5794c8997c-zjcp4" podUID="02e73bd3-d833-4c42-ada0-6af82baf1ace" Oct 9 07:43:53.018216 kubelet[2836]: E1009 07:43:53.017896 2836 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84624294e1c71732ae60bf3020ea314b3599d3be22a97eafb851758ab98dea88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 
07:43:53.018331 kubelet[2836]: E1009 07:43:53.017920 2836 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84624294e1c71732ae60bf3020ea314b3599d3be22a97eafb851758ab98dea88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-wxf26" Oct 9 07:43:53.018331 kubelet[2836]: E1009 07:43:53.017941 2836 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84624294e1c71732ae60bf3020ea314b3599d3be22a97eafb851758ab98dea88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-wxf26" Oct 9 07:43:53.018331 kubelet[2836]: E1009 07:43:53.017978 2836 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-wxf26_kube-system(9094bb4d-511f-44a7-8e94-3160d3cf1a33)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-wxf26_kube-system(9094bb4d-511f-44a7-8e94-3160d3cf1a33)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"84624294e1c71732ae60bf3020ea314b3599d3be22a97eafb851758ab98dea88\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-wxf26" podUID="9094bb4d-511f-44a7-8e94-3160d3cf1a33" Oct 9 07:43:53.354134 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-55a9fbbdf6f49909f0a562ec22933ab6c70f25fcf9d71fb9322f315c8c117886-shm.mount: Deactivated successfully. 
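[Editor's annotation] All four sandboxes fail the same way on both the add and the delete path: the Calico CNI plugin needs the node's name from /var/lib/calico/nodename, a file maintained by the calico/node container, and until that container runs the stat fails exactly as the error text says. A toy version of that gate (path taken from the error messages above; the real plugin can also take the node name from its CNI config, which is omitted here):

    // nodenamegate.go - simplified model of the check that is failing in the log:
    // "stat /var/lib/calico/nodename: no such file or directory".
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        b, err := os.ReadFile("/var/lib/calico/nodename")
        if err != nil {
            // This is the state the sandboxes above are stuck in until calico-node runs.
            fmt.Println("calico/node has not written its nodename yet:", err)
            os.Exit(1)
        }
        fmt.Println("node name available for CNI requests:", strings.TrimSpace(string(b)))
    }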
Oct 9 07:43:53.717688 kubelet[2836]: I1009 07:43:53.717475 2836 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7fa5396b2d335ed64fc25b8f185d8f59fcbfb2aa16b20fc01aa95073b2aa8a05" Oct 9 07:43:53.723418 kubelet[2836]: I1009 07:43:53.721588 2836 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18b3e3f3469aa95a5537c35318081ddbb84ac466da19ad19c3d15adf1062c69b" Oct 9 07:43:53.730583 containerd[1575]: time="2024-10-09T07:43:53.730448892Z" level=info msg="StopPodSandbox for \"7fa5396b2d335ed64fc25b8f185d8f59fcbfb2aa16b20fc01aa95073b2aa8a05\"" Oct 9 07:43:53.735769 containerd[1575]: time="2024-10-09T07:43:53.734872096Z" level=info msg="StopPodSandbox for \"18b3e3f3469aa95a5537c35318081ddbb84ac466da19ad19c3d15adf1062c69b\"" Oct 9 07:43:53.744202 containerd[1575]: time="2024-10-09T07:43:53.741724811Z" level=info msg="Ensure that sandbox 7fa5396b2d335ed64fc25b8f185d8f59fcbfb2aa16b20fc01aa95073b2aa8a05 in task-service has been cleanup successfully" Oct 9 07:43:53.747360 kubelet[2836]: I1009 07:43:53.747323 2836 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84624294e1c71732ae60bf3020ea314b3599d3be22a97eafb851758ab98dea88" Oct 9 07:43:53.750415 containerd[1575]: time="2024-10-09T07:43:53.750043242Z" level=info msg="StopPodSandbox for \"84624294e1c71732ae60bf3020ea314b3599d3be22a97eafb851758ab98dea88\"" Oct 9 07:43:53.750650 containerd[1575]: time="2024-10-09T07:43:53.750448711Z" level=info msg="Ensure that sandbox 84624294e1c71732ae60bf3020ea314b3599d3be22a97eafb851758ab98dea88 in task-service has been cleanup successfully" Oct 9 07:43:53.751168 containerd[1575]: time="2024-10-09T07:43:53.751111794Z" level=info msg="Ensure that sandbox 18b3e3f3469aa95a5537c35318081ddbb84ac466da19ad19c3d15adf1062c69b in task-service has been cleanup successfully" Oct 9 07:43:53.754624 kubelet[2836]: I1009 07:43:53.754590 2836 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55a9fbbdf6f49909f0a562ec22933ab6c70f25fcf9d71fb9322f315c8c117886" Oct 9 07:43:53.763707 containerd[1575]: time="2024-10-09T07:43:53.763616695Z" level=info msg="StopPodSandbox for \"55a9fbbdf6f49909f0a562ec22933ab6c70f25fcf9d71fb9322f315c8c117886\"" Oct 9 07:43:53.764903 containerd[1575]: time="2024-10-09T07:43:53.764789141Z" level=info msg="Ensure that sandbox 55a9fbbdf6f49909f0a562ec22933ab6c70f25fcf9d71fb9322f315c8c117886 in task-service has been cleanup successfully" Oct 9 07:43:53.834649 containerd[1575]: time="2024-10-09T07:43:53.834570641Z" level=error msg="StopPodSandbox for \"7fa5396b2d335ed64fc25b8f185d8f59fcbfb2aa16b20fc01aa95073b2aa8a05\" failed" error="failed to destroy network for sandbox \"7fa5396b2d335ed64fc25b8f185d8f59fcbfb2aa16b20fc01aa95073b2aa8a05\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:43:53.834945 kubelet[2836]: E1009 07:43:53.834923 2836 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7fa5396b2d335ed64fc25b8f185d8f59fcbfb2aa16b20fc01aa95073b2aa8a05\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7fa5396b2d335ed64fc25b8f185d8f59fcbfb2aa16b20fc01aa95073b2aa8a05" Oct 9 07:43:53.835773 kubelet[2836]: 
E1009 07:43:53.835743 2836 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7fa5396b2d335ed64fc25b8f185d8f59fcbfb2aa16b20fc01aa95073b2aa8a05"} Oct 9 07:43:53.835834 kubelet[2836]: E1009 07:43:53.835800 2836 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"02e73bd3-d833-4c42-ada0-6af82baf1ace\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7fa5396b2d335ed64fc25b8f185d8f59fcbfb2aa16b20fc01aa95073b2aa8a05\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 9 07:43:53.835908 kubelet[2836]: E1009 07:43:53.835835 2836 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"02e73bd3-d833-4c42-ada0-6af82baf1ace\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7fa5396b2d335ed64fc25b8f185d8f59fcbfb2aa16b20fc01aa95073b2aa8a05\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5794c8997c-zjcp4" podUID="02e73bd3-d833-4c42-ada0-6af82baf1ace" Oct 9 07:43:53.840653 containerd[1575]: time="2024-10-09T07:43:53.840599663Z" level=error msg="StopPodSandbox for \"18b3e3f3469aa95a5537c35318081ddbb84ac466da19ad19c3d15adf1062c69b\" failed" error="failed to destroy network for sandbox \"18b3e3f3469aa95a5537c35318081ddbb84ac466da19ad19c3d15adf1062c69b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:43:53.840838 kubelet[2836]: E1009 07:43:53.840812 2836 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"18b3e3f3469aa95a5537c35318081ddbb84ac466da19ad19c3d15adf1062c69b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="18b3e3f3469aa95a5537c35318081ddbb84ac466da19ad19c3d15adf1062c69b" Oct 9 07:43:53.840906 kubelet[2836]: E1009 07:43:53.840862 2836 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"18b3e3f3469aa95a5537c35318081ddbb84ac466da19ad19c3d15adf1062c69b"} Oct 9 07:43:53.840906 kubelet[2836]: E1009 07:43:53.840904 2836 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5c1237f3-306f-4b76-aa33-787e19b1ef8a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"18b3e3f3469aa95a5537c35318081ddbb84ac466da19ad19c3d15adf1062c69b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 9 07:43:53.841028 kubelet[2836]: E1009 07:43:53.840940 2836 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5c1237f3-306f-4b76-aa33-787e19b1ef8a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"18b3e3f3469aa95a5537c35318081ddbb84ac466da19ad19c3d15adf1062c69b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-swjxh" podUID="5c1237f3-306f-4b76-aa33-787e19b1ef8a" Oct 9 07:43:53.846662 containerd[1575]: time="2024-10-09T07:43:53.846606564Z" level=error msg="StopPodSandbox for \"84624294e1c71732ae60bf3020ea314b3599d3be22a97eafb851758ab98dea88\" failed" error="failed to destroy network for sandbox \"84624294e1c71732ae60bf3020ea314b3599d3be22a97eafb851758ab98dea88\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:43:53.846920 kubelet[2836]: E1009 07:43:53.846892 2836 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"84624294e1c71732ae60bf3020ea314b3599d3be22a97eafb851758ab98dea88\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="84624294e1c71732ae60bf3020ea314b3599d3be22a97eafb851758ab98dea88" Oct 9 07:43:53.846981 kubelet[2836]: E1009 07:43:53.846936 2836 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"84624294e1c71732ae60bf3020ea314b3599d3be22a97eafb851758ab98dea88"} Oct 9 07:43:53.847031 kubelet[2836]: E1009 07:43:53.847009 2836 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9094bb4d-511f-44a7-8e94-3160d3cf1a33\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"84624294e1c71732ae60bf3020ea314b3599d3be22a97eafb851758ab98dea88\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 9 07:43:53.847112 kubelet[2836]: E1009 07:43:53.847052 2836 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9094bb4d-511f-44a7-8e94-3160d3cf1a33\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"84624294e1c71732ae60bf3020ea314b3599d3be22a97eafb851758ab98dea88\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-wxf26" podUID="9094bb4d-511f-44a7-8e94-3160d3cf1a33" Oct 9 07:43:53.850407 containerd[1575]: time="2024-10-09T07:43:53.850332212Z" level=error msg="StopPodSandbox for \"55a9fbbdf6f49909f0a562ec22933ab6c70f25fcf9d71fb9322f315c8c117886\" failed" error="failed to destroy network for sandbox \"55a9fbbdf6f49909f0a562ec22933ab6c70f25fcf9d71fb9322f315c8c117886\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:43:53.850668 kubelet[2836]: E1009 07:43:53.850642 2836 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"55a9fbbdf6f49909f0a562ec22933ab6c70f25fcf9d71fb9322f315c8c117886\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="55a9fbbdf6f49909f0a562ec22933ab6c70f25fcf9d71fb9322f315c8c117886" Oct 9 07:43:53.850716 kubelet[2836]: E1009 07:43:53.850682 2836 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"55a9fbbdf6f49909f0a562ec22933ab6c70f25fcf9d71fb9322f315c8c117886"} Oct 9 07:43:53.851371 kubelet[2836]: E1009 07:43:53.850740 2836 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"27440826-f1e0-45d4-b3d8-3225a63f893c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"55a9fbbdf6f49909f0a562ec22933ab6c70f25fcf9d71fb9322f315c8c117886\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 9 07:43:53.851371 kubelet[2836]: E1009 07:43:53.850792 2836 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"27440826-f1e0-45d4-b3d8-3225a63f893c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"55a9fbbdf6f49909f0a562ec22933ab6c70f25fcf9d71fb9322f315c8c117886\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qq7rv" podUID="27440826-f1e0-45d4-b3d8-3225a63f893c" Oct 9 07:44:00.477905 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1414690650.mount: Deactivated successfully. 
Oct 9 07:44:00.726856 containerd[1575]: time="2024-10-09T07:44:00.694349481Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.1: active requests=0, bytes read=117873564" Oct 9 07:44:00.726856 containerd[1575]: time="2024-10-09T07:44:00.690021740Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:44:00.730441 containerd[1575]: time="2024-10-09T07:44:00.730277578Z" level=info msg="ImageCreate event name:\"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:44:00.736669 containerd[1575]: time="2024-10-09T07:44:00.736514148Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:44:00.739581 containerd[1575]: time="2024-10-09T07:44:00.738400144Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.1\" with image id \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\", size \"117873426\" in 8.022733539s" Oct 9 07:44:00.739581 containerd[1575]: time="2024-10-09T07:44:00.738479482Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\" returns image reference \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\"" Oct 9 07:44:00.837092 containerd[1575]: time="2024-10-09T07:44:00.837056881Z" level=info msg="CreateContainer within sandbox \"690530d2e6bb5bd2e8b17fcbaf544f3561fa3ab3e11da8aac02e8376371002d7\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 9 07:44:00.878095 containerd[1575]: time="2024-10-09T07:44:00.878038059Z" level=info msg="CreateContainer within sandbox \"690530d2e6bb5bd2e8b17fcbaf544f3561fa3ab3e11da8aac02e8376371002d7\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"4c775ad94bb4d58b964a6ae3690236a31be0c5ffd90c4931b082f50ca34bc98e\"" Oct 9 07:44:00.879649 containerd[1575]: time="2024-10-09T07:44:00.878610242Z" level=info msg="StartContainer for \"4c775ad94bb4d58b964a6ae3690236a31be0c5ffd90c4931b082f50ca34bc98e\"" Oct 9 07:44:00.969424 containerd[1575]: time="2024-10-09T07:44:00.969253007Z" level=info msg="StartContainer for \"4c775ad94bb4d58b964a6ae3690236a31be0c5ffd90c4931b082f50ca34bc98e\" returns successfully" Oct 9 07:44:01.058862 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 9 07:44:01.059006 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
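[Editor's annotation] With the calico-node container now started (the WireGuard module load right after it is consistent with calico-node probing the kernel for WireGuard support), the missing piece behind the earlier sandbox failures can appear: per the error text, /var/lib/calico/ is populated by the calico/node container, including the nodename file the CNI plugin was failing to stat. A toy counterpart to the earlier check, as a simplified model of that startup step rather than calico-node's actual code (the NODENAME environment variable is an assumption about how the container learns its Kubernetes node name):

    // nodename_writer.go - simplified model of calico-node recording the node
    // identity where the CNI plugin expects it; not the real implementation.
    package main

    import (
        "log"
        "os"
    )

    func main() {
        nodeName := os.Getenv("NODENAME") // assumed to be injected into the container
        if nodeName == "" {
            nodeName, _ = os.Hostname()
        }
        if err := os.MkdirAll("/var/lib/calico", 0o755); err != nil {
            log.Fatal(err)
        }
        if err := os.WriteFile("/var/lib/calico/nodename", []byte(nodeName), 0o644); err != nil {
            log.Fatal(err)
        }
        log.Printf("wrote nodename %q; CNI adds and deletes can now proceed", nodeName)
    }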
Oct 9 07:44:01.975669 kubelet[2836]: I1009 07:44:01.975416 2836 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-hkh7c" podStartSLOduration=2.334612026 podStartE2EDuration="25.975366728s" podCreationTimestamp="2024-10-09 07:43:36 +0000 UTC" firstStartedPulling="2024-10-09 07:43:37.098181656 +0000 UTC m=+21.894274298" lastFinishedPulling="2024-10-09 07:44:00.738936298 +0000 UTC m=+45.535029000" observedRunningTime="2024-10-09 07:44:01.965396809 +0000 UTC m=+46.761489451" watchObservedRunningTime="2024-10-09 07:44:01.975366728 +0000 UTC m=+46.771459380" Oct 9 07:44:02.024898 systemd[1]: run-containerd-runc-k8s.io-4c775ad94bb4d58b964a6ae3690236a31be0c5ffd90c4931b082f50ca34bc98e-runc.Wb0lsf.mount: Deactivated successfully. Oct 9 07:44:02.619869 systemd-journald[1128]: Under memory pressure, flushing caches. Oct 9 07:44:02.616833 systemd-resolved[1467]: Under memory pressure, flushing caches. Oct 9 07:44:02.616913 systemd-resolved[1467]: Flushed all caches. Oct 9 07:44:04.665489 systemd-resolved[1467]: Under memory pressure, flushing caches. Oct 9 07:44:04.665504 systemd-resolved[1467]: Flushed all caches. Oct 9 07:44:04.666549 systemd-journald[1128]: Under memory pressure, flushing caches. Oct 9 07:44:04.870617 kubelet[2836]: I1009 07:44:04.870502 2836 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 9 07:44:05.565835 containerd[1575]: time="2024-10-09T07:44:05.565298417Z" level=info msg="StopPodSandbox for \"7fa5396b2d335ed64fc25b8f185d8f59fcbfb2aa16b20fc01aa95073b2aa8a05\"" Oct 9 07:44:05.684550 kernel: bpftool[4113]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Oct 9 07:44:06.055402 systemd-networkd[1211]: vxlan.calico: Link UP Oct 9 07:44:06.055412 systemd-networkd[1211]: vxlan.calico: Gained carrier Oct 9 07:44:06.225350 containerd[1575]: 2024-10-09 07:44:05.705 [INFO][4099] k8s.go 608: Cleaning up netns ContainerID="7fa5396b2d335ed64fc25b8f185d8f59fcbfb2aa16b20fc01aa95073b2aa8a05" Oct 9 07:44:06.225350 containerd[1575]: 2024-10-09 07:44:05.705 [INFO][4099] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="7fa5396b2d335ed64fc25b8f185d8f59fcbfb2aa16b20fc01aa95073b2aa8a05" iface="eth0" netns="/var/run/netns/cni-954890d8-6ec5-89bd-2ec8-24087adea21d" Oct 9 07:44:06.225350 containerd[1575]: 2024-10-09 07:44:05.706 [INFO][4099] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="7fa5396b2d335ed64fc25b8f185d8f59fcbfb2aa16b20fc01aa95073b2aa8a05" iface="eth0" netns="/var/run/netns/cni-954890d8-6ec5-89bd-2ec8-24087adea21d" Oct 9 07:44:06.225350 containerd[1575]: 2024-10-09 07:44:05.706 [INFO][4099] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="7fa5396b2d335ed64fc25b8f185d8f59fcbfb2aa16b20fc01aa95073b2aa8a05" iface="eth0" netns="/var/run/netns/cni-954890d8-6ec5-89bd-2ec8-24087adea21d" Oct 9 07:44:06.225350 containerd[1575]: 2024-10-09 07:44:05.707 [INFO][4099] k8s.go 615: Releasing IP address(es) ContainerID="7fa5396b2d335ed64fc25b8f185d8f59fcbfb2aa16b20fc01aa95073b2aa8a05" Oct 9 07:44:06.225350 containerd[1575]: 2024-10-09 07:44:05.707 [INFO][4099] utils.go 188: Calico CNI releasing IP address ContainerID="7fa5396b2d335ed64fc25b8f185d8f59fcbfb2aa16b20fc01aa95073b2aa8a05" Oct 9 07:44:06.225350 containerd[1575]: 2024-10-09 07:44:06.199 [INFO][4114] ipam_plugin.go 417: Releasing address using handleID ContainerID="7fa5396b2d335ed64fc25b8f185d8f59fcbfb2aa16b20fc01aa95073b2aa8a05" HandleID="k8s-pod-network.7fa5396b2d335ed64fc25b8f185d8f59fcbfb2aa16b20fc01aa95073b2aa8a05" Workload="ci--3975--2--2--3--e7db599e29.novalocal-k8s-calico--kube--controllers--5794c8997c--zjcp4-eth0" Oct 9 07:44:06.225350 containerd[1575]: 2024-10-09 07:44:06.201 [INFO][4114] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:44:06.225350 containerd[1575]: 2024-10-09 07:44:06.202 [INFO][4114] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:44:06.225350 containerd[1575]: 2024-10-09 07:44:06.216 [WARNING][4114] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="7fa5396b2d335ed64fc25b8f185d8f59fcbfb2aa16b20fc01aa95073b2aa8a05" HandleID="k8s-pod-network.7fa5396b2d335ed64fc25b8f185d8f59fcbfb2aa16b20fc01aa95073b2aa8a05" Workload="ci--3975--2--2--3--e7db599e29.novalocal-k8s-calico--kube--controllers--5794c8997c--zjcp4-eth0" Oct 9 07:44:06.225350 containerd[1575]: 2024-10-09 07:44:06.216 [INFO][4114] ipam_plugin.go 445: Releasing address using workloadID ContainerID="7fa5396b2d335ed64fc25b8f185d8f59fcbfb2aa16b20fc01aa95073b2aa8a05" HandleID="k8s-pod-network.7fa5396b2d335ed64fc25b8f185d8f59fcbfb2aa16b20fc01aa95073b2aa8a05" Workload="ci--3975--2--2--3--e7db599e29.novalocal-k8s-calico--kube--controllers--5794c8997c--zjcp4-eth0" Oct 9 07:44:06.225350 containerd[1575]: 2024-10-09 07:44:06.221 [INFO][4114] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:44:06.225350 containerd[1575]: 2024-10-09 07:44:06.223 [INFO][4099] k8s.go 621: Teardown processing complete. ContainerID="7fa5396b2d335ed64fc25b8f185d8f59fcbfb2aa16b20fc01aa95073b2aa8a05" Oct 9 07:44:06.237667 systemd[1]: run-netns-cni\x2d954890d8\x2d6ec5\x2d89bd\x2d2ec8\x2d24087adea21d.mount: Deactivated successfully. 
Oct 9 07:44:06.260386 containerd[1575]: time="2024-10-09T07:44:06.260328027Z" level=info msg="TearDown network for sandbox \"7fa5396b2d335ed64fc25b8f185d8f59fcbfb2aa16b20fc01aa95073b2aa8a05\" successfully" Oct 9 07:44:06.260495 containerd[1575]: time="2024-10-09T07:44:06.260397638Z" level=info msg="StopPodSandbox for \"7fa5396b2d335ed64fc25b8f185d8f59fcbfb2aa16b20fc01aa95073b2aa8a05\" returns successfully" Oct 9 07:44:06.308574 containerd[1575]: time="2024-10-09T07:44:06.307616506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5794c8997c-zjcp4,Uid:02e73bd3-d833-4c42-ada0-6af82baf1ace,Namespace:calico-system,Attempt:1,}" Oct 9 07:44:06.628195 systemd-networkd[1211]: cali2f886da080d: Link UP Oct 9 07:44:06.628550 systemd-networkd[1211]: cali2f886da080d: Gained carrier Oct 9 07:44:06.646989 containerd[1575]: 2024-10-09 07:44:06.535 [INFO][4192] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975--2--2--3--e7db599e29.novalocal-k8s-calico--kube--controllers--5794c8997c--zjcp4-eth0 calico-kube-controllers-5794c8997c- calico-system 02e73bd3-d833-4c42-ada0-6af82baf1ace 686 0 2024-10-09 07:43:36 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5794c8997c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-3975-2-2-3-e7db599e29.novalocal calico-kube-controllers-5794c8997c-zjcp4 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali2f886da080d [] []}} ContainerID="151846675a1a8e08650d42cce80f990b8059199e543261e2f13c7b166362edd5" Namespace="calico-system" Pod="calico-kube-controllers-5794c8997c-zjcp4" WorkloadEndpoint="ci--3975--2--2--3--e7db599e29.novalocal-k8s-calico--kube--controllers--5794c8997c--zjcp4-" Oct 9 07:44:06.646989 containerd[1575]: 2024-10-09 07:44:06.535 [INFO][4192] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="151846675a1a8e08650d42cce80f990b8059199e543261e2f13c7b166362edd5" Namespace="calico-system" Pod="calico-kube-controllers-5794c8997c-zjcp4" WorkloadEndpoint="ci--3975--2--2--3--e7db599e29.novalocal-k8s-calico--kube--controllers--5794c8997c--zjcp4-eth0" Oct 9 07:44:06.646989 containerd[1575]: 2024-10-09 07:44:06.567 [INFO][4200] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="151846675a1a8e08650d42cce80f990b8059199e543261e2f13c7b166362edd5" HandleID="k8s-pod-network.151846675a1a8e08650d42cce80f990b8059199e543261e2f13c7b166362edd5" Workload="ci--3975--2--2--3--e7db599e29.novalocal-k8s-calico--kube--controllers--5794c8997c--zjcp4-eth0" Oct 9 07:44:06.646989 containerd[1575]: 2024-10-09 07:44:06.579 [INFO][4200] ipam_plugin.go 270: Auto assigning IP ContainerID="151846675a1a8e08650d42cce80f990b8059199e543261e2f13c7b166362edd5" HandleID="k8s-pod-network.151846675a1a8e08650d42cce80f990b8059199e543261e2f13c7b166362edd5" Workload="ci--3975--2--2--3--e7db599e29.novalocal-k8s-calico--kube--controllers--5794c8997c--zjcp4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000114610), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3975-2-2-3-e7db599e29.novalocal", "pod":"calico-kube-controllers-5794c8997c-zjcp4", "timestamp":"2024-10-09 07:44:06.567547433 +0000 UTC"}, Hostname:"ci-3975-2-2-3-e7db599e29.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 07:44:06.646989 containerd[1575]: 2024-10-09 07:44:06.579 [INFO][4200] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:44:06.646989 containerd[1575]: 2024-10-09 07:44:06.579 [INFO][4200] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:44:06.646989 containerd[1575]: 2024-10-09 07:44:06.579 [INFO][4200] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975-2-2-3-e7db599e29.novalocal' Oct 9 07:44:06.646989 containerd[1575]: 2024-10-09 07:44:06.582 [INFO][4200] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.151846675a1a8e08650d42cce80f990b8059199e543261e2f13c7b166362edd5" host="ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:44:06.646989 containerd[1575]: 2024-10-09 07:44:06.593 [INFO][4200] ipam.go 372: Looking up existing affinities for host host="ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:44:06.646989 containerd[1575]: 2024-10-09 07:44:06.598 [INFO][4200] ipam.go 489: Trying affinity for 192.168.74.0/26 host="ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:44:06.646989 containerd[1575]: 2024-10-09 07:44:06.601 [INFO][4200] ipam.go 155: Attempting to load block cidr=192.168.74.0/26 host="ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:44:06.646989 containerd[1575]: 2024-10-09 07:44:06.603 [INFO][4200] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.74.0/26 host="ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:44:06.646989 containerd[1575]: 2024-10-09 07:44:06.603 [INFO][4200] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.74.0/26 handle="k8s-pod-network.151846675a1a8e08650d42cce80f990b8059199e543261e2f13c7b166362edd5" host="ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:44:06.646989 containerd[1575]: 2024-10-09 07:44:06.605 [INFO][4200] ipam.go 1685: Creating new handle: k8s-pod-network.151846675a1a8e08650d42cce80f990b8059199e543261e2f13c7b166362edd5 Oct 9 07:44:06.646989 containerd[1575]: 2024-10-09 07:44:06.611 [INFO][4200] ipam.go 1203: Writing block in order to claim IPs block=192.168.74.0/26 handle="k8s-pod-network.151846675a1a8e08650d42cce80f990b8059199e543261e2f13c7b166362edd5" host="ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:44:06.646989 containerd[1575]: 2024-10-09 07:44:06.621 [INFO][4200] ipam.go 1216: Successfully claimed IPs: [192.168.74.1/26] block=192.168.74.0/26 handle="k8s-pod-network.151846675a1a8e08650d42cce80f990b8059199e543261e2f13c7b166362edd5" host="ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:44:06.646989 containerd[1575]: 2024-10-09 07:44:06.622 [INFO][4200] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.74.1/26] handle="k8s-pod-network.151846675a1a8e08650d42cce80f990b8059199e543261e2f13c7b166362edd5" host="ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:44:06.646989 containerd[1575]: 2024-10-09 07:44:06.622 [INFO][4200] ipam_plugin.go 379: Released host-wide IPAM lock. 
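[Editor's annotation] The IPAM lines above are Calico's block-based allocation: the node holds (or acquires) affinity for the /26 block 192.168.74.0/26, which spans 64 addresses, and the first free address, 192.168.74.1, is claimed for this pod under a freshly created handle. A simplified model of that selection (the real allocator persists blocks and handles in the datastore and honours reservations, all omitted here):

    // blockalloc.go - simplified pick-first-free-address from the /26 block seen
    // in the log (192.168.74.0/26 -> 64 addresses, .1 assigned).
    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        prefix := netip.MustParsePrefix("192.168.74.0/26")
        size := 1 << (32 - prefix.Bits())                // 64 addresses in a /26
        used := map[netip.Addr]bool{prefix.Addr(): true} // model choice: keep the network address unassigned

        addr := prefix.Addr()
        for i := 0; i < size; i++ {
            if !used[addr] {
                fmt.Printf("block %s (%d addrs): assigning %s\n", prefix, size, addr)
                return
            }
            addr = addr.Next()
        }
        fmt.Println("block exhausted; a new block affinity would be needed")
    }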
Oct 9 07:44:06.646989 containerd[1575]: 2024-10-09 07:44:06.622 [INFO][4200] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.74.1/26] IPv6=[] ContainerID="151846675a1a8e08650d42cce80f990b8059199e543261e2f13c7b166362edd5" HandleID="k8s-pod-network.151846675a1a8e08650d42cce80f990b8059199e543261e2f13c7b166362edd5" Workload="ci--3975--2--2--3--e7db599e29.novalocal-k8s-calico--kube--controllers--5794c8997c--zjcp4-eth0" Oct 9 07:44:06.649519 containerd[1575]: 2024-10-09 07:44:06.626 [INFO][4192] k8s.go 386: Populated endpoint ContainerID="151846675a1a8e08650d42cce80f990b8059199e543261e2f13c7b166362edd5" Namespace="calico-system" Pod="calico-kube-controllers-5794c8997c-zjcp4" WorkloadEndpoint="ci--3975--2--2--3--e7db599e29.novalocal-k8s-calico--kube--controllers--5794c8997c--zjcp4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--3--e7db599e29.novalocal-k8s-calico--kube--controllers--5794c8997c--zjcp4-eth0", GenerateName:"calico-kube-controllers-5794c8997c-", Namespace:"calico-system", SelfLink:"", UID:"02e73bd3-d833-4c42-ada0-6af82baf1ace", ResourceVersion:"686", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 43, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5794c8997c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-3-e7db599e29.novalocal", ContainerID:"", Pod:"calico-kube-controllers-5794c8997c-zjcp4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.74.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2f886da080d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:44:06.649519 containerd[1575]: 2024-10-09 07:44:06.626 [INFO][4192] k8s.go 387: Calico CNI using IPs: [192.168.74.1/32] ContainerID="151846675a1a8e08650d42cce80f990b8059199e543261e2f13c7b166362edd5" Namespace="calico-system" Pod="calico-kube-controllers-5794c8997c-zjcp4" WorkloadEndpoint="ci--3975--2--2--3--e7db599e29.novalocal-k8s-calico--kube--controllers--5794c8997c--zjcp4-eth0" Oct 9 07:44:06.649519 containerd[1575]: 2024-10-09 07:44:06.626 [INFO][4192] dataplane_linux.go 68: Setting the host side veth name to cali2f886da080d ContainerID="151846675a1a8e08650d42cce80f990b8059199e543261e2f13c7b166362edd5" Namespace="calico-system" Pod="calico-kube-controllers-5794c8997c-zjcp4" WorkloadEndpoint="ci--3975--2--2--3--e7db599e29.novalocal-k8s-calico--kube--controllers--5794c8997c--zjcp4-eth0" Oct 9 07:44:06.649519 containerd[1575]: 2024-10-09 07:44:06.628 [INFO][4192] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="151846675a1a8e08650d42cce80f990b8059199e543261e2f13c7b166362edd5" Namespace="calico-system" Pod="calico-kube-controllers-5794c8997c-zjcp4" WorkloadEndpoint="ci--3975--2--2--3--e7db599e29.novalocal-k8s-calico--kube--controllers--5794c8997c--zjcp4-eth0" Oct 9 
07:44:06.649519 containerd[1575]: 2024-10-09 07:44:06.628 [INFO][4192] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="151846675a1a8e08650d42cce80f990b8059199e543261e2f13c7b166362edd5" Namespace="calico-system" Pod="calico-kube-controllers-5794c8997c-zjcp4" WorkloadEndpoint="ci--3975--2--2--3--e7db599e29.novalocal-k8s-calico--kube--controllers--5794c8997c--zjcp4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--3--e7db599e29.novalocal-k8s-calico--kube--controllers--5794c8997c--zjcp4-eth0", GenerateName:"calico-kube-controllers-5794c8997c-", Namespace:"calico-system", SelfLink:"", UID:"02e73bd3-d833-4c42-ada0-6af82baf1ace", ResourceVersion:"686", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 43, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5794c8997c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-3-e7db599e29.novalocal", ContainerID:"151846675a1a8e08650d42cce80f990b8059199e543261e2f13c7b166362edd5", Pod:"calico-kube-controllers-5794c8997c-zjcp4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.74.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2f886da080d", MAC:"6a:24:79:35:3d:d8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:44:06.649519 containerd[1575]: 2024-10-09 07:44:06.643 [INFO][4192] k8s.go 500: Wrote updated endpoint to datastore ContainerID="151846675a1a8e08650d42cce80f990b8059199e543261e2f13c7b166362edd5" Namespace="calico-system" Pod="calico-kube-controllers-5794c8997c-zjcp4" WorkloadEndpoint="ci--3975--2--2--3--e7db599e29.novalocal-k8s-calico--kube--controllers--5794c8997c--zjcp4-eth0" Oct 9 07:44:06.712746 systemd-resolved[1467]: Under memory pressure, flushing caches. Oct 9 07:44:06.714343 containerd[1575]: time="2024-10-09T07:44:06.713380060Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:44:06.714343 containerd[1575]: time="2024-10-09T07:44:06.713437488Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:44:06.714343 containerd[1575]: time="2024-10-09T07:44:06.713485127Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:44:06.714343 containerd[1575]: time="2024-10-09T07:44:06.713506848Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:44:06.716300 systemd-journald[1128]: Under memory pressure, flushing caches. Oct 9 07:44:06.712764 systemd-resolved[1467]: Flushed all caches. 
Oct 9 07:44:06.818074 containerd[1575]: time="2024-10-09T07:44:06.818022135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5794c8997c-zjcp4,Uid:02e73bd3-d833-4c42-ada0-6af82baf1ace,Namespace:calico-system,Attempt:1,} returns sandbox id \"151846675a1a8e08650d42cce80f990b8059199e543261e2f13c7b166362edd5\"" Oct 9 07:44:06.833727 containerd[1575]: time="2024-10-09T07:44:06.833498030Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\"" Oct 9 07:44:07.096709 systemd-networkd[1211]: vxlan.calico: Gained IPv6LL Oct 9 07:44:07.736688 systemd-networkd[1211]: cali2f886da080d: Gained IPv6LL Oct 9 07:44:08.562718 containerd[1575]: time="2024-10-09T07:44:08.562473353Z" level=info msg="StopPodSandbox for \"84624294e1c71732ae60bf3020ea314b3599d3be22a97eafb851758ab98dea88\"" Oct 9 07:44:08.583883 containerd[1575]: time="2024-10-09T07:44:08.582246462Z" level=info msg="StopPodSandbox for \"55a9fbbdf6f49909f0a562ec22933ab6c70f25fcf9d71fb9322f315c8c117886\"" Oct 9 07:44:08.586512 containerd[1575]: time="2024-10-09T07:44:08.586208875Z" level=info msg="StopPodSandbox for \"18b3e3f3469aa95a5537c35318081ddbb84ac466da19ad19c3d15adf1062c69b\"" Oct 9 07:44:08.915156 containerd[1575]: 2024-10-09 07:44:08.775 [INFO][4300] k8s.go 608: Cleaning up netns ContainerID="84624294e1c71732ae60bf3020ea314b3599d3be22a97eafb851758ab98dea88" Oct 9 07:44:08.915156 containerd[1575]: 2024-10-09 07:44:08.775 [INFO][4300] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="84624294e1c71732ae60bf3020ea314b3599d3be22a97eafb851758ab98dea88" iface="eth0" netns="/var/run/netns/cni-ee0ceeb0-e389-0d5f-ebfb-83d701c5c4c7" Oct 9 07:44:08.915156 containerd[1575]: 2024-10-09 07:44:08.776 [INFO][4300] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="84624294e1c71732ae60bf3020ea314b3599d3be22a97eafb851758ab98dea88" iface="eth0" netns="/var/run/netns/cni-ee0ceeb0-e389-0d5f-ebfb-83d701c5c4c7" Oct 9 07:44:08.915156 containerd[1575]: 2024-10-09 07:44:08.776 [INFO][4300] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="84624294e1c71732ae60bf3020ea314b3599d3be22a97eafb851758ab98dea88" iface="eth0" netns="/var/run/netns/cni-ee0ceeb0-e389-0d5f-ebfb-83d701c5c4c7" Oct 9 07:44:08.915156 containerd[1575]: 2024-10-09 07:44:08.776 [INFO][4300] k8s.go 615: Releasing IP address(es) ContainerID="84624294e1c71732ae60bf3020ea314b3599d3be22a97eafb851758ab98dea88" Oct 9 07:44:08.915156 containerd[1575]: 2024-10-09 07:44:08.776 [INFO][4300] utils.go 188: Calico CNI releasing IP address ContainerID="84624294e1c71732ae60bf3020ea314b3599d3be22a97eafb851758ab98dea88" Oct 9 07:44:08.915156 containerd[1575]: 2024-10-09 07:44:08.860 [INFO][4318] ipam_plugin.go 417: Releasing address using handleID ContainerID="84624294e1c71732ae60bf3020ea314b3599d3be22a97eafb851758ab98dea88" HandleID="k8s-pod-network.84624294e1c71732ae60bf3020ea314b3599d3be22a97eafb851758ab98dea88" Workload="ci--3975--2--2--3--e7db599e29.novalocal-k8s-coredns--76f75df574--wxf26-eth0" Oct 9 07:44:08.915156 containerd[1575]: 2024-10-09 07:44:08.874 [INFO][4318] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:44:08.915156 containerd[1575]: 2024-10-09 07:44:08.875 [INFO][4318] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:44:08.915156 containerd[1575]: 2024-10-09 07:44:08.892 [WARNING][4318] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="84624294e1c71732ae60bf3020ea314b3599d3be22a97eafb851758ab98dea88" HandleID="k8s-pod-network.84624294e1c71732ae60bf3020ea314b3599d3be22a97eafb851758ab98dea88" Workload="ci--3975--2--2--3--e7db599e29.novalocal-k8s-coredns--76f75df574--wxf26-eth0" Oct 9 07:44:08.915156 containerd[1575]: 2024-10-09 07:44:08.892 [INFO][4318] ipam_plugin.go 445: Releasing address using workloadID ContainerID="84624294e1c71732ae60bf3020ea314b3599d3be22a97eafb851758ab98dea88" HandleID="k8s-pod-network.84624294e1c71732ae60bf3020ea314b3599d3be22a97eafb851758ab98dea88" Workload="ci--3975--2--2--3--e7db599e29.novalocal-k8s-coredns--76f75df574--wxf26-eth0" Oct 9 07:44:08.915156 containerd[1575]: 2024-10-09 07:44:08.904 [INFO][4318] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:44:08.915156 containerd[1575]: 2024-10-09 07:44:08.906 [INFO][4300] k8s.go 621: Teardown processing complete. ContainerID="84624294e1c71732ae60bf3020ea314b3599d3be22a97eafb851758ab98dea88" Oct 9 07:44:08.915156 containerd[1575]: time="2024-10-09T07:44:08.914834667Z" level=info msg="TearDown network for sandbox \"84624294e1c71732ae60bf3020ea314b3599d3be22a97eafb851758ab98dea88\" successfully" Oct 9 07:44:08.915156 containerd[1575]: time="2024-10-09T07:44:08.914860175Z" level=info msg="StopPodSandbox for \"84624294e1c71732ae60bf3020ea314b3599d3be22a97eafb851758ab98dea88\" returns successfully" Oct 9 07:44:08.922731 containerd[1575]: time="2024-10-09T07:44:08.921820321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-wxf26,Uid:9094bb4d-511f-44a7-8e94-3160d3cf1a33,Namespace:kube-system,Attempt:1,}" Oct 9 07:44:08.923448 systemd[1]: run-netns-cni\x2dee0ceeb0\x2de389\x2d0d5f\x2debfb\x2d83d701c5c4c7.mount: Deactivated successfully. Oct 9 07:44:08.973462 containerd[1575]: 2024-10-09 07:44:08.811 [INFO][4299] k8s.go 608: Cleaning up netns ContainerID="55a9fbbdf6f49909f0a562ec22933ab6c70f25fcf9d71fb9322f315c8c117886" Oct 9 07:44:08.973462 containerd[1575]: 2024-10-09 07:44:08.811 [INFO][4299] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="55a9fbbdf6f49909f0a562ec22933ab6c70f25fcf9d71fb9322f315c8c117886" iface="eth0" netns="/var/run/netns/cni-d4dec42b-5749-461d-af93-7bc3b4949dde" Oct 9 07:44:08.973462 containerd[1575]: 2024-10-09 07:44:08.811 [INFO][4299] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="55a9fbbdf6f49909f0a562ec22933ab6c70f25fcf9d71fb9322f315c8c117886" iface="eth0" netns="/var/run/netns/cni-d4dec42b-5749-461d-af93-7bc3b4949dde" Oct 9 07:44:08.973462 containerd[1575]: 2024-10-09 07:44:08.811 [INFO][4299] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="55a9fbbdf6f49909f0a562ec22933ab6c70f25fcf9d71fb9322f315c8c117886" iface="eth0" netns="/var/run/netns/cni-d4dec42b-5749-461d-af93-7bc3b4949dde" Oct 9 07:44:08.973462 containerd[1575]: 2024-10-09 07:44:08.811 [INFO][4299] k8s.go 615: Releasing IP address(es) ContainerID="55a9fbbdf6f49909f0a562ec22933ab6c70f25fcf9d71fb9322f315c8c117886" Oct 9 07:44:08.973462 containerd[1575]: 2024-10-09 07:44:08.811 [INFO][4299] utils.go 188: Calico CNI releasing IP address ContainerID="55a9fbbdf6f49909f0a562ec22933ab6c70f25fcf9d71fb9322f315c8c117886" Oct 9 07:44:08.973462 containerd[1575]: 2024-10-09 07:44:08.910 [INFO][4327] ipam_plugin.go 417: Releasing address using handleID ContainerID="55a9fbbdf6f49909f0a562ec22933ab6c70f25fcf9d71fb9322f315c8c117886" HandleID="k8s-pod-network.55a9fbbdf6f49909f0a562ec22933ab6c70f25fcf9d71fb9322f315c8c117886" Workload="ci--3975--2--2--3--e7db599e29.novalocal-k8s-csi--node--driver--qq7rv-eth0" Oct 9 07:44:08.973462 containerd[1575]: 2024-10-09 07:44:08.912 [INFO][4327] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:44:08.973462 containerd[1575]: 2024-10-09 07:44:08.912 [INFO][4327] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:44:08.973462 containerd[1575]: 2024-10-09 07:44:08.936 [WARNING][4327] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="55a9fbbdf6f49909f0a562ec22933ab6c70f25fcf9d71fb9322f315c8c117886" HandleID="k8s-pod-network.55a9fbbdf6f49909f0a562ec22933ab6c70f25fcf9d71fb9322f315c8c117886" Workload="ci--3975--2--2--3--e7db599e29.novalocal-k8s-csi--node--driver--qq7rv-eth0" Oct 9 07:44:08.973462 containerd[1575]: 2024-10-09 07:44:08.937 [INFO][4327] ipam_plugin.go 445: Releasing address using workloadID ContainerID="55a9fbbdf6f49909f0a562ec22933ab6c70f25fcf9d71fb9322f315c8c117886" HandleID="k8s-pod-network.55a9fbbdf6f49909f0a562ec22933ab6c70f25fcf9d71fb9322f315c8c117886" Workload="ci--3975--2--2--3--e7db599e29.novalocal-k8s-csi--node--driver--qq7rv-eth0" Oct 9 07:44:08.973462 containerd[1575]: 2024-10-09 07:44:08.944 [INFO][4327] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:44:08.973462 containerd[1575]: 2024-10-09 07:44:08.964 [INFO][4299] k8s.go 621: Teardown processing complete. ContainerID="55a9fbbdf6f49909f0a562ec22933ab6c70f25fcf9d71fb9322f315c8c117886" Oct 9 07:44:08.975447 containerd[1575]: time="2024-10-09T07:44:08.974612678Z" level=info msg="TearDown network for sandbox \"55a9fbbdf6f49909f0a562ec22933ab6c70f25fcf9d71fb9322f315c8c117886\" successfully" Oct 9 07:44:08.975447 containerd[1575]: time="2024-10-09T07:44:08.974640339Z" level=info msg="StopPodSandbox for \"55a9fbbdf6f49909f0a562ec22933ab6c70f25fcf9d71fb9322f315c8c117886\" returns successfully" Oct 9 07:44:08.976873 containerd[1575]: time="2024-10-09T07:44:08.976579587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qq7rv,Uid:27440826-f1e0-45d4-b3d8-3225a63f893c,Namespace:calico-system,Attempt:1,}" Oct 9 07:44:08.980704 systemd[1]: run-netns-cni\x2dd4dec42b\x2d5749\x2d461d\x2daf93\x2d7bc3b4949dde.mount: Deactivated successfully. Oct 9 07:44:08.985557 containerd[1575]: 2024-10-09 07:44:08.801 [INFO][4304] k8s.go 608: Cleaning up netns ContainerID="18b3e3f3469aa95a5537c35318081ddbb84ac466da19ad19c3d15adf1062c69b" Oct 9 07:44:08.985557 containerd[1575]: 2024-10-09 07:44:08.801 [INFO][4304] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="18b3e3f3469aa95a5537c35318081ddbb84ac466da19ad19c3d15adf1062c69b" iface="eth0" netns="/var/run/netns/cni-1e37064a-5ef3-ea8d-6b05-d8034f6b9eda" Oct 9 07:44:08.985557 containerd[1575]: 2024-10-09 07:44:08.802 [INFO][4304] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="18b3e3f3469aa95a5537c35318081ddbb84ac466da19ad19c3d15adf1062c69b" iface="eth0" netns="/var/run/netns/cni-1e37064a-5ef3-ea8d-6b05-d8034f6b9eda" Oct 9 07:44:08.985557 containerd[1575]: 2024-10-09 07:44:08.802 [INFO][4304] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="18b3e3f3469aa95a5537c35318081ddbb84ac466da19ad19c3d15adf1062c69b" iface="eth0" netns="/var/run/netns/cni-1e37064a-5ef3-ea8d-6b05-d8034f6b9eda" Oct 9 07:44:08.985557 containerd[1575]: 2024-10-09 07:44:08.802 [INFO][4304] k8s.go 615: Releasing IP address(es) ContainerID="18b3e3f3469aa95a5537c35318081ddbb84ac466da19ad19c3d15adf1062c69b" Oct 9 07:44:08.985557 containerd[1575]: 2024-10-09 07:44:08.802 [INFO][4304] utils.go 188: Calico CNI releasing IP address ContainerID="18b3e3f3469aa95a5537c35318081ddbb84ac466da19ad19c3d15adf1062c69b" Oct 9 07:44:08.985557 containerd[1575]: 2024-10-09 07:44:08.912 [INFO][4323] ipam_plugin.go 417: Releasing address using handleID ContainerID="18b3e3f3469aa95a5537c35318081ddbb84ac466da19ad19c3d15adf1062c69b" HandleID="k8s-pod-network.18b3e3f3469aa95a5537c35318081ddbb84ac466da19ad19c3d15adf1062c69b" Workload="ci--3975--2--2--3--e7db599e29.novalocal-k8s-coredns--76f75df574--swjxh-eth0" Oct 9 07:44:08.985557 containerd[1575]: 2024-10-09 07:44:08.912 [INFO][4323] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:44:08.985557 containerd[1575]: 2024-10-09 07:44:08.946 [INFO][4323] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:44:08.985557 containerd[1575]: 2024-10-09 07:44:08.963 [WARNING][4323] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="18b3e3f3469aa95a5537c35318081ddbb84ac466da19ad19c3d15adf1062c69b" HandleID="k8s-pod-network.18b3e3f3469aa95a5537c35318081ddbb84ac466da19ad19c3d15adf1062c69b" Workload="ci--3975--2--2--3--e7db599e29.novalocal-k8s-coredns--76f75df574--swjxh-eth0" Oct 9 07:44:08.985557 containerd[1575]: 2024-10-09 07:44:08.963 [INFO][4323] ipam_plugin.go 445: Releasing address using workloadID ContainerID="18b3e3f3469aa95a5537c35318081ddbb84ac466da19ad19c3d15adf1062c69b" HandleID="k8s-pod-network.18b3e3f3469aa95a5537c35318081ddbb84ac466da19ad19c3d15adf1062c69b" Workload="ci--3975--2--2--3--e7db599e29.novalocal-k8s-coredns--76f75df574--swjxh-eth0" Oct 9 07:44:08.985557 containerd[1575]: 2024-10-09 07:44:08.967 [INFO][4323] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:44:08.985557 containerd[1575]: 2024-10-09 07:44:08.972 [INFO][4304] k8s.go 621: Teardown processing complete. ContainerID="18b3e3f3469aa95a5537c35318081ddbb84ac466da19ad19c3d15adf1062c69b" Oct 9 07:44:08.991680 containerd[1575]: time="2024-10-09T07:44:08.991540170Z" level=info msg="TearDown network for sandbox \"18b3e3f3469aa95a5537c35318081ddbb84ac466da19ad19c3d15adf1062c69b\" successfully" Oct 9 07:44:08.991680 containerd[1575]: time="2024-10-09T07:44:08.991589072Z" level=info msg="StopPodSandbox for \"18b3e3f3469aa95a5537c35318081ddbb84ac466da19ad19c3d15adf1062c69b\" returns successfully" Oct 9 07:44:08.996166 systemd[1]: run-netns-cni\x2d1e37064a\x2d5ef3\x2dea8d\x2d6b05\x2dd8034f6b9eda.mount: Deactivated successfully. 
Oct 9 07:44:08.997102 containerd[1575]: time="2024-10-09T07:44:08.996127185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-swjxh,Uid:5c1237f3-306f-4b76-aa33-787e19b1ef8a,Namespace:kube-system,Attempt:1,}" Oct 9 07:44:09.307140 systemd-networkd[1211]: cali892fd090b9a: Link UP Oct 9 07:44:09.307353 systemd-networkd[1211]: cali892fd090b9a: Gained carrier Oct 9 07:44:09.345361 containerd[1575]: 2024-10-09 07:44:09.100 [INFO][4336] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975--2--2--3--e7db599e29.novalocal-k8s-coredns--76f75df574--wxf26-eth0 coredns-76f75df574- kube-system 9094bb4d-511f-44a7-8e94-3160d3cf1a33 701 0 2024-10-09 07:43:29 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3975-2-2-3-e7db599e29.novalocal coredns-76f75df574-wxf26 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali892fd090b9a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="7bfba5c1834813295ac6ab0fac1fefb25ddd9f56ef30d9fb2f845b0ce0e2dd74" Namespace="kube-system" Pod="coredns-76f75df574-wxf26" WorkloadEndpoint="ci--3975--2--2--3--e7db599e29.novalocal-k8s-coredns--76f75df574--wxf26-" Oct 9 07:44:09.345361 containerd[1575]: 2024-10-09 07:44:09.101 [INFO][4336] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7bfba5c1834813295ac6ab0fac1fefb25ddd9f56ef30d9fb2f845b0ce0e2dd74" Namespace="kube-system" Pod="coredns-76f75df574-wxf26" WorkloadEndpoint="ci--3975--2--2--3--e7db599e29.novalocal-k8s-coredns--76f75df574--wxf26-eth0" Oct 9 07:44:09.345361 containerd[1575]: 2024-10-09 07:44:09.215 [INFO][4370] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7bfba5c1834813295ac6ab0fac1fefb25ddd9f56ef30d9fb2f845b0ce0e2dd74" HandleID="k8s-pod-network.7bfba5c1834813295ac6ab0fac1fefb25ddd9f56ef30d9fb2f845b0ce0e2dd74" Workload="ci--3975--2--2--3--e7db599e29.novalocal-k8s-coredns--76f75df574--wxf26-eth0" Oct 9 07:44:09.345361 containerd[1575]: 2024-10-09 07:44:09.230 [INFO][4370] ipam_plugin.go 270: Auto assigning IP ContainerID="7bfba5c1834813295ac6ab0fac1fefb25ddd9f56ef30d9fb2f845b0ce0e2dd74" HandleID="k8s-pod-network.7bfba5c1834813295ac6ab0fac1fefb25ddd9f56ef30d9fb2f845b0ce0e2dd74" Workload="ci--3975--2--2--3--e7db599e29.novalocal-k8s-coredns--76f75df574--wxf26-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002bee50), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3975-2-2-3-e7db599e29.novalocal", "pod":"coredns-76f75df574-wxf26", "timestamp":"2024-10-09 07:44:09.215277063 +0000 UTC"}, Hostname:"ci-3975-2-2-3-e7db599e29.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 07:44:09.345361 containerd[1575]: 2024-10-09 07:44:09.230 [INFO][4370] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:44:09.345361 containerd[1575]: 2024-10-09 07:44:09.230 [INFO][4370] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 07:44:09.345361 containerd[1575]: 2024-10-09 07:44:09.231 [INFO][4370] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975-2-2-3-e7db599e29.novalocal' Oct 9 07:44:09.345361 containerd[1575]: 2024-10-09 07:44:09.236 [INFO][4370] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7bfba5c1834813295ac6ab0fac1fefb25ddd9f56ef30d9fb2f845b0ce0e2dd74" host="ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:44:09.345361 containerd[1575]: 2024-10-09 07:44:09.247 [INFO][4370] ipam.go 372: Looking up existing affinities for host host="ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:44:09.345361 containerd[1575]: 2024-10-09 07:44:09.259 [INFO][4370] ipam.go 489: Trying affinity for 192.168.74.0/26 host="ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:44:09.345361 containerd[1575]: 2024-10-09 07:44:09.263 [INFO][4370] ipam.go 155: Attempting to load block cidr=192.168.74.0/26 host="ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:44:09.345361 containerd[1575]: 2024-10-09 07:44:09.270 [INFO][4370] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.74.0/26 host="ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:44:09.345361 containerd[1575]: 2024-10-09 07:44:09.270 [INFO][4370] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.74.0/26 handle="k8s-pod-network.7bfba5c1834813295ac6ab0fac1fefb25ddd9f56ef30d9fb2f845b0ce0e2dd74" host="ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:44:09.345361 containerd[1575]: 2024-10-09 07:44:09.272 [INFO][4370] ipam.go 1685: Creating new handle: k8s-pod-network.7bfba5c1834813295ac6ab0fac1fefb25ddd9f56ef30d9fb2f845b0ce0e2dd74 Oct 9 07:44:09.345361 containerd[1575]: 2024-10-09 07:44:09.283 [INFO][4370] ipam.go 1203: Writing block in order to claim IPs block=192.168.74.0/26 handle="k8s-pod-network.7bfba5c1834813295ac6ab0fac1fefb25ddd9f56ef30d9fb2f845b0ce0e2dd74" host="ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:44:09.345361 containerd[1575]: 2024-10-09 07:44:09.292 [INFO][4370] ipam.go 1216: Successfully claimed IPs: [192.168.74.2/26] block=192.168.74.0/26 handle="k8s-pod-network.7bfba5c1834813295ac6ab0fac1fefb25ddd9f56ef30d9fb2f845b0ce0e2dd74" host="ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:44:09.345361 containerd[1575]: 2024-10-09 07:44:09.292 [INFO][4370] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.74.2/26] handle="k8s-pod-network.7bfba5c1834813295ac6ab0fac1fefb25ddd9f56ef30d9fb2f845b0ce0e2dd74" host="ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:44:09.345361 containerd[1575]: 2024-10-09 07:44:09.292 [INFO][4370] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 9 07:44:09.345361 containerd[1575]: 2024-10-09 07:44:09.292 [INFO][4370] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.74.2/26] IPv6=[] ContainerID="7bfba5c1834813295ac6ab0fac1fefb25ddd9f56ef30d9fb2f845b0ce0e2dd74" HandleID="k8s-pod-network.7bfba5c1834813295ac6ab0fac1fefb25ddd9f56ef30d9fb2f845b0ce0e2dd74" Workload="ci--3975--2--2--3--e7db599e29.novalocal-k8s-coredns--76f75df574--wxf26-eth0" Oct 9 07:44:09.346489 containerd[1575]: 2024-10-09 07:44:09.299 [INFO][4336] k8s.go 386: Populated endpoint ContainerID="7bfba5c1834813295ac6ab0fac1fefb25ddd9f56ef30d9fb2f845b0ce0e2dd74" Namespace="kube-system" Pod="coredns-76f75df574-wxf26" WorkloadEndpoint="ci--3975--2--2--3--e7db599e29.novalocal-k8s-coredns--76f75df574--wxf26-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--3--e7db599e29.novalocal-k8s-coredns--76f75df574--wxf26-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"9094bb4d-511f-44a7-8e94-3160d3cf1a33", ResourceVersion:"701", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 43, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-3-e7db599e29.novalocal", ContainerID:"", Pod:"coredns-76f75df574-wxf26", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.74.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali892fd090b9a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:44:09.346489 containerd[1575]: 2024-10-09 07:44:09.300 [INFO][4336] k8s.go 387: Calico CNI using IPs: [192.168.74.2/32] ContainerID="7bfba5c1834813295ac6ab0fac1fefb25ddd9f56ef30d9fb2f845b0ce0e2dd74" Namespace="kube-system" Pod="coredns-76f75df574-wxf26" WorkloadEndpoint="ci--3975--2--2--3--e7db599e29.novalocal-k8s-coredns--76f75df574--wxf26-eth0" Oct 9 07:44:09.346489 containerd[1575]: 2024-10-09 07:44:09.300 [INFO][4336] dataplane_linux.go 68: Setting the host side veth name to cali892fd090b9a ContainerID="7bfba5c1834813295ac6ab0fac1fefb25ddd9f56ef30d9fb2f845b0ce0e2dd74" Namespace="kube-system" Pod="coredns-76f75df574-wxf26" WorkloadEndpoint="ci--3975--2--2--3--e7db599e29.novalocal-k8s-coredns--76f75df574--wxf26-eth0" Oct 9 07:44:09.346489 containerd[1575]: 2024-10-09 07:44:09.306 [INFO][4336] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="7bfba5c1834813295ac6ab0fac1fefb25ddd9f56ef30d9fb2f845b0ce0e2dd74" Namespace="kube-system" Pod="coredns-76f75df574-wxf26" 
WorkloadEndpoint="ci--3975--2--2--3--e7db599e29.novalocal-k8s-coredns--76f75df574--wxf26-eth0" Oct 9 07:44:09.346489 containerd[1575]: 2024-10-09 07:44:09.314 [INFO][4336] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7bfba5c1834813295ac6ab0fac1fefb25ddd9f56ef30d9fb2f845b0ce0e2dd74" Namespace="kube-system" Pod="coredns-76f75df574-wxf26" WorkloadEndpoint="ci--3975--2--2--3--e7db599e29.novalocal-k8s-coredns--76f75df574--wxf26-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--3--e7db599e29.novalocal-k8s-coredns--76f75df574--wxf26-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"9094bb4d-511f-44a7-8e94-3160d3cf1a33", ResourceVersion:"701", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 43, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-3-e7db599e29.novalocal", ContainerID:"7bfba5c1834813295ac6ab0fac1fefb25ddd9f56ef30d9fb2f845b0ce0e2dd74", Pod:"coredns-76f75df574-wxf26", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.74.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali892fd090b9a", MAC:"8e:6e:c7:31:e4:a1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:44:09.346489 containerd[1575]: 2024-10-09 07:44:09.338 [INFO][4336] k8s.go 500: Wrote updated endpoint to datastore ContainerID="7bfba5c1834813295ac6ab0fac1fefb25ddd9f56ef30d9fb2f845b0ce0e2dd74" Namespace="kube-system" Pod="coredns-76f75df574-wxf26" WorkloadEndpoint="ci--3975--2--2--3--e7db599e29.novalocal-k8s-coredns--76f75df574--wxf26-eth0" Oct 9 07:44:09.386187 systemd-networkd[1211]: cali5aa07a2ac32: Link UP Oct 9 07:44:09.386791 systemd-networkd[1211]: cali5aa07a2ac32: Gained carrier Oct 9 07:44:09.422120 containerd[1575]: 2024-10-09 07:44:09.117 [INFO][4346] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975--2--2--3--e7db599e29.novalocal-k8s-csi--node--driver--qq7rv-eth0 csi-node-driver- calico-system 27440826-f1e0-45d4-b3d8-3225a63f893c 703 0 2024-10-09 07:43:36 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78cd84fb8c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ci-3975-2-2-3-e7db599e29.novalocal csi-node-driver-qq7rv eth0 default [] [] 
[kns.calico-system ksa.calico-system.default] cali5aa07a2ac32 [] []}} ContainerID="267977aa4dc7cb863c8ee034ea6cf34f7852eda514ac9bbcb5898daee9e185c5" Namespace="calico-system" Pod="csi-node-driver-qq7rv" WorkloadEndpoint="ci--3975--2--2--3--e7db599e29.novalocal-k8s-csi--node--driver--qq7rv-" Oct 9 07:44:09.422120 containerd[1575]: 2024-10-09 07:44:09.118 [INFO][4346] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="267977aa4dc7cb863c8ee034ea6cf34f7852eda514ac9bbcb5898daee9e185c5" Namespace="calico-system" Pod="csi-node-driver-qq7rv" WorkloadEndpoint="ci--3975--2--2--3--e7db599e29.novalocal-k8s-csi--node--driver--qq7rv-eth0" Oct 9 07:44:09.422120 containerd[1575]: 2024-10-09 07:44:09.277 [INFO][4374] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="267977aa4dc7cb863c8ee034ea6cf34f7852eda514ac9bbcb5898daee9e185c5" HandleID="k8s-pod-network.267977aa4dc7cb863c8ee034ea6cf34f7852eda514ac9bbcb5898daee9e185c5" Workload="ci--3975--2--2--3--e7db599e29.novalocal-k8s-csi--node--driver--qq7rv-eth0" Oct 9 07:44:09.422120 containerd[1575]: 2024-10-09 07:44:09.297 [INFO][4374] ipam_plugin.go 270: Auto assigning IP ContainerID="267977aa4dc7cb863c8ee034ea6cf34f7852eda514ac9bbcb5898daee9e185c5" HandleID="k8s-pod-network.267977aa4dc7cb863c8ee034ea6cf34f7852eda514ac9bbcb5898daee9e185c5" Workload="ci--3975--2--2--3--e7db599e29.novalocal-k8s-csi--node--driver--qq7rv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00034d5f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3975-2-2-3-e7db599e29.novalocal", "pod":"csi-node-driver-qq7rv", "timestamp":"2024-10-09 07:44:09.277093928 +0000 UTC"}, Hostname:"ci-3975-2-2-3-e7db599e29.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 07:44:09.422120 containerd[1575]: 2024-10-09 07:44:09.297 [INFO][4374] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:44:09.422120 containerd[1575]: 2024-10-09 07:44:09.297 [INFO][4374] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 07:44:09.422120 containerd[1575]: 2024-10-09 07:44:09.297 [INFO][4374] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975-2-2-3-e7db599e29.novalocal' Oct 9 07:44:09.422120 containerd[1575]: 2024-10-09 07:44:09.301 [INFO][4374] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.267977aa4dc7cb863c8ee034ea6cf34f7852eda514ac9bbcb5898daee9e185c5" host="ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:44:09.422120 containerd[1575]: 2024-10-09 07:44:09.316 [INFO][4374] ipam.go 372: Looking up existing affinities for host host="ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:44:09.422120 containerd[1575]: 2024-10-09 07:44:09.340 [INFO][4374] ipam.go 489: Trying affinity for 192.168.74.0/26 host="ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:44:09.422120 containerd[1575]: 2024-10-09 07:44:09.347 [INFO][4374] ipam.go 155: Attempting to load block cidr=192.168.74.0/26 host="ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:44:09.422120 containerd[1575]: 2024-10-09 07:44:09.357 [INFO][4374] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.74.0/26 host="ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:44:09.422120 containerd[1575]: 2024-10-09 07:44:09.358 [INFO][4374] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.74.0/26 handle="k8s-pod-network.267977aa4dc7cb863c8ee034ea6cf34f7852eda514ac9bbcb5898daee9e185c5" host="ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:44:09.422120 containerd[1575]: 2024-10-09 07:44:09.360 [INFO][4374] ipam.go 1685: Creating new handle: k8s-pod-network.267977aa4dc7cb863c8ee034ea6cf34f7852eda514ac9bbcb5898daee9e185c5 Oct 9 07:44:09.422120 containerd[1575]: 2024-10-09 07:44:09.368 [INFO][4374] ipam.go 1203: Writing block in order to claim IPs block=192.168.74.0/26 handle="k8s-pod-network.267977aa4dc7cb863c8ee034ea6cf34f7852eda514ac9bbcb5898daee9e185c5" host="ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:44:09.422120 containerd[1575]: 2024-10-09 07:44:09.376 [INFO][4374] ipam.go 1216: Successfully claimed IPs: [192.168.74.3/26] block=192.168.74.0/26 handle="k8s-pod-network.267977aa4dc7cb863c8ee034ea6cf34f7852eda514ac9bbcb5898daee9e185c5" host="ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:44:09.422120 containerd[1575]: 2024-10-09 07:44:09.378 [INFO][4374] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.74.3/26] handle="k8s-pod-network.267977aa4dc7cb863c8ee034ea6cf34f7852eda514ac9bbcb5898daee9e185c5" host="ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:44:09.422120 containerd[1575]: 2024-10-09 07:44:09.378 [INFO][4374] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 9 07:44:09.422120 containerd[1575]: 2024-10-09 07:44:09.378 [INFO][4374] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.74.3/26] IPv6=[] ContainerID="267977aa4dc7cb863c8ee034ea6cf34f7852eda514ac9bbcb5898daee9e185c5" HandleID="k8s-pod-network.267977aa4dc7cb863c8ee034ea6cf34f7852eda514ac9bbcb5898daee9e185c5" Workload="ci--3975--2--2--3--e7db599e29.novalocal-k8s-csi--node--driver--qq7rv-eth0" Oct 9 07:44:09.424937 containerd[1575]: 2024-10-09 07:44:09.381 [INFO][4346] k8s.go 386: Populated endpoint ContainerID="267977aa4dc7cb863c8ee034ea6cf34f7852eda514ac9bbcb5898daee9e185c5" Namespace="calico-system" Pod="csi-node-driver-qq7rv" WorkloadEndpoint="ci--3975--2--2--3--e7db599e29.novalocal-k8s-csi--node--driver--qq7rv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--3--e7db599e29.novalocal-k8s-csi--node--driver--qq7rv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"27440826-f1e0-45d4-b3d8-3225a63f893c", ResourceVersion:"703", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 43, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-3-e7db599e29.novalocal", ContainerID:"", Pod:"csi-node-driver-qq7rv", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.74.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali5aa07a2ac32", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:44:09.424937 containerd[1575]: 2024-10-09 07:44:09.381 [INFO][4346] k8s.go 387: Calico CNI using IPs: [192.168.74.3/32] ContainerID="267977aa4dc7cb863c8ee034ea6cf34f7852eda514ac9bbcb5898daee9e185c5" Namespace="calico-system" Pod="csi-node-driver-qq7rv" WorkloadEndpoint="ci--3975--2--2--3--e7db599e29.novalocal-k8s-csi--node--driver--qq7rv-eth0" Oct 9 07:44:09.424937 containerd[1575]: 2024-10-09 07:44:09.381 [INFO][4346] dataplane_linux.go 68: Setting the host side veth name to cali5aa07a2ac32 ContainerID="267977aa4dc7cb863c8ee034ea6cf34f7852eda514ac9bbcb5898daee9e185c5" Namespace="calico-system" Pod="csi-node-driver-qq7rv" WorkloadEndpoint="ci--3975--2--2--3--e7db599e29.novalocal-k8s-csi--node--driver--qq7rv-eth0" Oct 9 07:44:09.424937 containerd[1575]: 2024-10-09 07:44:09.386 [INFO][4346] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="267977aa4dc7cb863c8ee034ea6cf34f7852eda514ac9bbcb5898daee9e185c5" Namespace="calico-system" Pod="csi-node-driver-qq7rv" WorkloadEndpoint="ci--3975--2--2--3--e7db599e29.novalocal-k8s-csi--node--driver--qq7rv-eth0" Oct 9 07:44:09.424937 containerd[1575]: 2024-10-09 07:44:09.389 [INFO][4346] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="267977aa4dc7cb863c8ee034ea6cf34f7852eda514ac9bbcb5898daee9e185c5" 
Namespace="calico-system" Pod="csi-node-driver-qq7rv" WorkloadEndpoint="ci--3975--2--2--3--e7db599e29.novalocal-k8s-csi--node--driver--qq7rv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--3--e7db599e29.novalocal-k8s-csi--node--driver--qq7rv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"27440826-f1e0-45d4-b3d8-3225a63f893c", ResourceVersion:"703", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 43, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-3-e7db599e29.novalocal", ContainerID:"267977aa4dc7cb863c8ee034ea6cf34f7852eda514ac9bbcb5898daee9e185c5", Pod:"csi-node-driver-qq7rv", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.74.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali5aa07a2ac32", MAC:"96:28:88:1f:13:71", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:44:09.424937 containerd[1575]: 2024-10-09 07:44:09.417 [INFO][4346] k8s.go 500: Wrote updated endpoint to datastore ContainerID="267977aa4dc7cb863c8ee034ea6cf34f7852eda514ac9bbcb5898daee9e185c5" Namespace="calico-system" Pod="csi-node-driver-qq7rv" WorkloadEndpoint="ci--3975--2--2--3--e7db599e29.novalocal-k8s-csi--node--driver--qq7rv-eth0" Oct 9 07:44:09.445553 containerd[1575]: time="2024-10-09T07:44:09.444377772Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:44:09.445553 containerd[1575]: time="2024-10-09T07:44:09.444434478Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:44:09.445553 containerd[1575]: time="2024-10-09T07:44:09.444468041Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:44:09.445553 containerd[1575]: time="2024-10-09T07:44:09.444486365Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:44:09.472151 systemd-networkd[1211]: cali88e70e179be: Link UP Oct 9 07:44:09.472362 systemd-networkd[1211]: cali88e70e179be: Gained carrier Oct 9 07:44:09.499860 containerd[1575]: 2024-10-09 07:44:09.224 [INFO][4358] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975--2--2--3--e7db599e29.novalocal-k8s-coredns--76f75df574--swjxh-eth0 coredns-76f75df574- kube-system 5c1237f3-306f-4b76-aa33-787e19b1ef8a 702 0 2024-10-09 07:43:29 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3975-2-2-3-e7db599e29.novalocal coredns-76f75df574-swjxh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali88e70e179be [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="d32825426b2a4824835f8a42b5787dc690e269c86357a6620d4c5c5ec2fb8ef9" Namespace="kube-system" Pod="coredns-76f75df574-swjxh" WorkloadEndpoint="ci--3975--2--2--3--e7db599e29.novalocal-k8s-coredns--76f75df574--swjxh-" Oct 9 07:44:09.499860 containerd[1575]: 2024-10-09 07:44:09.224 [INFO][4358] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d32825426b2a4824835f8a42b5787dc690e269c86357a6620d4c5c5ec2fb8ef9" Namespace="kube-system" Pod="coredns-76f75df574-swjxh" WorkloadEndpoint="ci--3975--2--2--3--e7db599e29.novalocal-k8s-coredns--76f75df574--swjxh-eth0" Oct 9 07:44:09.499860 containerd[1575]: 2024-10-09 07:44:09.347 [INFO][4383] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d32825426b2a4824835f8a42b5787dc690e269c86357a6620d4c5c5ec2fb8ef9" HandleID="k8s-pod-network.d32825426b2a4824835f8a42b5787dc690e269c86357a6620d4c5c5ec2fb8ef9" Workload="ci--3975--2--2--3--e7db599e29.novalocal-k8s-coredns--76f75df574--swjxh-eth0" Oct 9 07:44:09.499860 containerd[1575]: 2024-10-09 07:44:09.362 [INFO][4383] ipam_plugin.go 270: Auto assigning IP ContainerID="d32825426b2a4824835f8a42b5787dc690e269c86357a6620d4c5c5ec2fb8ef9" HandleID="k8s-pod-network.d32825426b2a4824835f8a42b5787dc690e269c86357a6620d4c5c5ec2fb8ef9" Workload="ci--3975--2--2--3--e7db599e29.novalocal-k8s-coredns--76f75df574--swjxh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000502b0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3975-2-2-3-e7db599e29.novalocal", "pod":"coredns-76f75df574-swjxh", "timestamp":"2024-10-09 07:44:09.347942054 +0000 UTC"}, Hostname:"ci-3975-2-2-3-e7db599e29.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 07:44:09.499860 containerd[1575]: 2024-10-09 07:44:09.362 [INFO][4383] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:44:09.499860 containerd[1575]: 2024-10-09 07:44:09.378 [INFO][4383] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 07:44:09.499860 containerd[1575]: 2024-10-09 07:44:09.378 [INFO][4383] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975-2-2-3-e7db599e29.novalocal' Oct 9 07:44:09.499860 containerd[1575]: 2024-10-09 07:44:09.381 [INFO][4383] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d32825426b2a4824835f8a42b5787dc690e269c86357a6620d4c5c5ec2fb8ef9" host="ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:44:09.499860 containerd[1575]: 2024-10-09 07:44:09.398 [INFO][4383] ipam.go 372: Looking up existing affinities for host host="ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:44:09.499860 containerd[1575]: 2024-10-09 07:44:09.407 [INFO][4383] ipam.go 489: Trying affinity for 192.168.74.0/26 host="ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:44:09.499860 containerd[1575]: 2024-10-09 07:44:09.413 [INFO][4383] ipam.go 155: Attempting to load block cidr=192.168.74.0/26 host="ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:44:09.499860 containerd[1575]: 2024-10-09 07:44:09.419 [INFO][4383] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.74.0/26 host="ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:44:09.499860 containerd[1575]: 2024-10-09 07:44:09.420 [INFO][4383] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.74.0/26 handle="k8s-pod-network.d32825426b2a4824835f8a42b5787dc690e269c86357a6620d4c5c5ec2fb8ef9" host="ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:44:09.499860 containerd[1575]: 2024-10-09 07:44:09.428 [INFO][4383] ipam.go 1685: Creating new handle: k8s-pod-network.d32825426b2a4824835f8a42b5787dc690e269c86357a6620d4c5c5ec2fb8ef9 Oct 9 07:44:09.499860 containerd[1575]: 2024-10-09 07:44:09.444 [INFO][4383] ipam.go 1203: Writing block in order to claim IPs block=192.168.74.0/26 handle="k8s-pod-network.d32825426b2a4824835f8a42b5787dc690e269c86357a6620d4c5c5ec2fb8ef9" host="ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:44:09.499860 containerd[1575]: 2024-10-09 07:44:09.457 [INFO][4383] ipam.go 1216: Successfully claimed IPs: [192.168.74.4/26] block=192.168.74.0/26 handle="k8s-pod-network.d32825426b2a4824835f8a42b5787dc690e269c86357a6620d4c5c5ec2fb8ef9" host="ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:44:09.499860 containerd[1575]: 2024-10-09 07:44:09.457 [INFO][4383] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.74.4/26] handle="k8s-pod-network.d32825426b2a4824835f8a42b5787dc690e269c86357a6620d4c5c5ec2fb8ef9" host="ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:44:09.499860 containerd[1575]: 2024-10-09 07:44:09.458 [INFO][4383] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 9 07:44:09.499860 containerd[1575]: 2024-10-09 07:44:09.458 [INFO][4383] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.74.4/26] IPv6=[] ContainerID="d32825426b2a4824835f8a42b5787dc690e269c86357a6620d4c5c5ec2fb8ef9" HandleID="k8s-pod-network.d32825426b2a4824835f8a42b5787dc690e269c86357a6620d4c5c5ec2fb8ef9" Workload="ci--3975--2--2--3--e7db599e29.novalocal-k8s-coredns--76f75df574--swjxh-eth0" Oct 9 07:44:09.501754 containerd[1575]: 2024-10-09 07:44:09.465 [INFO][4358] k8s.go 386: Populated endpoint ContainerID="d32825426b2a4824835f8a42b5787dc690e269c86357a6620d4c5c5ec2fb8ef9" Namespace="kube-system" Pod="coredns-76f75df574-swjxh" WorkloadEndpoint="ci--3975--2--2--3--e7db599e29.novalocal-k8s-coredns--76f75df574--swjxh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--3--e7db599e29.novalocal-k8s-coredns--76f75df574--swjxh-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"5c1237f3-306f-4b76-aa33-787e19b1ef8a", ResourceVersion:"702", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 43, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-3-e7db599e29.novalocal", ContainerID:"", Pod:"coredns-76f75df574-swjxh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.74.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali88e70e179be", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:44:09.501754 containerd[1575]: 2024-10-09 07:44:09.465 [INFO][4358] k8s.go 387: Calico CNI using IPs: [192.168.74.4/32] ContainerID="d32825426b2a4824835f8a42b5787dc690e269c86357a6620d4c5c5ec2fb8ef9" Namespace="kube-system" Pod="coredns-76f75df574-swjxh" WorkloadEndpoint="ci--3975--2--2--3--e7db599e29.novalocal-k8s-coredns--76f75df574--swjxh-eth0" Oct 9 07:44:09.501754 containerd[1575]: 2024-10-09 07:44:09.466 [INFO][4358] dataplane_linux.go 68: Setting the host side veth name to cali88e70e179be ContainerID="d32825426b2a4824835f8a42b5787dc690e269c86357a6620d4c5c5ec2fb8ef9" Namespace="kube-system" Pod="coredns-76f75df574-swjxh" WorkloadEndpoint="ci--3975--2--2--3--e7db599e29.novalocal-k8s-coredns--76f75df574--swjxh-eth0" Oct 9 07:44:09.501754 containerd[1575]: 2024-10-09 07:44:09.468 [INFO][4358] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="d32825426b2a4824835f8a42b5787dc690e269c86357a6620d4c5c5ec2fb8ef9" Namespace="kube-system" Pod="coredns-76f75df574-swjxh" 
WorkloadEndpoint="ci--3975--2--2--3--e7db599e29.novalocal-k8s-coredns--76f75df574--swjxh-eth0" Oct 9 07:44:09.501754 containerd[1575]: 2024-10-09 07:44:09.469 [INFO][4358] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d32825426b2a4824835f8a42b5787dc690e269c86357a6620d4c5c5ec2fb8ef9" Namespace="kube-system" Pod="coredns-76f75df574-swjxh" WorkloadEndpoint="ci--3975--2--2--3--e7db599e29.novalocal-k8s-coredns--76f75df574--swjxh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--3--e7db599e29.novalocal-k8s-coredns--76f75df574--swjxh-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"5c1237f3-306f-4b76-aa33-787e19b1ef8a", ResourceVersion:"702", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 43, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-3-e7db599e29.novalocal", ContainerID:"d32825426b2a4824835f8a42b5787dc690e269c86357a6620d4c5c5ec2fb8ef9", Pod:"coredns-76f75df574-swjxh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.74.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali88e70e179be", MAC:"06:96:88:62:f0:37", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:44:09.501754 containerd[1575]: 2024-10-09 07:44:09.488 [INFO][4358] k8s.go 500: Wrote updated endpoint to datastore ContainerID="d32825426b2a4824835f8a42b5787dc690e269c86357a6620d4c5c5ec2fb8ef9" Namespace="kube-system" Pod="coredns-76f75df574-swjxh" WorkloadEndpoint="ci--3975--2--2--3--e7db599e29.novalocal-k8s-coredns--76f75df574--swjxh-eth0" Oct 9 07:44:09.583604 containerd[1575]: time="2024-10-09T07:44:09.582159680Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:44:09.585454 containerd[1575]: time="2024-10-09T07:44:09.585412111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:44:09.586548 containerd[1575]: time="2024-10-09T07:44:09.586413691Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:44:09.586672 containerd[1575]: time="2024-10-09T07:44:09.586633824Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:44:09.616424 containerd[1575]: time="2024-10-09T07:44:09.616220076Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:44:09.617894 containerd[1575]: time="2024-10-09T07:44:09.616823257Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:44:09.617894 containerd[1575]: time="2024-10-09T07:44:09.617754493Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:44:09.617894 containerd[1575]: time="2024-10-09T07:44:09.617770964Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:44:09.651508 containerd[1575]: time="2024-10-09T07:44:09.650898510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-wxf26,Uid:9094bb4d-511f-44a7-8e94-3160d3cf1a33,Namespace:kube-system,Attempt:1,} returns sandbox id \"7bfba5c1834813295ac6ab0fac1fefb25ddd9f56ef30d9fb2f845b0ce0e2dd74\"" Oct 9 07:44:09.667078 containerd[1575]: time="2024-10-09T07:44:09.667027216Z" level=info msg="CreateContainer within sandbox \"7bfba5c1834813295ac6ab0fac1fefb25ddd9f56ef30d9fb2f845b0ce0e2dd74\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 9 07:44:09.723274 containerd[1575]: time="2024-10-09T07:44:09.723231822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qq7rv,Uid:27440826-f1e0-45d4-b3d8-3225a63f893c,Namespace:calico-system,Attempt:1,} returns sandbox id \"267977aa4dc7cb863c8ee034ea6cf34f7852eda514ac9bbcb5898daee9e185c5\"" Oct 9 07:44:09.736392 containerd[1575]: time="2024-10-09T07:44:09.735325849Z" level=info msg="CreateContainer within sandbox \"7bfba5c1834813295ac6ab0fac1fefb25ddd9f56ef30d9fb2f845b0ce0e2dd74\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"66b43a0db328d79359f282f49859e15685c67ac569dfca1fb8a0678836680239\"" Oct 9 07:44:09.737673 containerd[1575]: time="2024-10-09T07:44:09.737644850Z" level=info msg="StartContainer for \"66b43a0db328d79359f282f49859e15685c67ac569dfca1fb8a0678836680239\"" Oct 9 07:44:09.769924 containerd[1575]: time="2024-10-09T07:44:09.769887805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-swjxh,Uid:5c1237f3-306f-4b76-aa33-787e19b1ef8a,Namespace:kube-system,Attempt:1,} returns sandbox id \"d32825426b2a4824835f8a42b5787dc690e269c86357a6620d4c5c5ec2fb8ef9\"" Oct 9 07:44:09.774009 containerd[1575]: time="2024-10-09T07:44:09.773963882Z" level=info msg="CreateContainer within sandbox \"d32825426b2a4824835f8a42b5787dc690e269c86357a6620d4c5c5ec2fb8ef9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 9 07:44:09.817717 containerd[1575]: time="2024-10-09T07:44:09.815551417Z" level=info msg="CreateContainer within sandbox \"d32825426b2a4824835f8a42b5787dc690e269c86357a6620d4c5c5ec2fb8ef9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d40f35d9d846e2c9b265b8f6c8f3ae84beaf0b58855f881376d8f8502ea13c64\"" Oct 9 07:44:09.827300 containerd[1575]: time="2024-10-09T07:44:09.827011646Z" level=info msg="StartContainer for \"d40f35d9d846e2c9b265b8f6c8f3ae84beaf0b58855f881376d8f8502ea13c64\"" Oct 9 07:44:09.905608 containerd[1575]: time="2024-10-09T07:44:09.905076030Z" level=info msg="StartContainer for 
\"66b43a0db328d79359f282f49859e15685c67ac569dfca1fb8a0678836680239\" returns successfully" Oct 9 07:44:10.062583 containerd[1575]: time="2024-10-09T07:44:10.061905499Z" level=info msg="StartContainer for \"d40f35d9d846e2c9b265b8f6c8f3ae84beaf0b58855f881376d8f8502ea13c64\" returns successfully" Oct 9 07:44:10.553014 systemd-networkd[1211]: cali88e70e179be: Gained IPv6LL Oct 9 07:44:10.624392 systemd-journald[1128]: Under memory pressure, flushing caches. Oct 9 07:44:10.618712 systemd-resolved[1467]: Under memory pressure, flushing caches. Oct 9 07:44:10.620565 systemd-resolved[1467]: Flushed all caches. Oct 9 07:44:10.683477 systemd-networkd[1211]: cali892fd090b9a: Gained IPv6LL Oct 9 07:44:10.808715 systemd-networkd[1211]: cali5aa07a2ac32: Gained IPv6LL Oct 9 07:44:11.051034 kubelet[2836]: I1009 07:44:11.050858 2836 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-wxf26" podStartSLOduration=42.0385996 podStartE2EDuration="42.0385996s" podCreationTimestamp="2024-10-09 07:43:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 07:44:10.053000593 +0000 UTC m=+54.849093265" watchObservedRunningTime="2024-10-09 07:44:11.0385996 +0000 UTC m=+55.834692242" Oct 9 07:44:11.053125 kubelet[2836]: I1009 07:44:11.051674 2836 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-swjxh" podStartSLOduration=42.05154594 podStartE2EDuration="42.05154594s" podCreationTimestamp="2024-10-09 07:43:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 07:44:11.036900943 +0000 UTC m=+55.832993595" watchObservedRunningTime="2024-10-09 07:44:11.05154594 +0000 UTC m=+55.847638592" Oct 9 07:44:11.709334 containerd[1575]: time="2024-10-09T07:44:11.709275996Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:44:11.715343 containerd[1575]: time="2024-10-09T07:44:11.715306410Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.1: active requests=0, bytes read=33507125" Oct 9 07:44:11.722254 containerd[1575]: time="2024-10-09T07:44:11.722201347Z" level=info msg="ImageCreate event name:\"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:44:11.725060 containerd[1575]: time="2024-10-09T07:44:11.724984809Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:44:11.725965 containerd[1575]: time="2024-10-09T07:44:11.725700472Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" with image id \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\", size \"34999494\" in 4.892144753s" Oct 9 07:44:11.725965 containerd[1575]: time="2024-10-09T07:44:11.725739405Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" returns image reference 
\"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\"" Oct 9 07:44:11.726832 containerd[1575]: time="2024-10-09T07:44:11.726633121Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\"" Oct 9 07:44:11.760318 containerd[1575]: time="2024-10-09T07:44:11.760189190Z" level=info msg="CreateContainer within sandbox \"151846675a1a8e08650d42cce80f990b8059199e543261e2f13c7b166362edd5\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Oct 9 07:44:11.784822 containerd[1575]: time="2024-10-09T07:44:11.784779586Z" level=info msg="CreateContainer within sandbox \"151846675a1a8e08650d42cce80f990b8059199e543261e2f13c7b166362edd5\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"4e3087a0b8c7479300bf25660589c2b4a81474cbc4def7f6fc535cf82f44643c\"" Oct 9 07:44:11.786827 containerd[1575]: time="2024-10-09T07:44:11.785632016Z" level=info msg="StartContainer for \"4e3087a0b8c7479300bf25660589c2b4a81474cbc4def7f6fc535cf82f44643c\"" Oct 9 07:44:11.956094 containerd[1575]: time="2024-10-09T07:44:11.956034756Z" level=info msg="StartContainer for \"4e3087a0b8c7479300bf25660589c2b4a81474cbc4def7f6fc535cf82f44643c\" returns successfully" Oct 9 07:44:12.665761 systemd-resolved[1467]: Under memory pressure, flushing caches. Oct 9 07:44:12.665794 systemd-resolved[1467]: Flushed all caches. Oct 9 07:44:12.666542 systemd-journald[1128]: Under memory pressure, flushing caches. Oct 9 07:44:13.175452 kubelet[2836]: I1009 07:44:13.175412 2836 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5794c8997c-zjcp4" podStartSLOduration=32.276293039 podStartE2EDuration="37.175173325s" podCreationTimestamp="2024-10-09 07:43:36 +0000 UTC" firstStartedPulling="2024-10-09 07:44:06.827431032 +0000 UTC m=+51.623523684" lastFinishedPulling="2024-10-09 07:44:11.726311328 +0000 UTC m=+56.522403970" observedRunningTime="2024-10-09 07:44:12.049787492 +0000 UTC m=+56.845880144" watchObservedRunningTime="2024-10-09 07:44:13.175173325 +0000 UTC m=+57.971265967" Oct 9 07:44:14.696191 containerd[1575]: time="2024-10-09T07:44:14.695723082Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:44:14.697876 containerd[1575]: time="2024-10-09T07:44:14.697820849Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.1: active requests=0, bytes read=7642081" Oct 9 07:44:14.698353 containerd[1575]: time="2024-10-09T07:44:14.698325726Z" level=info msg="ImageCreate event name:\"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:44:14.701177 containerd[1575]: time="2024-10-09T07:44:14.701146860Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:44:14.702062 containerd[1575]: time="2024-10-09T07:44:14.701932565Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.1\" with image id \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\", size \"9134482\" in 2.975269387s" Oct 9 07:44:14.702062 containerd[1575]: time="2024-10-09T07:44:14.701969915Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\" returns image reference \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\"" Oct 9 07:44:14.705699 containerd[1575]: time="2024-10-09T07:44:14.705601119Z" level=info msg="CreateContainer within sandbox \"267977aa4dc7cb863c8ee034ea6cf34f7852eda514ac9bbcb5898daee9e185c5\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Oct 9 07:44:14.712805 systemd-resolved[1467]: Under memory pressure, flushing caches. Oct 9 07:44:14.712833 systemd-resolved[1467]: Flushed all caches. Oct 9 07:44:14.714636 systemd-journald[1128]: Under memory pressure, flushing caches. Oct 9 07:44:14.737036 containerd[1575]: time="2024-10-09T07:44:14.736935330Z" level=info msg="CreateContainer within sandbox \"267977aa4dc7cb863c8ee034ea6cf34f7852eda514ac9bbcb5898daee9e185c5\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"8a72b5249464c235eee12e2ab49d5f67fed3c718ee58c6b0cf236b543a2693ef\"" Oct 9 07:44:14.738059 containerd[1575]: time="2024-10-09T07:44:14.737891304Z" level=info msg="StartContainer for \"8a72b5249464c235eee12e2ab49d5f67fed3c718ee58c6b0cf236b543a2693ef\"" Oct 9 07:44:14.810801 containerd[1575]: time="2024-10-09T07:44:14.810765749Z" level=info msg="StartContainer for \"8a72b5249464c235eee12e2ab49d5f67fed3c718ee58c6b0cf236b543a2693ef\" returns successfully" Oct 9 07:44:14.812874 containerd[1575]: time="2024-10-09T07:44:14.812847606Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\"" Oct 9 07:44:15.617452 containerd[1575]: time="2024-10-09T07:44:15.616787059Z" level=info msg="StopPodSandbox for \"55a9fbbdf6f49909f0a562ec22933ab6c70f25fcf9d71fb9322f315c8c117886\"" Oct 9 07:44:15.716454 containerd[1575]: 2024-10-09 07:44:15.672 [WARNING][4780] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="55a9fbbdf6f49909f0a562ec22933ab6c70f25fcf9d71fb9322f315c8c117886" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--3--e7db599e29.novalocal-k8s-csi--node--driver--qq7rv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"27440826-f1e0-45d4-b3d8-3225a63f893c", ResourceVersion:"709", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 43, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-3-e7db599e29.novalocal", ContainerID:"267977aa4dc7cb863c8ee034ea6cf34f7852eda514ac9bbcb5898daee9e185c5", Pod:"csi-node-driver-qq7rv", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.74.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali5aa07a2ac32", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:44:15.716454 containerd[1575]: 2024-10-09 07:44:15.672 [INFO][4780] k8s.go 608: Cleaning up netns ContainerID="55a9fbbdf6f49909f0a562ec22933ab6c70f25fcf9d71fb9322f315c8c117886" Oct 9 07:44:15.716454 containerd[1575]: 2024-10-09 07:44:15.672 [INFO][4780] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="55a9fbbdf6f49909f0a562ec22933ab6c70f25fcf9d71fb9322f315c8c117886" iface="eth0" netns="" Oct 9 07:44:15.716454 containerd[1575]: 2024-10-09 07:44:15.673 [INFO][4780] k8s.go 615: Releasing IP address(es) ContainerID="55a9fbbdf6f49909f0a562ec22933ab6c70f25fcf9d71fb9322f315c8c117886" Oct 9 07:44:15.716454 containerd[1575]: 2024-10-09 07:44:15.673 [INFO][4780] utils.go 188: Calico CNI releasing IP address ContainerID="55a9fbbdf6f49909f0a562ec22933ab6c70f25fcf9d71fb9322f315c8c117886" Oct 9 07:44:15.716454 containerd[1575]: 2024-10-09 07:44:15.702 [INFO][4786] ipam_plugin.go 417: Releasing address using handleID ContainerID="55a9fbbdf6f49909f0a562ec22933ab6c70f25fcf9d71fb9322f315c8c117886" HandleID="k8s-pod-network.55a9fbbdf6f49909f0a562ec22933ab6c70f25fcf9d71fb9322f315c8c117886" Workload="ci--3975--2--2--3--e7db599e29.novalocal-k8s-csi--node--driver--qq7rv-eth0" Oct 9 07:44:15.716454 containerd[1575]: 2024-10-09 07:44:15.703 [INFO][4786] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:44:15.716454 containerd[1575]: 2024-10-09 07:44:15.703 [INFO][4786] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:44:15.716454 containerd[1575]: 2024-10-09 07:44:15.710 [WARNING][4786] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="55a9fbbdf6f49909f0a562ec22933ab6c70f25fcf9d71fb9322f315c8c117886" HandleID="k8s-pod-network.55a9fbbdf6f49909f0a562ec22933ab6c70f25fcf9d71fb9322f315c8c117886" Workload="ci--3975--2--2--3--e7db599e29.novalocal-k8s-csi--node--driver--qq7rv-eth0" Oct 9 07:44:15.716454 containerd[1575]: 2024-10-09 07:44:15.710 [INFO][4786] ipam_plugin.go 445: Releasing address using workloadID ContainerID="55a9fbbdf6f49909f0a562ec22933ab6c70f25fcf9d71fb9322f315c8c117886" HandleID="k8s-pod-network.55a9fbbdf6f49909f0a562ec22933ab6c70f25fcf9d71fb9322f315c8c117886" Workload="ci--3975--2--2--3--e7db599e29.novalocal-k8s-csi--node--driver--qq7rv-eth0" Oct 9 07:44:15.716454 containerd[1575]: 2024-10-09 07:44:15.713 [INFO][4786] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:44:15.716454 containerd[1575]: 2024-10-09 07:44:15.714 [INFO][4780] k8s.go 621: Teardown processing complete. ContainerID="55a9fbbdf6f49909f0a562ec22933ab6c70f25fcf9d71fb9322f315c8c117886" Oct 9 07:44:15.719773 containerd[1575]: time="2024-10-09T07:44:15.716492937Z" level=info msg="TearDown network for sandbox \"55a9fbbdf6f49909f0a562ec22933ab6c70f25fcf9d71fb9322f315c8c117886\" successfully" Oct 9 07:44:15.719773 containerd[1575]: time="2024-10-09T07:44:15.716517373Z" level=info msg="StopPodSandbox for \"55a9fbbdf6f49909f0a562ec22933ab6c70f25fcf9d71fb9322f315c8c117886\" returns successfully" Oct 9 07:44:15.721251 containerd[1575]: time="2024-10-09T07:44:15.721224577Z" level=info msg="RemovePodSandbox for \"55a9fbbdf6f49909f0a562ec22933ab6c70f25fcf9d71fb9322f315c8c117886\"" Oct 9 07:44:15.721309 containerd[1575]: time="2024-10-09T07:44:15.721266386Z" level=info msg="Forcibly stopping sandbox \"55a9fbbdf6f49909f0a562ec22933ab6c70f25fcf9d71fb9322f315c8c117886\"" Oct 9 07:44:15.799159 containerd[1575]: 2024-10-09 07:44:15.764 [WARNING][4806] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="55a9fbbdf6f49909f0a562ec22933ab6c70f25fcf9d71fb9322f315c8c117886" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--3--e7db599e29.novalocal-k8s-csi--node--driver--qq7rv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"27440826-f1e0-45d4-b3d8-3225a63f893c", ResourceVersion:"709", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 43, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-3-e7db599e29.novalocal", ContainerID:"267977aa4dc7cb863c8ee034ea6cf34f7852eda514ac9bbcb5898daee9e185c5", Pod:"csi-node-driver-qq7rv", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.74.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali5aa07a2ac32", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:44:15.799159 containerd[1575]: 2024-10-09 07:44:15.764 [INFO][4806] k8s.go 608: Cleaning up netns ContainerID="55a9fbbdf6f49909f0a562ec22933ab6c70f25fcf9d71fb9322f315c8c117886" Oct 9 07:44:15.799159 containerd[1575]: 2024-10-09 07:44:15.764 [INFO][4806] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="55a9fbbdf6f49909f0a562ec22933ab6c70f25fcf9d71fb9322f315c8c117886" iface="eth0" netns="" Oct 9 07:44:15.799159 containerd[1575]: 2024-10-09 07:44:15.764 [INFO][4806] k8s.go 615: Releasing IP address(es) ContainerID="55a9fbbdf6f49909f0a562ec22933ab6c70f25fcf9d71fb9322f315c8c117886" Oct 9 07:44:15.799159 containerd[1575]: 2024-10-09 07:44:15.764 [INFO][4806] utils.go 188: Calico CNI releasing IP address ContainerID="55a9fbbdf6f49909f0a562ec22933ab6c70f25fcf9d71fb9322f315c8c117886" Oct 9 07:44:15.799159 containerd[1575]: 2024-10-09 07:44:15.786 [INFO][4812] ipam_plugin.go 417: Releasing address using handleID ContainerID="55a9fbbdf6f49909f0a562ec22933ab6c70f25fcf9d71fb9322f315c8c117886" HandleID="k8s-pod-network.55a9fbbdf6f49909f0a562ec22933ab6c70f25fcf9d71fb9322f315c8c117886" Workload="ci--3975--2--2--3--e7db599e29.novalocal-k8s-csi--node--driver--qq7rv-eth0" Oct 9 07:44:15.799159 containerd[1575]: 2024-10-09 07:44:15.786 [INFO][4812] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:44:15.799159 containerd[1575]: 2024-10-09 07:44:15.786 [INFO][4812] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:44:15.799159 containerd[1575]: 2024-10-09 07:44:15.793 [WARNING][4812] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="55a9fbbdf6f49909f0a562ec22933ab6c70f25fcf9d71fb9322f315c8c117886" HandleID="k8s-pod-network.55a9fbbdf6f49909f0a562ec22933ab6c70f25fcf9d71fb9322f315c8c117886" Workload="ci--3975--2--2--3--e7db599e29.novalocal-k8s-csi--node--driver--qq7rv-eth0" Oct 9 07:44:15.799159 containerd[1575]: 2024-10-09 07:44:15.793 [INFO][4812] ipam_plugin.go 445: Releasing address using workloadID ContainerID="55a9fbbdf6f49909f0a562ec22933ab6c70f25fcf9d71fb9322f315c8c117886" HandleID="k8s-pod-network.55a9fbbdf6f49909f0a562ec22933ab6c70f25fcf9d71fb9322f315c8c117886" Workload="ci--3975--2--2--3--e7db599e29.novalocal-k8s-csi--node--driver--qq7rv-eth0" Oct 9 07:44:15.799159 containerd[1575]: 2024-10-09 07:44:15.795 [INFO][4812] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:44:15.799159 containerd[1575]: 2024-10-09 07:44:15.796 [INFO][4806] k8s.go 621: Teardown processing complete. ContainerID="55a9fbbdf6f49909f0a562ec22933ab6c70f25fcf9d71fb9322f315c8c117886" Oct 9 07:44:15.799159 containerd[1575]: time="2024-10-09T07:44:15.798048883Z" level=info msg="TearDown network for sandbox \"55a9fbbdf6f49909f0a562ec22933ab6c70f25fcf9d71fb9322f315c8c117886\" successfully" Oct 9 07:44:15.808458 containerd[1575]: time="2024-10-09T07:44:15.808425387Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"55a9fbbdf6f49909f0a562ec22933ab6c70f25fcf9d71fb9322f315c8c117886\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 9 07:44:15.808606 containerd[1575]: time="2024-10-09T07:44:15.808587852Z" level=info msg="RemovePodSandbox \"55a9fbbdf6f49909f0a562ec22933ab6c70f25fcf9d71fb9322f315c8c117886\" returns successfully" Oct 9 07:44:15.809091 containerd[1575]: time="2024-10-09T07:44:15.809071749Z" level=info msg="StopPodSandbox for \"18b3e3f3469aa95a5537c35318081ddbb84ac466da19ad19c3d15adf1062c69b\"" Oct 9 07:44:15.881624 containerd[1575]: 2024-10-09 07:44:15.845 [WARNING][4831] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="18b3e3f3469aa95a5537c35318081ddbb84ac466da19ad19c3d15adf1062c69b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--3--e7db599e29.novalocal-k8s-coredns--76f75df574--swjxh-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"5c1237f3-306f-4b76-aa33-787e19b1ef8a", ResourceVersion:"727", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 43, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-3-e7db599e29.novalocal", ContainerID:"d32825426b2a4824835f8a42b5787dc690e269c86357a6620d4c5c5ec2fb8ef9", Pod:"coredns-76f75df574-swjxh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.74.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali88e70e179be", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:44:15.881624 containerd[1575]: 2024-10-09 07:44:15.845 [INFO][4831] k8s.go 608: Cleaning up netns ContainerID="18b3e3f3469aa95a5537c35318081ddbb84ac466da19ad19c3d15adf1062c69b" Oct 9 07:44:15.881624 containerd[1575]: 2024-10-09 07:44:15.845 [INFO][4831] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="18b3e3f3469aa95a5537c35318081ddbb84ac466da19ad19c3d15adf1062c69b" iface="eth0" netns="" Oct 9 07:44:15.881624 containerd[1575]: 2024-10-09 07:44:15.845 [INFO][4831] k8s.go 615: Releasing IP address(es) ContainerID="18b3e3f3469aa95a5537c35318081ddbb84ac466da19ad19c3d15adf1062c69b" Oct 9 07:44:15.881624 containerd[1575]: 2024-10-09 07:44:15.845 [INFO][4831] utils.go 188: Calico CNI releasing IP address ContainerID="18b3e3f3469aa95a5537c35318081ddbb84ac466da19ad19c3d15adf1062c69b" Oct 9 07:44:15.881624 containerd[1575]: 2024-10-09 07:44:15.867 [INFO][4837] ipam_plugin.go 417: Releasing address using handleID ContainerID="18b3e3f3469aa95a5537c35318081ddbb84ac466da19ad19c3d15adf1062c69b" HandleID="k8s-pod-network.18b3e3f3469aa95a5537c35318081ddbb84ac466da19ad19c3d15adf1062c69b" Workload="ci--3975--2--2--3--e7db599e29.novalocal-k8s-coredns--76f75df574--swjxh-eth0" Oct 9 07:44:15.881624 containerd[1575]: 2024-10-09 07:44:15.867 [INFO][4837] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:44:15.881624 containerd[1575]: 2024-10-09 07:44:15.867 [INFO][4837] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 07:44:15.881624 containerd[1575]: 2024-10-09 07:44:15.875 [WARNING][4837] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="18b3e3f3469aa95a5537c35318081ddbb84ac466da19ad19c3d15adf1062c69b" HandleID="k8s-pod-network.18b3e3f3469aa95a5537c35318081ddbb84ac466da19ad19c3d15adf1062c69b" Workload="ci--3975--2--2--3--e7db599e29.novalocal-k8s-coredns--76f75df574--swjxh-eth0" Oct 9 07:44:15.881624 containerd[1575]: 2024-10-09 07:44:15.875 [INFO][4837] ipam_plugin.go 445: Releasing address using workloadID ContainerID="18b3e3f3469aa95a5537c35318081ddbb84ac466da19ad19c3d15adf1062c69b" HandleID="k8s-pod-network.18b3e3f3469aa95a5537c35318081ddbb84ac466da19ad19c3d15adf1062c69b" Workload="ci--3975--2--2--3--e7db599e29.novalocal-k8s-coredns--76f75df574--swjxh-eth0" Oct 9 07:44:15.881624 containerd[1575]: 2024-10-09 07:44:15.878 [INFO][4837] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:44:15.881624 containerd[1575]: 2024-10-09 07:44:15.880 [INFO][4831] k8s.go 621: Teardown processing complete. ContainerID="18b3e3f3469aa95a5537c35318081ddbb84ac466da19ad19c3d15adf1062c69b" Oct 9 07:44:15.881624 containerd[1575]: time="2024-10-09T07:44:15.881587991Z" level=info msg="TearDown network for sandbox \"18b3e3f3469aa95a5537c35318081ddbb84ac466da19ad19c3d15adf1062c69b\" successfully" Oct 9 07:44:15.883265 containerd[1575]: time="2024-10-09T07:44:15.881625862Z" level=info msg="StopPodSandbox for \"18b3e3f3469aa95a5537c35318081ddbb84ac466da19ad19c3d15adf1062c69b\" returns successfully" Oct 9 07:44:15.883843 containerd[1575]: time="2024-10-09T07:44:15.883477066Z" level=info msg="RemovePodSandbox for \"18b3e3f3469aa95a5537c35318081ddbb84ac466da19ad19c3d15adf1062c69b\"" Oct 9 07:44:15.883843 containerd[1575]: time="2024-10-09T07:44:15.883517923Z" level=info msg="Forcibly stopping sandbox \"18b3e3f3469aa95a5537c35318081ddbb84ac466da19ad19c3d15adf1062c69b\"" Oct 9 07:44:15.951120 containerd[1575]: 2024-10-09 07:44:15.918 [WARNING][4855] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="18b3e3f3469aa95a5537c35318081ddbb84ac466da19ad19c3d15adf1062c69b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--3--e7db599e29.novalocal-k8s-coredns--76f75df574--swjxh-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"5c1237f3-306f-4b76-aa33-787e19b1ef8a", ResourceVersion:"727", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 43, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-3-e7db599e29.novalocal", ContainerID:"d32825426b2a4824835f8a42b5787dc690e269c86357a6620d4c5c5ec2fb8ef9", Pod:"coredns-76f75df574-swjxh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.74.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali88e70e179be", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:44:15.951120 containerd[1575]: 2024-10-09 07:44:15.919 [INFO][4855] k8s.go 608: Cleaning up netns ContainerID="18b3e3f3469aa95a5537c35318081ddbb84ac466da19ad19c3d15adf1062c69b" Oct 9 07:44:15.951120 containerd[1575]: 2024-10-09 07:44:15.919 [INFO][4855] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="18b3e3f3469aa95a5537c35318081ddbb84ac466da19ad19c3d15adf1062c69b" iface="eth0" netns="" Oct 9 07:44:15.951120 containerd[1575]: 2024-10-09 07:44:15.919 [INFO][4855] k8s.go 615: Releasing IP address(es) ContainerID="18b3e3f3469aa95a5537c35318081ddbb84ac466da19ad19c3d15adf1062c69b" Oct 9 07:44:15.951120 containerd[1575]: 2024-10-09 07:44:15.919 [INFO][4855] utils.go 188: Calico CNI releasing IP address ContainerID="18b3e3f3469aa95a5537c35318081ddbb84ac466da19ad19c3d15adf1062c69b" Oct 9 07:44:15.951120 containerd[1575]: 2024-10-09 07:44:15.940 [INFO][4861] ipam_plugin.go 417: Releasing address using handleID ContainerID="18b3e3f3469aa95a5537c35318081ddbb84ac466da19ad19c3d15adf1062c69b" HandleID="k8s-pod-network.18b3e3f3469aa95a5537c35318081ddbb84ac466da19ad19c3d15adf1062c69b" Workload="ci--3975--2--2--3--e7db599e29.novalocal-k8s-coredns--76f75df574--swjxh-eth0" Oct 9 07:44:15.951120 containerd[1575]: 2024-10-09 07:44:15.940 [INFO][4861] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:44:15.951120 containerd[1575]: 2024-10-09 07:44:15.940 [INFO][4861] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 07:44:15.951120 containerd[1575]: 2024-10-09 07:44:15.946 [WARNING][4861] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="18b3e3f3469aa95a5537c35318081ddbb84ac466da19ad19c3d15adf1062c69b" HandleID="k8s-pod-network.18b3e3f3469aa95a5537c35318081ddbb84ac466da19ad19c3d15adf1062c69b" Workload="ci--3975--2--2--3--e7db599e29.novalocal-k8s-coredns--76f75df574--swjxh-eth0" Oct 9 07:44:15.951120 containerd[1575]: 2024-10-09 07:44:15.947 [INFO][4861] ipam_plugin.go 445: Releasing address using workloadID ContainerID="18b3e3f3469aa95a5537c35318081ddbb84ac466da19ad19c3d15adf1062c69b" HandleID="k8s-pod-network.18b3e3f3469aa95a5537c35318081ddbb84ac466da19ad19c3d15adf1062c69b" Workload="ci--3975--2--2--3--e7db599e29.novalocal-k8s-coredns--76f75df574--swjxh-eth0" Oct 9 07:44:15.951120 containerd[1575]: 2024-10-09 07:44:15.948 [INFO][4861] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:44:15.951120 containerd[1575]: 2024-10-09 07:44:15.949 [INFO][4855] k8s.go 621: Teardown processing complete. ContainerID="18b3e3f3469aa95a5537c35318081ddbb84ac466da19ad19c3d15adf1062c69b" Oct 9 07:44:15.952180 containerd[1575]: time="2024-10-09T07:44:15.951616353Z" level=info msg="TearDown network for sandbox \"18b3e3f3469aa95a5537c35318081ddbb84ac466da19ad19c3d15adf1062c69b\" successfully" Oct 9 07:44:15.955478 containerd[1575]: time="2024-10-09T07:44:15.955344961Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"18b3e3f3469aa95a5537c35318081ddbb84ac466da19ad19c3d15adf1062c69b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 9 07:44:15.955478 containerd[1575]: time="2024-10-09T07:44:15.955404774Z" level=info msg="RemovePodSandbox \"18b3e3f3469aa95a5537c35318081ddbb84ac466da19ad19c3d15adf1062c69b\" returns successfully" Oct 9 07:44:15.956162 containerd[1575]: time="2024-10-09T07:44:15.955949335Z" level=info msg="StopPodSandbox for \"7fa5396b2d335ed64fc25b8f185d8f59fcbfb2aa16b20fc01aa95073b2aa8a05\"" Oct 9 07:44:16.029705 containerd[1575]: 2024-10-09 07:44:15.992 [WARNING][4879] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7fa5396b2d335ed64fc25b8f185d8f59fcbfb2aa16b20fc01aa95073b2aa8a05" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--3--e7db599e29.novalocal-k8s-calico--kube--controllers--5794c8997c--zjcp4-eth0", GenerateName:"calico-kube-controllers-5794c8997c-", Namespace:"calico-system", SelfLink:"", UID:"02e73bd3-d833-4c42-ada0-6af82baf1ace", ResourceVersion:"753", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 43, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5794c8997c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-3-e7db599e29.novalocal", ContainerID:"151846675a1a8e08650d42cce80f990b8059199e543261e2f13c7b166362edd5", Pod:"calico-kube-controllers-5794c8997c-zjcp4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.74.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2f886da080d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:44:16.029705 containerd[1575]: 2024-10-09 07:44:15.993 [INFO][4879] k8s.go 608: Cleaning up netns ContainerID="7fa5396b2d335ed64fc25b8f185d8f59fcbfb2aa16b20fc01aa95073b2aa8a05" Oct 9 07:44:16.029705 containerd[1575]: 2024-10-09 07:44:15.993 [INFO][4879] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="7fa5396b2d335ed64fc25b8f185d8f59fcbfb2aa16b20fc01aa95073b2aa8a05" iface="eth0" netns="" Oct 9 07:44:16.029705 containerd[1575]: 2024-10-09 07:44:15.993 [INFO][4879] k8s.go 615: Releasing IP address(es) ContainerID="7fa5396b2d335ed64fc25b8f185d8f59fcbfb2aa16b20fc01aa95073b2aa8a05" Oct 9 07:44:16.029705 containerd[1575]: 2024-10-09 07:44:15.993 [INFO][4879] utils.go 188: Calico CNI releasing IP address ContainerID="7fa5396b2d335ed64fc25b8f185d8f59fcbfb2aa16b20fc01aa95073b2aa8a05" Oct 9 07:44:16.029705 containerd[1575]: 2024-10-09 07:44:16.015 [INFO][4885] ipam_plugin.go 417: Releasing address using handleID ContainerID="7fa5396b2d335ed64fc25b8f185d8f59fcbfb2aa16b20fc01aa95073b2aa8a05" HandleID="k8s-pod-network.7fa5396b2d335ed64fc25b8f185d8f59fcbfb2aa16b20fc01aa95073b2aa8a05" Workload="ci--3975--2--2--3--e7db599e29.novalocal-k8s-calico--kube--controllers--5794c8997c--zjcp4-eth0" Oct 9 07:44:16.029705 containerd[1575]: 2024-10-09 07:44:16.015 [INFO][4885] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:44:16.029705 containerd[1575]: 2024-10-09 07:44:16.015 [INFO][4885] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:44:16.029705 containerd[1575]: 2024-10-09 07:44:16.025 [WARNING][4885] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7fa5396b2d335ed64fc25b8f185d8f59fcbfb2aa16b20fc01aa95073b2aa8a05" HandleID="k8s-pod-network.7fa5396b2d335ed64fc25b8f185d8f59fcbfb2aa16b20fc01aa95073b2aa8a05" Workload="ci--3975--2--2--3--e7db599e29.novalocal-k8s-calico--kube--controllers--5794c8997c--zjcp4-eth0" Oct 9 07:44:16.029705 containerd[1575]: 2024-10-09 07:44:16.025 [INFO][4885] ipam_plugin.go 445: Releasing address using workloadID ContainerID="7fa5396b2d335ed64fc25b8f185d8f59fcbfb2aa16b20fc01aa95073b2aa8a05" HandleID="k8s-pod-network.7fa5396b2d335ed64fc25b8f185d8f59fcbfb2aa16b20fc01aa95073b2aa8a05" Workload="ci--3975--2--2--3--e7db599e29.novalocal-k8s-calico--kube--controllers--5794c8997c--zjcp4-eth0" Oct 9 07:44:16.029705 containerd[1575]: 2024-10-09 07:44:16.027 [INFO][4885] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:44:16.029705 containerd[1575]: 2024-10-09 07:44:16.028 [INFO][4879] k8s.go 621: Teardown processing complete. ContainerID="7fa5396b2d335ed64fc25b8f185d8f59fcbfb2aa16b20fc01aa95073b2aa8a05" Oct 9 07:44:16.030697 containerd[1575]: time="2024-10-09T07:44:16.030207198Z" level=info msg="TearDown network for sandbox \"7fa5396b2d335ed64fc25b8f185d8f59fcbfb2aa16b20fc01aa95073b2aa8a05\" successfully" Oct 9 07:44:16.030697 containerd[1575]: time="2024-10-09T07:44:16.030257122Z" level=info msg="StopPodSandbox for \"7fa5396b2d335ed64fc25b8f185d8f59fcbfb2aa16b20fc01aa95073b2aa8a05\" returns successfully" Oct 9 07:44:16.031188 containerd[1575]: time="2024-10-09T07:44:16.031065379Z" level=info msg="RemovePodSandbox for \"7fa5396b2d335ed64fc25b8f185d8f59fcbfb2aa16b20fc01aa95073b2aa8a05\"" Oct 9 07:44:16.031188 containerd[1575]: time="2024-10-09T07:44:16.031112938Z" level=info msg="Forcibly stopping sandbox \"7fa5396b2d335ed64fc25b8f185d8f59fcbfb2aa16b20fc01aa95073b2aa8a05\"" Oct 9 07:44:16.127541 containerd[1575]: 2024-10-09 07:44:16.083 [WARNING][4903] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7fa5396b2d335ed64fc25b8f185d8f59fcbfb2aa16b20fc01aa95073b2aa8a05" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--3--e7db599e29.novalocal-k8s-calico--kube--controllers--5794c8997c--zjcp4-eth0", GenerateName:"calico-kube-controllers-5794c8997c-", Namespace:"calico-system", SelfLink:"", UID:"02e73bd3-d833-4c42-ada0-6af82baf1ace", ResourceVersion:"753", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 43, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5794c8997c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-3-e7db599e29.novalocal", ContainerID:"151846675a1a8e08650d42cce80f990b8059199e543261e2f13c7b166362edd5", Pod:"calico-kube-controllers-5794c8997c-zjcp4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.74.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2f886da080d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:44:16.127541 containerd[1575]: 2024-10-09 07:44:16.083 [INFO][4903] k8s.go 608: Cleaning up netns ContainerID="7fa5396b2d335ed64fc25b8f185d8f59fcbfb2aa16b20fc01aa95073b2aa8a05" Oct 9 07:44:16.127541 containerd[1575]: 2024-10-09 07:44:16.083 [INFO][4903] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="7fa5396b2d335ed64fc25b8f185d8f59fcbfb2aa16b20fc01aa95073b2aa8a05" iface="eth0" netns="" Oct 9 07:44:16.127541 containerd[1575]: 2024-10-09 07:44:16.083 [INFO][4903] k8s.go 615: Releasing IP address(es) ContainerID="7fa5396b2d335ed64fc25b8f185d8f59fcbfb2aa16b20fc01aa95073b2aa8a05" Oct 9 07:44:16.127541 containerd[1575]: 2024-10-09 07:44:16.083 [INFO][4903] utils.go 188: Calico CNI releasing IP address ContainerID="7fa5396b2d335ed64fc25b8f185d8f59fcbfb2aa16b20fc01aa95073b2aa8a05" Oct 9 07:44:16.127541 containerd[1575]: 2024-10-09 07:44:16.112 [INFO][4909] ipam_plugin.go 417: Releasing address using handleID ContainerID="7fa5396b2d335ed64fc25b8f185d8f59fcbfb2aa16b20fc01aa95073b2aa8a05" HandleID="k8s-pod-network.7fa5396b2d335ed64fc25b8f185d8f59fcbfb2aa16b20fc01aa95073b2aa8a05" Workload="ci--3975--2--2--3--e7db599e29.novalocal-k8s-calico--kube--controllers--5794c8997c--zjcp4-eth0" Oct 9 07:44:16.127541 containerd[1575]: 2024-10-09 07:44:16.112 [INFO][4909] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:44:16.127541 containerd[1575]: 2024-10-09 07:44:16.112 [INFO][4909] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:44:16.127541 containerd[1575]: 2024-10-09 07:44:16.121 [WARNING][4909] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7fa5396b2d335ed64fc25b8f185d8f59fcbfb2aa16b20fc01aa95073b2aa8a05" HandleID="k8s-pod-network.7fa5396b2d335ed64fc25b8f185d8f59fcbfb2aa16b20fc01aa95073b2aa8a05" Workload="ci--3975--2--2--3--e7db599e29.novalocal-k8s-calico--kube--controllers--5794c8997c--zjcp4-eth0" Oct 9 07:44:16.127541 containerd[1575]: 2024-10-09 07:44:16.121 [INFO][4909] ipam_plugin.go 445: Releasing address using workloadID ContainerID="7fa5396b2d335ed64fc25b8f185d8f59fcbfb2aa16b20fc01aa95073b2aa8a05" HandleID="k8s-pod-network.7fa5396b2d335ed64fc25b8f185d8f59fcbfb2aa16b20fc01aa95073b2aa8a05" Workload="ci--3975--2--2--3--e7db599e29.novalocal-k8s-calico--kube--controllers--5794c8997c--zjcp4-eth0" Oct 9 07:44:16.127541 containerd[1575]: 2024-10-09 07:44:16.124 [INFO][4909] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:44:16.127541 containerd[1575]: 2024-10-09 07:44:16.126 [INFO][4903] k8s.go 621: Teardown processing complete. ContainerID="7fa5396b2d335ed64fc25b8f185d8f59fcbfb2aa16b20fc01aa95073b2aa8a05" Oct 9 07:44:16.128331 containerd[1575]: time="2024-10-09T07:44:16.127519997Z" level=info msg="TearDown network for sandbox \"7fa5396b2d335ed64fc25b8f185d8f59fcbfb2aa16b20fc01aa95073b2aa8a05\" successfully" Oct 9 07:44:16.131395 containerd[1575]: time="2024-10-09T07:44:16.131367158Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7fa5396b2d335ed64fc25b8f185d8f59fcbfb2aa16b20fc01aa95073b2aa8a05\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 9 07:44:16.132648 containerd[1575]: time="2024-10-09T07:44:16.131515806Z" level=info msg="RemovePodSandbox \"7fa5396b2d335ed64fc25b8f185d8f59fcbfb2aa16b20fc01aa95073b2aa8a05\" returns successfully" Oct 9 07:44:16.132648 containerd[1575]: time="2024-10-09T07:44:16.131912019Z" level=info msg="StopPodSandbox for \"84624294e1c71732ae60bf3020ea314b3599d3be22a97eafb851758ab98dea88\"" Oct 9 07:44:16.208695 containerd[1575]: 2024-10-09 07:44:16.173 [WARNING][4929] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="84624294e1c71732ae60bf3020ea314b3599d3be22a97eafb851758ab98dea88" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--3--e7db599e29.novalocal-k8s-coredns--76f75df574--wxf26-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"9094bb4d-511f-44a7-8e94-3160d3cf1a33", ResourceVersion:"730", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 43, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-3-e7db599e29.novalocal", ContainerID:"7bfba5c1834813295ac6ab0fac1fefb25ddd9f56ef30d9fb2f845b0ce0e2dd74", Pod:"coredns-76f75df574-wxf26", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.74.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali892fd090b9a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:44:16.208695 containerd[1575]: 2024-10-09 07:44:16.173 [INFO][4929] k8s.go 608: Cleaning up netns ContainerID="84624294e1c71732ae60bf3020ea314b3599d3be22a97eafb851758ab98dea88" Oct 9 07:44:16.208695 containerd[1575]: 2024-10-09 07:44:16.173 [INFO][4929] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="84624294e1c71732ae60bf3020ea314b3599d3be22a97eafb851758ab98dea88" iface="eth0" netns="" Oct 9 07:44:16.208695 containerd[1575]: 2024-10-09 07:44:16.173 [INFO][4929] k8s.go 615: Releasing IP address(es) ContainerID="84624294e1c71732ae60bf3020ea314b3599d3be22a97eafb851758ab98dea88" Oct 9 07:44:16.208695 containerd[1575]: 2024-10-09 07:44:16.173 [INFO][4929] utils.go 188: Calico CNI releasing IP address ContainerID="84624294e1c71732ae60bf3020ea314b3599d3be22a97eafb851758ab98dea88" Oct 9 07:44:16.208695 containerd[1575]: 2024-10-09 07:44:16.196 [INFO][4935] ipam_plugin.go 417: Releasing address using handleID ContainerID="84624294e1c71732ae60bf3020ea314b3599d3be22a97eafb851758ab98dea88" HandleID="k8s-pod-network.84624294e1c71732ae60bf3020ea314b3599d3be22a97eafb851758ab98dea88" Workload="ci--3975--2--2--3--e7db599e29.novalocal-k8s-coredns--76f75df574--wxf26-eth0" Oct 9 07:44:16.208695 containerd[1575]: 2024-10-09 07:44:16.197 [INFO][4935] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:44:16.208695 containerd[1575]: 2024-10-09 07:44:16.197 [INFO][4935] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 07:44:16.208695 containerd[1575]: 2024-10-09 07:44:16.204 [WARNING][4935] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="84624294e1c71732ae60bf3020ea314b3599d3be22a97eafb851758ab98dea88" HandleID="k8s-pod-network.84624294e1c71732ae60bf3020ea314b3599d3be22a97eafb851758ab98dea88" Workload="ci--3975--2--2--3--e7db599e29.novalocal-k8s-coredns--76f75df574--wxf26-eth0" Oct 9 07:44:16.208695 containerd[1575]: 2024-10-09 07:44:16.204 [INFO][4935] ipam_plugin.go 445: Releasing address using workloadID ContainerID="84624294e1c71732ae60bf3020ea314b3599d3be22a97eafb851758ab98dea88" HandleID="k8s-pod-network.84624294e1c71732ae60bf3020ea314b3599d3be22a97eafb851758ab98dea88" Workload="ci--3975--2--2--3--e7db599e29.novalocal-k8s-coredns--76f75df574--wxf26-eth0" Oct 9 07:44:16.208695 containerd[1575]: 2024-10-09 07:44:16.206 [INFO][4935] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:44:16.208695 containerd[1575]: 2024-10-09 07:44:16.207 [INFO][4929] k8s.go 621: Teardown processing complete. ContainerID="84624294e1c71732ae60bf3020ea314b3599d3be22a97eafb851758ab98dea88" Oct 9 07:44:16.209244 containerd[1575]: time="2024-10-09T07:44:16.208754549Z" level=info msg="TearDown network for sandbox \"84624294e1c71732ae60bf3020ea314b3599d3be22a97eafb851758ab98dea88\" successfully" Oct 9 07:44:16.209244 containerd[1575]: time="2024-10-09T07:44:16.208782742Z" level=info msg="StopPodSandbox for \"84624294e1c71732ae60bf3020ea314b3599d3be22a97eafb851758ab98dea88\" returns successfully" Oct 9 07:44:16.209359 containerd[1575]: time="2024-10-09T07:44:16.209319851Z" level=info msg="RemovePodSandbox for \"84624294e1c71732ae60bf3020ea314b3599d3be22a97eafb851758ab98dea88\"" Oct 9 07:44:16.209359 containerd[1575]: time="2024-10-09T07:44:16.209346450Z" level=info msg="Forcibly stopping sandbox \"84624294e1c71732ae60bf3020ea314b3599d3be22a97eafb851758ab98dea88\"" Oct 9 07:44:16.280581 containerd[1575]: 2024-10-09 07:44:16.248 [WARNING][4958] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="84624294e1c71732ae60bf3020ea314b3599d3be22a97eafb851758ab98dea88" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--3--e7db599e29.novalocal-k8s-coredns--76f75df574--wxf26-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"9094bb4d-511f-44a7-8e94-3160d3cf1a33", ResourceVersion:"730", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 43, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-3-e7db599e29.novalocal", ContainerID:"7bfba5c1834813295ac6ab0fac1fefb25ddd9f56ef30d9fb2f845b0ce0e2dd74", Pod:"coredns-76f75df574-wxf26", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.74.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali892fd090b9a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:44:16.280581 containerd[1575]: 2024-10-09 07:44:16.248 [INFO][4958] k8s.go 608: Cleaning up netns ContainerID="84624294e1c71732ae60bf3020ea314b3599d3be22a97eafb851758ab98dea88" Oct 9 07:44:16.280581 containerd[1575]: 2024-10-09 07:44:16.248 [INFO][4958] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="84624294e1c71732ae60bf3020ea314b3599d3be22a97eafb851758ab98dea88" iface="eth0" netns="" Oct 9 07:44:16.280581 containerd[1575]: 2024-10-09 07:44:16.248 [INFO][4958] k8s.go 615: Releasing IP address(es) ContainerID="84624294e1c71732ae60bf3020ea314b3599d3be22a97eafb851758ab98dea88" Oct 9 07:44:16.280581 containerd[1575]: 2024-10-09 07:44:16.248 [INFO][4958] utils.go 188: Calico CNI releasing IP address ContainerID="84624294e1c71732ae60bf3020ea314b3599d3be22a97eafb851758ab98dea88" Oct 9 07:44:16.280581 containerd[1575]: 2024-10-09 07:44:16.269 [INFO][4964] ipam_plugin.go 417: Releasing address using handleID ContainerID="84624294e1c71732ae60bf3020ea314b3599d3be22a97eafb851758ab98dea88" HandleID="k8s-pod-network.84624294e1c71732ae60bf3020ea314b3599d3be22a97eafb851758ab98dea88" Workload="ci--3975--2--2--3--e7db599e29.novalocal-k8s-coredns--76f75df574--wxf26-eth0" Oct 9 07:44:16.280581 containerd[1575]: 2024-10-09 07:44:16.269 [INFO][4964] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:44:16.280581 containerd[1575]: 2024-10-09 07:44:16.269 [INFO][4964] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 07:44:16.280581 containerd[1575]: 2024-10-09 07:44:16.275 [WARNING][4964] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="84624294e1c71732ae60bf3020ea314b3599d3be22a97eafb851758ab98dea88" HandleID="k8s-pod-network.84624294e1c71732ae60bf3020ea314b3599d3be22a97eafb851758ab98dea88" Workload="ci--3975--2--2--3--e7db599e29.novalocal-k8s-coredns--76f75df574--wxf26-eth0" Oct 9 07:44:16.280581 containerd[1575]: 2024-10-09 07:44:16.276 [INFO][4964] ipam_plugin.go 445: Releasing address using workloadID ContainerID="84624294e1c71732ae60bf3020ea314b3599d3be22a97eafb851758ab98dea88" HandleID="k8s-pod-network.84624294e1c71732ae60bf3020ea314b3599d3be22a97eafb851758ab98dea88" Workload="ci--3975--2--2--3--e7db599e29.novalocal-k8s-coredns--76f75df574--wxf26-eth0" Oct 9 07:44:16.280581 containerd[1575]: 2024-10-09 07:44:16.277 [INFO][4964] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:44:16.280581 containerd[1575]: 2024-10-09 07:44:16.279 [INFO][4958] k8s.go 621: Teardown processing complete. ContainerID="84624294e1c71732ae60bf3020ea314b3599d3be22a97eafb851758ab98dea88" Oct 9 07:44:16.280581 containerd[1575]: time="2024-10-09T07:44:16.280301139Z" level=info msg="TearDown network for sandbox \"84624294e1c71732ae60bf3020ea314b3599d3be22a97eafb851758ab98dea88\" successfully" Oct 9 07:44:16.284897 containerd[1575]: time="2024-10-09T07:44:16.284655241Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"84624294e1c71732ae60bf3020ea314b3599d3be22a97eafb851758ab98dea88\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 9 07:44:16.284897 containerd[1575]: time="2024-10-09T07:44:16.284767061Z" level=info msg="RemovePodSandbox \"84624294e1c71732ae60bf3020ea314b3599d3be22a97eafb851758ab98dea88\" returns successfully" Oct 9 07:44:16.960463 containerd[1575]: time="2024-10-09T07:44:16.960397536Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:44:16.961511 containerd[1575]: time="2024-10-09T07:44:16.961467594Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1: active requests=0, bytes read=12907822" Oct 9 07:44:16.963209 containerd[1575]: time="2024-10-09T07:44:16.962844448Z" level=info msg="ImageCreate event name:\"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:44:16.966318 containerd[1575]: time="2024-10-09T07:44:16.966277331Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:44:16.967417 containerd[1575]: time="2024-10-09T07:44:16.967371374Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" with image id \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\", size \"14400175\" in 2.154435963s" Oct 9 07:44:16.967517 containerd[1575]: time="2024-10-09T07:44:16.967497270Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" returns image reference 
\"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\"" Oct 9 07:44:16.970494 containerd[1575]: time="2024-10-09T07:44:16.970449451Z" level=info msg="CreateContainer within sandbox \"267977aa4dc7cb863c8ee034ea6cf34f7852eda514ac9bbcb5898daee9e185c5\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Oct 9 07:44:16.985750 containerd[1575]: time="2024-10-09T07:44:16.985687700Z" level=info msg="CreateContainer within sandbox \"267977aa4dc7cb863c8ee034ea6cf34f7852eda514ac9bbcb5898daee9e185c5\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"a52e01ee553a1ce7457587afc062a13de424ad0a8dba97680feb974a434aaf4c\"" Oct 9 07:44:16.987897 containerd[1575]: time="2024-10-09T07:44:16.987656125Z" level=info msg="StartContainer for \"a52e01ee553a1ce7457587afc062a13de424ad0a8dba97680feb974a434aaf4c\"" Oct 9 07:44:17.055561 containerd[1575]: time="2024-10-09T07:44:17.054695431Z" level=info msg="StartContainer for \"a52e01ee553a1ce7457587afc062a13de424ad0a8dba97680feb974a434aaf4c\" returns successfully" Oct 9 07:44:17.818878 kubelet[2836]: I1009 07:44:17.818805 2836 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Oct 9 07:44:17.824490 kubelet[2836]: I1009 07:44:17.824418 2836 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Oct 9 07:44:22.812242 systemd[1]: run-containerd-runc-k8s.io-4e3087a0b8c7479300bf25660589c2b4a81474cbc4def7f6fc535cf82f44643c-runc.c9so8f.mount: Deactivated successfully. Oct 9 07:44:23.993905 kubelet[2836]: I1009 07:44:23.991970 2836 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-qq7rv" podStartSLOduration=40.74892921 podStartE2EDuration="47.991926238s" podCreationTimestamp="2024-10-09 07:43:36 +0000 UTC" firstStartedPulling="2024-10-09 07:44:09.724793502 +0000 UTC m=+54.520886154" lastFinishedPulling="2024-10-09 07:44:16.96779053 +0000 UTC m=+61.763883182" observedRunningTime="2024-10-09 07:44:17.123603116 +0000 UTC m=+61.919695778" watchObservedRunningTime="2024-10-09 07:44:23.991926238 +0000 UTC m=+68.788018900" Oct 9 07:44:26.030352 kubelet[2836]: I1009 07:44:26.030189 2836 topology_manager.go:215] "Topology Admit Handler" podUID="023d7701-7aa7-45fb-afbb-5bbce3c17aa6" podNamespace="calico-apiserver" podName="calico-apiserver-7d6fbc7845-k4fv8" Oct 9 07:44:26.043186 kubelet[2836]: I1009 07:44:26.042469 2836 topology_manager.go:215] "Topology Admit Handler" podUID="2681edb7-52bf-4bfd-8e1c-683a742d4621" podNamespace="calico-apiserver" podName="calico-apiserver-7d6fbc7845-927st" Oct 9 07:44:26.099806 kubelet[2836]: I1009 07:44:26.096216 2836 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wqmx\" (UniqueName: \"kubernetes.io/projected/023d7701-7aa7-45fb-afbb-5bbce3c17aa6-kube-api-access-4wqmx\") pod \"calico-apiserver-7d6fbc7845-k4fv8\" (UID: \"023d7701-7aa7-45fb-afbb-5bbce3c17aa6\") " pod="calico-apiserver/calico-apiserver-7d6fbc7845-k4fv8" Oct 9 07:44:26.104158 kubelet[2836]: I1009 07:44:26.104116 2836 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcv7t\" (UniqueName: \"kubernetes.io/projected/2681edb7-52bf-4bfd-8e1c-683a742d4621-kube-api-access-qcv7t\") pod \"calico-apiserver-7d6fbc7845-927st\" (UID: 
\"2681edb7-52bf-4bfd-8e1c-683a742d4621\") " pod="calico-apiserver/calico-apiserver-7d6fbc7845-927st" Oct 9 07:44:26.105186 kubelet[2836]: I1009 07:44:26.105172 2836 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/023d7701-7aa7-45fb-afbb-5bbce3c17aa6-calico-apiserver-certs\") pod \"calico-apiserver-7d6fbc7845-k4fv8\" (UID: \"023d7701-7aa7-45fb-afbb-5bbce3c17aa6\") " pod="calico-apiserver/calico-apiserver-7d6fbc7845-k4fv8" Oct 9 07:44:26.105308 kubelet[2836]: I1009 07:44:26.105297 2836 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2681edb7-52bf-4bfd-8e1c-683a742d4621-calico-apiserver-certs\") pod \"calico-apiserver-7d6fbc7845-927st\" (UID: \"2681edb7-52bf-4bfd-8e1c-683a742d4621\") " pod="calico-apiserver/calico-apiserver-7d6fbc7845-927st" Oct 9 07:44:26.210790 kubelet[2836]: E1009 07:44:26.210415 2836 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Oct 9 07:44:26.216990 kubelet[2836]: E1009 07:44:26.216935 2836 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Oct 9 07:44:26.239541 kubelet[2836]: E1009 07:44:26.237570 2836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2681edb7-52bf-4bfd-8e1c-683a742d4621-calico-apiserver-certs podName:2681edb7-52bf-4bfd-8e1c-683a742d4621 nodeName:}" failed. No retries permitted until 2024-10-09 07:44:26.711672287 +0000 UTC m=+71.507764929 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/2681edb7-52bf-4bfd-8e1c-683a742d4621-calico-apiserver-certs") pod "calico-apiserver-7d6fbc7845-927st" (UID: "2681edb7-52bf-4bfd-8e1c-683a742d4621") : secret "calico-apiserver-certs" not found Oct 9 07:44:26.239541 kubelet[2836]: E1009 07:44:26.237605 2836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/023d7701-7aa7-45fb-afbb-5bbce3c17aa6-calico-apiserver-certs podName:023d7701-7aa7-45fb-afbb-5bbce3c17aa6 nodeName:}" failed. No retries permitted until 2024-10-09 07:44:26.737591628 +0000 UTC m=+71.533684270 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/023d7701-7aa7-45fb-afbb-5bbce3c17aa6-calico-apiserver-certs") pod "calico-apiserver-7d6fbc7845-k4fv8" (UID: "023d7701-7aa7-45fb-afbb-5bbce3c17aa6") : secret "calico-apiserver-certs" not found Oct 9 07:44:26.619043 systemd-journald[1128]: Under memory pressure, flushing caches. Oct 9 07:44:26.616818 systemd-resolved[1467]: Under memory pressure, flushing caches. Oct 9 07:44:26.616849 systemd-resolved[1467]: Flushed all caches. Oct 9 07:44:26.665423 systemd[1]: Started sshd@9-172.24.4.70:22-172.24.4.1:36442.service - OpenSSH per-connection server daemon (172.24.4.1:36442). 
Oct 9 07:44:26.955361 containerd[1575]: time="2024-10-09T07:44:26.955144719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d6fbc7845-k4fv8,Uid:023d7701-7aa7-45fb-afbb-5bbce3c17aa6,Namespace:calico-apiserver,Attempt:0,}" Oct 9 07:44:27.039959 containerd[1575]: time="2024-10-09T07:44:27.039318402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d6fbc7845-927st,Uid:2681edb7-52bf-4bfd-8e1c-683a742d4621,Namespace:calico-apiserver,Attempt:0,}" Oct 9 07:44:27.186604 systemd-networkd[1211]: calie06c2261622: Link UP Oct 9 07:44:27.186808 systemd-networkd[1211]: calie06c2261622: Gained carrier Oct 9 07:44:27.219492 containerd[1575]: 2024-10-09 07:44:27.063 [INFO][5088] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975--2--2--3--e7db599e29.novalocal-k8s-calico--apiserver--7d6fbc7845--k4fv8-eth0 calico-apiserver-7d6fbc7845- calico-apiserver 023d7701-7aa7-45fb-afbb-5bbce3c17aa6 867 0 2024-10-09 07:44:25 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7d6fbc7845 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3975-2-2-3-e7db599e29.novalocal calico-apiserver-7d6fbc7845-k4fv8 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie06c2261622 [] []}} ContainerID="089dcad480bd842d906a672bc40abe0192cf406e4b3b76d523c32682ec64caa2" Namespace="calico-apiserver" Pod="calico-apiserver-7d6fbc7845-k4fv8" WorkloadEndpoint="ci--3975--2--2--3--e7db599e29.novalocal-k8s-calico--apiserver--7d6fbc7845--k4fv8-" Oct 9 07:44:27.219492 containerd[1575]: 2024-10-09 07:44:27.063 [INFO][5088] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="089dcad480bd842d906a672bc40abe0192cf406e4b3b76d523c32682ec64caa2" Namespace="calico-apiserver" Pod="calico-apiserver-7d6fbc7845-k4fv8" WorkloadEndpoint="ci--3975--2--2--3--e7db599e29.novalocal-k8s-calico--apiserver--7d6fbc7845--k4fv8-eth0" Oct 9 07:44:27.219492 containerd[1575]: 2024-10-09 07:44:27.110 [INFO][5109] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="089dcad480bd842d906a672bc40abe0192cf406e4b3b76d523c32682ec64caa2" HandleID="k8s-pod-network.089dcad480bd842d906a672bc40abe0192cf406e4b3b76d523c32682ec64caa2" Workload="ci--3975--2--2--3--e7db599e29.novalocal-k8s-calico--apiserver--7d6fbc7845--k4fv8-eth0" Oct 9 07:44:27.219492 containerd[1575]: 2024-10-09 07:44:27.135 [INFO][5109] ipam_plugin.go 270: Auto assigning IP ContainerID="089dcad480bd842d906a672bc40abe0192cf406e4b3b76d523c32682ec64caa2" HandleID="k8s-pod-network.089dcad480bd842d906a672bc40abe0192cf406e4b3b76d523c32682ec64caa2" Workload="ci--3975--2--2--3--e7db599e29.novalocal-k8s-calico--apiserver--7d6fbc7845--k4fv8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003183d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3975-2-2-3-e7db599e29.novalocal", "pod":"calico-apiserver-7d6fbc7845-k4fv8", "timestamp":"2024-10-09 07:44:27.110939807 +0000 UTC"}, Hostname:"ci-3975-2-2-3-e7db599e29.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 07:44:27.219492 containerd[1575]: 2024-10-09 07:44:27.135 [INFO][5109] ipam_plugin.go 358: About to acquire host-wide IPAM 
lock. Oct 9 07:44:27.219492 containerd[1575]: 2024-10-09 07:44:27.135 [INFO][5109] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:44:27.219492 containerd[1575]: 2024-10-09 07:44:27.135 [INFO][5109] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975-2-2-3-e7db599e29.novalocal' Oct 9 07:44:27.219492 containerd[1575]: 2024-10-09 07:44:27.139 [INFO][5109] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.089dcad480bd842d906a672bc40abe0192cf406e4b3b76d523c32682ec64caa2" host="ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:44:27.219492 containerd[1575]: 2024-10-09 07:44:27.143 [INFO][5109] ipam.go 372: Looking up existing affinities for host host="ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:44:27.219492 containerd[1575]: 2024-10-09 07:44:27.148 [INFO][5109] ipam.go 489: Trying affinity for 192.168.74.0/26 host="ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:44:27.219492 containerd[1575]: 2024-10-09 07:44:27.151 [INFO][5109] ipam.go 155: Attempting to load block cidr=192.168.74.0/26 host="ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:44:27.219492 containerd[1575]: 2024-10-09 07:44:27.154 [INFO][5109] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.74.0/26 host="ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:44:27.219492 containerd[1575]: 2024-10-09 07:44:27.154 [INFO][5109] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.74.0/26 handle="k8s-pod-network.089dcad480bd842d906a672bc40abe0192cf406e4b3b76d523c32682ec64caa2" host="ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:44:27.219492 containerd[1575]: 2024-10-09 07:44:27.156 [INFO][5109] ipam.go 1685: Creating new handle: k8s-pod-network.089dcad480bd842d906a672bc40abe0192cf406e4b3b76d523c32682ec64caa2 Oct 9 07:44:27.219492 containerd[1575]: 2024-10-09 07:44:27.161 [INFO][5109] ipam.go 1203: Writing block in order to claim IPs block=192.168.74.0/26 handle="k8s-pod-network.089dcad480bd842d906a672bc40abe0192cf406e4b3b76d523c32682ec64caa2" host="ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:44:27.219492 containerd[1575]: 2024-10-09 07:44:27.173 [INFO][5109] ipam.go 1216: Successfully claimed IPs: [192.168.74.5/26] block=192.168.74.0/26 handle="k8s-pod-network.089dcad480bd842d906a672bc40abe0192cf406e4b3b76d523c32682ec64caa2" host="ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:44:27.219492 containerd[1575]: 2024-10-09 07:44:27.173 [INFO][5109] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.74.5/26] handle="k8s-pod-network.089dcad480bd842d906a672bc40abe0192cf406e4b3b76d523c32682ec64caa2" host="ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:44:27.219492 containerd[1575]: 2024-10-09 07:44:27.173 [INFO][5109] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 9 07:44:27.219492 containerd[1575]: 2024-10-09 07:44:27.173 [INFO][5109] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.74.5/26] IPv6=[] ContainerID="089dcad480bd842d906a672bc40abe0192cf406e4b3b76d523c32682ec64caa2" HandleID="k8s-pod-network.089dcad480bd842d906a672bc40abe0192cf406e4b3b76d523c32682ec64caa2" Workload="ci--3975--2--2--3--e7db599e29.novalocal-k8s-calico--apiserver--7d6fbc7845--k4fv8-eth0" Oct 9 07:44:27.221429 containerd[1575]: 2024-10-09 07:44:27.177 [INFO][5088] k8s.go 386: Populated endpoint ContainerID="089dcad480bd842d906a672bc40abe0192cf406e4b3b76d523c32682ec64caa2" Namespace="calico-apiserver" Pod="calico-apiserver-7d6fbc7845-k4fv8" WorkloadEndpoint="ci--3975--2--2--3--e7db599e29.novalocal-k8s-calico--apiserver--7d6fbc7845--k4fv8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--3--e7db599e29.novalocal-k8s-calico--apiserver--7d6fbc7845--k4fv8-eth0", GenerateName:"calico-apiserver-7d6fbc7845-", Namespace:"calico-apiserver", SelfLink:"", UID:"023d7701-7aa7-45fb-afbb-5bbce3c17aa6", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 44, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d6fbc7845", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-3-e7db599e29.novalocal", ContainerID:"", Pod:"calico-apiserver-7d6fbc7845-k4fv8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.74.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie06c2261622", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:44:27.221429 containerd[1575]: 2024-10-09 07:44:27.181 [INFO][5088] k8s.go 387: Calico CNI using IPs: [192.168.74.5/32] ContainerID="089dcad480bd842d906a672bc40abe0192cf406e4b3b76d523c32682ec64caa2" Namespace="calico-apiserver" Pod="calico-apiserver-7d6fbc7845-k4fv8" WorkloadEndpoint="ci--3975--2--2--3--e7db599e29.novalocal-k8s-calico--apiserver--7d6fbc7845--k4fv8-eth0" Oct 9 07:44:27.221429 containerd[1575]: 2024-10-09 07:44:27.182 [INFO][5088] dataplane_linux.go 68: Setting the host side veth name to calie06c2261622 ContainerID="089dcad480bd842d906a672bc40abe0192cf406e4b3b76d523c32682ec64caa2" Namespace="calico-apiserver" Pod="calico-apiserver-7d6fbc7845-k4fv8" WorkloadEndpoint="ci--3975--2--2--3--e7db599e29.novalocal-k8s-calico--apiserver--7d6fbc7845--k4fv8-eth0" Oct 9 07:44:27.221429 containerd[1575]: 2024-10-09 07:44:27.190 [INFO][5088] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="089dcad480bd842d906a672bc40abe0192cf406e4b3b76d523c32682ec64caa2" Namespace="calico-apiserver" Pod="calico-apiserver-7d6fbc7845-k4fv8" WorkloadEndpoint="ci--3975--2--2--3--e7db599e29.novalocal-k8s-calico--apiserver--7d6fbc7845--k4fv8-eth0" Oct 9 07:44:27.221429 containerd[1575]: 2024-10-09 07:44:27.190 [INFO][5088] k8s.go 
414: Added Mac, interface name, and active container ID to endpoint ContainerID="089dcad480bd842d906a672bc40abe0192cf406e4b3b76d523c32682ec64caa2" Namespace="calico-apiserver" Pod="calico-apiserver-7d6fbc7845-k4fv8" WorkloadEndpoint="ci--3975--2--2--3--e7db599e29.novalocal-k8s-calico--apiserver--7d6fbc7845--k4fv8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--3--e7db599e29.novalocal-k8s-calico--apiserver--7d6fbc7845--k4fv8-eth0", GenerateName:"calico-apiserver-7d6fbc7845-", Namespace:"calico-apiserver", SelfLink:"", UID:"023d7701-7aa7-45fb-afbb-5bbce3c17aa6", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 44, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d6fbc7845", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-3-e7db599e29.novalocal", ContainerID:"089dcad480bd842d906a672bc40abe0192cf406e4b3b76d523c32682ec64caa2", Pod:"calico-apiserver-7d6fbc7845-k4fv8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.74.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie06c2261622", MAC:"62:14:56:db:3f:2d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:44:27.221429 containerd[1575]: 2024-10-09 07:44:27.212 [INFO][5088] k8s.go 500: Wrote updated endpoint to datastore ContainerID="089dcad480bd842d906a672bc40abe0192cf406e4b3b76d523c32682ec64caa2" Namespace="calico-apiserver" Pod="calico-apiserver-7d6fbc7845-k4fv8" WorkloadEndpoint="ci--3975--2--2--3--e7db599e29.novalocal-k8s-calico--apiserver--7d6fbc7845--k4fv8-eth0" Oct 9 07:44:27.268204 containerd[1575]: time="2024-10-09T07:44:27.267823113Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:44:27.268204 containerd[1575]: time="2024-10-09T07:44:27.267914314Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:44:27.268204 containerd[1575]: time="2024-10-09T07:44:27.267970179Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:44:27.268204 containerd[1575]: time="2024-10-09T07:44:27.267988463Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:44:27.299998 systemd-networkd[1211]: cali301d3aca185: Link UP Oct 9 07:44:27.303288 systemd-networkd[1211]: cali301d3aca185: Gained carrier Oct 9 07:44:27.330990 containerd[1575]: 2024-10-09 07:44:27.102 [INFO][5099] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975--2--2--3--e7db599e29.novalocal-k8s-calico--apiserver--7d6fbc7845--927st-eth0 calico-apiserver-7d6fbc7845- calico-apiserver 2681edb7-52bf-4bfd-8e1c-683a742d4621 870 0 2024-10-09 07:44:26 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7d6fbc7845 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3975-2-2-3-e7db599e29.novalocal calico-apiserver-7d6fbc7845-927st eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali301d3aca185 [] []}} ContainerID="15d8fe21bf606fd4c6e589f0c435c6800cd8eeb8f27a4d7f5da055419e84639d" Namespace="calico-apiserver" Pod="calico-apiserver-7d6fbc7845-927st" WorkloadEndpoint="ci--3975--2--2--3--e7db599e29.novalocal-k8s-calico--apiserver--7d6fbc7845--927st-" Oct 9 07:44:27.330990 containerd[1575]: 2024-10-09 07:44:27.102 [INFO][5099] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="15d8fe21bf606fd4c6e589f0c435c6800cd8eeb8f27a4d7f5da055419e84639d" Namespace="calico-apiserver" Pod="calico-apiserver-7d6fbc7845-927st" WorkloadEndpoint="ci--3975--2--2--3--e7db599e29.novalocal-k8s-calico--apiserver--7d6fbc7845--927st-eth0" Oct 9 07:44:27.330990 containerd[1575]: 2024-10-09 07:44:27.174 [INFO][5116] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="15d8fe21bf606fd4c6e589f0c435c6800cd8eeb8f27a4d7f5da055419e84639d" HandleID="k8s-pod-network.15d8fe21bf606fd4c6e589f0c435c6800cd8eeb8f27a4d7f5da055419e84639d" Workload="ci--3975--2--2--3--e7db599e29.novalocal-k8s-calico--apiserver--7d6fbc7845--927st-eth0" Oct 9 07:44:27.330990 containerd[1575]: 2024-10-09 07:44:27.191 [INFO][5116] ipam_plugin.go 270: Auto assigning IP ContainerID="15d8fe21bf606fd4c6e589f0c435c6800cd8eeb8f27a4d7f5da055419e84639d" HandleID="k8s-pod-network.15d8fe21bf606fd4c6e589f0c435c6800cd8eeb8f27a4d7f5da055419e84639d" Workload="ci--3975--2--2--3--e7db599e29.novalocal-k8s-calico--apiserver--7d6fbc7845--927st-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000050740), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3975-2-2-3-e7db599e29.novalocal", "pod":"calico-apiserver-7d6fbc7845-927st", "timestamp":"2024-10-09 07:44:27.174642262 +0000 UTC"}, Hostname:"ci-3975-2-2-3-e7db599e29.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 07:44:27.330990 containerd[1575]: 2024-10-09 07:44:27.192 [INFO][5116] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:44:27.330990 containerd[1575]: 2024-10-09 07:44:27.192 [INFO][5116] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 07:44:27.330990 containerd[1575]: 2024-10-09 07:44:27.192 [INFO][5116] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975-2-2-3-e7db599e29.novalocal' Oct 9 07:44:27.330990 containerd[1575]: 2024-10-09 07:44:27.196 [INFO][5116] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.15d8fe21bf606fd4c6e589f0c435c6800cd8eeb8f27a4d7f5da055419e84639d" host="ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:44:27.330990 containerd[1575]: 2024-10-09 07:44:27.215 [INFO][5116] ipam.go 372: Looking up existing affinities for host host="ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:44:27.330990 containerd[1575]: 2024-10-09 07:44:27.224 [INFO][5116] ipam.go 489: Trying affinity for 192.168.74.0/26 host="ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:44:27.330990 containerd[1575]: 2024-10-09 07:44:27.229 [INFO][5116] ipam.go 155: Attempting to load block cidr=192.168.74.0/26 host="ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:44:27.330990 containerd[1575]: 2024-10-09 07:44:27.250 [INFO][5116] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.74.0/26 host="ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:44:27.330990 containerd[1575]: 2024-10-09 07:44:27.250 [INFO][5116] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.74.0/26 handle="k8s-pod-network.15d8fe21bf606fd4c6e589f0c435c6800cd8eeb8f27a4d7f5da055419e84639d" host="ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:44:27.330990 containerd[1575]: 2024-10-09 07:44:27.255 [INFO][5116] ipam.go 1685: Creating new handle: k8s-pod-network.15d8fe21bf606fd4c6e589f0c435c6800cd8eeb8f27a4d7f5da055419e84639d Oct 9 07:44:27.330990 containerd[1575]: 2024-10-09 07:44:27.266 [INFO][5116] ipam.go 1203: Writing block in order to claim IPs block=192.168.74.0/26 handle="k8s-pod-network.15d8fe21bf606fd4c6e589f0c435c6800cd8eeb8f27a4d7f5da055419e84639d" host="ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:44:27.330990 containerd[1575]: 2024-10-09 07:44:27.282 [INFO][5116] ipam.go 1216: Successfully claimed IPs: [192.168.74.6/26] block=192.168.74.0/26 handle="k8s-pod-network.15d8fe21bf606fd4c6e589f0c435c6800cd8eeb8f27a4d7f5da055419e84639d" host="ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:44:27.330990 containerd[1575]: 2024-10-09 07:44:27.282 [INFO][5116] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.74.6/26] handle="k8s-pod-network.15d8fe21bf606fd4c6e589f0c435c6800cd8eeb8f27a4d7f5da055419e84639d" host="ci-3975-2-2-3-e7db599e29.novalocal" Oct 9 07:44:27.330990 containerd[1575]: 2024-10-09 07:44:27.282 [INFO][5116] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 9 07:44:27.330990 containerd[1575]: 2024-10-09 07:44:27.282 [INFO][5116] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.74.6/26] IPv6=[] ContainerID="15d8fe21bf606fd4c6e589f0c435c6800cd8eeb8f27a4d7f5da055419e84639d" HandleID="k8s-pod-network.15d8fe21bf606fd4c6e589f0c435c6800cd8eeb8f27a4d7f5da055419e84639d" Workload="ci--3975--2--2--3--e7db599e29.novalocal-k8s-calico--apiserver--7d6fbc7845--927st-eth0" Oct 9 07:44:27.333885 containerd[1575]: 2024-10-09 07:44:27.285 [INFO][5099] k8s.go 386: Populated endpoint ContainerID="15d8fe21bf606fd4c6e589f0c435c6800cd8eeb8f27a4d7f5da055419e84639d" Namespace="calico-apiserver" Pod="calico-apiserver-7d6fbc7845-927st" WorkloadEndpoint="ci--3975--2--2--3--e7db599e29.novalocal-k8s-calico--apiserver--7d6fbc7845--927st-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--3--e7db599e29.novalocal-k8s-calico--apiserver--7d6fbc7845--927st-eth0", GenerateName:"calico-apiserver-7d6fbc7845-", Namespace:"calico-apiserver", SelfLink:"", UID:"2681edb7-52bf-4bfd-8e1c-683a742d4621", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 44, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d6fbc7845", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-3-e7db599e29.novalocal", ContainerID:"", Pod:"calico-apiserver-7d6fbc7845-927st", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.74.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali301d3aca185", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:44:27.333885 containerd[1575]: 2024-10-09 07:44:27.285 [INFO][5099] k8s.go 387: Calico CNI using IPs: [192.168.74.6/32] ContainerID="15d8fe21bf606fd4c6e589f0c435c6800cd8eeb8f27a4d7f5da055419e84639d" Namespace="calico-apiserver" Pod="calico-apiserver-7d6fbc7845-927st" WorkloadEndpoint="ci--3975--2--2--3--e7db599e29.novalocal-k8s-calico--apiserver--7d6fbc7845--927st-eth0" Oct 9 07:44:27.333885 containerd[1575]: 2024-10-09 07:44:27.286 [INFO][5099] dataplane_linux.go 68: Setting the host side veth name to cali301d3aca185 ContainerID="15d8fe21bf606fd4c6e589f0c435c6800cd8eeb8f27a4d7f5da055419e84639d" Namespace="calico-apiserver" Pod="calico-apiserver-7d6fbc7845-927st" WorkloadEndpoint="ci--3975--2--2--3--e7db599e29.novalocal-k8s-calico--apiserver--7d6fbc7845--927st-eth0" Oct 9 07:44:27.333885 containerd[1575]: 2024-10-09 07:44:27.302 [INFO][5099] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="15d8fe21bf606fd4c6e589f0c435c6800cd8eeb8f27a4d7f5da055419e84639d" Namespace="calico-apiserver" Pod="calico-apiserver-7d6fbc7845-927st" WorkloadEndpoint="ci--3975--2--2--3--e7db599e29.novalocal-k8s-calico--apiserver--7d6fbc7845--927st-eth0" Oct 9 07:44:27.333885 containerd[1575]: 2024-10-09 07:44:27.305 [INFO][5099] k8s.go 
414: Added Mac, interface name, and active container ID to endpoint ContainerID="15d8fe21bf606fd4c6e589f0c435c6800cd8eeb8f27a4d7f5da055419e84639d" Namespace="calico-apiserver" Pod="calico-apiserver-7d6fbc7845-927st" WorkloadEndpoint="ci--3975--2--2--3--e7db599e29.novalocal-k8s-calico--apiserver--7d6fbc7845--927st-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--3--e7db599e29.novalocal-k8s-calico--apiserver--7d6fbc7845--927st-eth0", GenerateName:"calico-apiserver-7d6fbc7845-", Namespace:"calico-apiserver", SelfLink:"", UID:"2681edb7-52bf-4bfd-8e1c-683a742d4621", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 44, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d6fbc7845", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-3-e7db599e29.novalocal", ContainerID:"15d8fe21bf606fd4c6e589f0c435c6800cd8eeb8f27a4d7f5da055419e84639d", Pod:"calico-apiserver-7d6fbc7845-927st", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.74.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali301d3aca185", MAC:"f2:6a:0f:11:d8:8f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:44:27.333885 containerd[1575]: 2024-10-09 07:44:27.322 [INFO][5099] k8s.go 500: Wrote updated endpoint to datastore ContainerID="15d8fe21bf606fd4c6e589f0c435c6800cd8eeb8f27a4d7f5da055419e84639d" Namespace="calico-apiserver" Pod="calico-apiserver-7d6fbc7845-927st" WorkloadEndpoint="ci--3975--2--2--3--e7db599e29.novalocal-k8s-calico--apiserver--7d6fbc7845--927st-eth0" Oct 9 07:44:27.380671 containerd[1575]: time="2024-10-09T07:44:27.380631453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d6fbc7845-k4fv8,Uid:023d7701-7aa7-45fb-afbb-5bbce3c17aa6,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"089dcad480bd842d906a672bc40abe0192cf406e4b3b76d523c32682ec64caa2\"" Oct 9 07:44:27.382790 containerd[1575]: time="2024-10-09T07:44:27.382669189Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\"" Oct 9 07:44:27.391365 containerd[1575]: time="2024-10-09T07:44:27.391089621Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:44:27.391365 containerd[1575]: time="2024-10-09T07:44:27.391150676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:44:27.391365 containerd[1575]: time="2024-10-09T07:44:27.391176285Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:44:27.391365 containerd[1575]: time="2024-10-09T07:44:27.391194599Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:44:27.462700 containerd[1575]: time="2024-10-09T07:44:27.462657606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d6fbc7845-927st,Uid:2681edb7-52bf-4bfd-8e1c-683a742d4621,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"15d8fe21bf606fd4c6e589f0c435c6800cd8eeb8f27a4d7f5da055419e84639d\"" Oct 9 07:44:28.178208 sshd[5083]: Accepted publickey for core from 172.24.4.1 port 36442 ssh2: RSA SHA256:iTqmmSA9RcIkWmF7myyFAqWL8kdaKMdVpBUk8UaNQPM Oct 9 07:44:28.184864 sshd[5083]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:44:28.228015 systemd-logind[1543]: New session 12 of user core. Oct 9 07:44:28.229366 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 9 07:44:29.112747 systemd-networkd[1211]: calie06c2261622: Gained IPv6LL Oct 9 07:44:29.305470 systemd-networkd[1211]: cali301d3aca185: Gained IPv6LL Oct 9 07:44:29.807676 sshd[5083]: pam_unix(sshd:session): session closed for user core Oct 9 07:44:29.824061 systemd[1]: sshd@9-172.24.4.70:22-172.24.4.1:36442.service: Deactivated successfully. Oct 9 07:44:29.828088 systemd-logind[1543]: Session 12 logged out. Waiting for processes to exit. Oct 9 07:44:29.828606 systemd[1]: session-12.scope: Deactivated successfully. Oct 9 07:44:29.830712 systemd-logind[1543]: Removed session 12. Oct 9 07:44:30.586118 systemd-journald[1128]: Under memory pressure, flushing caches. Oct 9 07:44:30.585600 systemd-resolved[1467]: Under memory pressure, flushing caches. Oct 9 07:44:30.585635 systemd-resolved[1467]: Flushed all caches. Oct 9 07:44:31.250663 containerd[1575]: time="2024-10-09T07:44:31.250625929Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:44:31.255836 containerd[1575]: time="2024-10-09T07:44:31.255634556Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=40419849" Oct 9 07:44:31.257806 containerd[1575]: time="2024-10-09T07:44:31.257616800Z" level=info msg="ImageCreate event name:\"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:44:31.264747 containerd[1575]: time="2024-10-09T07:44:31.264460543Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:44:31.266456 containerd[1575]: time="2024-10-09T07:44:31.266351605Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"41912266\" in 3.883647971s" Oct 9 07:44:31.266456 containerd[1575]: time="2024-10-09T07:44:31.266385618Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\"" Oct 9 07:44:31.268407 containerd[1575]: time="2024-10-09T07:44:31.267802199Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\"" Oct 9 07:44:31.269141 containerd[1575]: time="2024-10-09T07:44:31.269034273Z" level=info 
msg="CreateContainer within sandbox \"089dcad480bd842d906a672bc40abe0192cf406e4b3b76d523c32682ec64caa2\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 9 07:44:31.292258 containerd[1575]: time="2024-10-09T07:44:31.292128568Z" level=info msg="CreateContainer within sandbox \"089dcad480bd842d906a672bc40abe0192cf406e4b3b76d523c32682ec64caa2\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d575a3389ac7578e9c14dc2161954c81bb2752e4fe4fdff2c47c8e9ef47fb626\"" Oct 9 07:44:31.293084 containerd[1575]: time="2024-10-09T07:44:31.293050389Z" level=info msg="StartContainer for \"d575a3389ac7578e9c14dc2161954c81bb2752e4fe4fdff2c47c8e9ef47fb626\"" Oct 9 07:44:31.382173 containerd[1575]: time="2024-10-09T07:44:31.382069462Z" level=info msg="StartContainer for \"d575a3389ac7578e9c14dc2161954c81bb2752e4fe4fdff2c47c8e9ef47fb626\" returns successfully" Oct 9 07:44:31.691158 containerd[1575]: time="2024-10-09T07:44:31.691084273Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:44:31.693022 containerd[1575]: time="2024-10-09T07:44:31.692838898Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=77" Oct 9 07:44:31.699533 containerd[1575]: time="2024-10-09T07:44:31.699454905Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"41912266\" in 431.613191ms" Oct 9 07:44:31.699755 containerd[1575]: time="2024-10-09T07:44:31.699493116Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\"" Oct 9 07:44:31.702909 containerd[1575]: time="2024-10-09T07:44:31.702865400Z" level=info msg="CreateContainer within sandbox \"15d8fe21bf606fd4c6e589f0c435c6800cd8eeb8f27a4d7f5da055419e84639d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 9 07:44:31.735594 containerd[1575]: time="2024-10-09T07:44:31.735444268Z" level=info msg="CreateContainer within sandbox \"15d8fe21bf606fd4c6e589f0c435c6800cd8eeb8f27a4d7f5da055419e84639d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1c24bf4aa69d0b80b7f9b7c28c9653cca6136fb458eb2c30378ec3b26e3d672e\"" Oct 9 07:44:31.736875 containerd[1575]: time="2024-10-09T07:44:31.736829701Z" level=info msg="StartContainer for \"1c24bf4aa69d0b80b7f9b7c28c9653cca6136fb458eb2c30378ec3b26e3d672e\"" Oct 9 07:44:31.868484 containerd[1575]: time="2024-10-09T07:44:31.866369227Z" level=info msg="StartContainer for \"1c24bf4aa69d0b80b7f9b7c28c9653cca6136fb458eb2c30378ec3b26e3d672e\" returns successfully" Oct 9 07:44:32.205914 kubelet[2836]: I1009 07:44:32.205383 2836 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7d6fbc7845-927st" podStartSLOduration=1.968199933 podStartE2EDuration="6.20340475s" podCreationTimestamp="2024-10-09 07:44:26 +0000 UTC" firstStartedPulling="2024-10-09 07:44:27.464724298 +0000 UTC m=+72.260816950" lastFinishedPulling="2024-10-09 07:44:31.699929125 +0000 UTC m=+76.496021767" observedRunningTime="2024-10-09 07:44:32.183781062 +0000 UTC m=+76.979873724" 
watchObservedRunningTime="2024-10-09 07:44:32.20340475 +0000 UTC m=+76.999497392" Oct 9 07:44:32.237397 kubelet[2836]: I1009 07:44:32.237358 2836 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7d6fbc7845-k4fv8" podStartSLOduration=3.352906521 podStartE2EDuration="7.237302096s" podCreationTimestamp="2024-10-09 07:44:25 +0000 UTC" firstStartedPulling="2024-10-09 07:44:27.382271983 +0000 UTC m=+72.178364625" lastFinishedPulling="2024-10-09 07:44:31.266667558 +0000 UTC m=+76.062760200" observedRunningTime="2024-10-09 07:44:32.20549786 +0000 UTC m=+77.001590523" watchObservedRunningTime="2024-10-09 07:44:32.237302096 +0000 UTC m=+77.033394748" Oct 9 07:44:32.633934 systemd-resolved[1467]: Under memory pressure, flushing caches. Oct 9 07:44:32.635962 systemd-journald[1128]: Under memory pressure, flushing caches. Oct 9 07:44:32.633954 systemd-resolved[1467]: Flushed all caches. Oct 9 07:44:34.823065 systemd[1]: Started sshd@10-172.24.4.70:22-172.24.4.1:57070.service - OpenSSH per-connection server daemon (172.24.4.1:57070). Oct 9 07:44:35.886139 sshd[5341]: Accepted publickey for core from 172.24.4.1 port 57070 ssh2: RSA SHA256:iTqmmSA9RcIkWmF7myyFAqWL8kdaKMdVpBUk8UaNQPM Oct 9 07:44:35.893266 sshd[5341]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:44:35.904731 systemd-logind[1543]: New session 13 of user core. Oct 9 07:44:35.911922 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 9 07:44:37.975094 sshd[5341]: pam_unix(sshd:session): session closed for user core Oct 9 07:44:37.983279 systemd[1]: sshd@10-172.24.4.70:22-172.24.4.1:57070.service: Deactivated successfully. Oct 9 07:44:37.991461 systemd-logind[1543]: Session 13 logged out. Waiting for processes to exit. Oct 9 07:44:37.994891 systemd[1]: session-13.scope: Deactivated successfully. Oct 9 07:44:37.997859 systemd-logind[1543]: Removed session 13. Oct 9 07:44:38.585111 systemd-resolved[1467]: Under memory pressure, flushing caches. Oct 9 07:44:38.586568 systemd-journald[1128]: Under memory pressure, flushing caches. Oct 9 07:44:38.585150 systemd-resolved[1467]: Flushed all caches. Oct 9 07:44:42.987917 systemd[1]: Started sshd@11-172.24.4.70:22-172.24.4.1:57074.service - OpenSSH per-connection server daemon (172.24.4.1:57074). Oct 9 07:44:44.254596 sshd[5363]: Accepted publickey for core from 172.24.4.1 port 57074 ssh2: RSA SHA256:iTqmmSA9RcIkWmF7myyFAqWL8kdaKMdVpBUk8UaNQPM Oct 9 07:44:44.256736 sshd[5363]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:44:44.272986 systemd-logind[1543]: New session 14 of user core. Oct 9 07:44:44.280144 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 9 07:44:45.080121 sshd[5363]: pam_unix(sshd:session): session closed for user core Oct 9 07:44:45.080148 systemd[1]: Started sshd@12-172.24.4.70:22-172.24.4.1:50538.service - OpenSSH per-connection server daemon (172.24.4.1:50538). Oct 9 07:44:45.100609 systemd[1]: sshd@11-172.24.4.70:22-172.24.4.1:57074.service: Deactivated successfully. Oct 9 07:44:45.103619 systemd-logind[1543]: Session 14 logged out. Waiting for processes to exit. Oct 9 07:44:45.109199 systemd[1]: session-14.scope: Deactivated successfully. Oct 9 07:44:45.110590 systemd-logind[1543]: Removed session 14. 
Oct 9 07:44:46.516594 sshd[5375]: Accepted publickey for core from 172.24.4.1 port 50538 ssh2: RSA SHA256:iTqmmSA9RcIkWmF7myyFAqWL8kdaKMdVpBUk8UaNQPM Oct 9 07:44:46.519924 sshd[5375]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:44:46.525911 systemd-logind[1543]: New session 15 of user core. Oct 9 07:44:46.531097 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 9 07:44:46.586761 systemd-journald[1128]: Under memory pressure, flushing caches. Oct 9 07:44:46.584849 systemd-resolved[1467]: Under memory pressure, flushing caches. Oct 9 07:44:46.584855 systemd-resolved[1467]: Flushed all caches. Oct 9 07:44:47.444449 sshd[5375]: pam_unix(sshd:session): session closed for user core Oct 9 07:44:47.454061 systemd[1]: Started sshd@13-172.24.4.70:22-172.24.4.1:50548.service - OpenSSH per-connection server daemon (172.24.4.1:50548). Oct 9 07:44:47.455514 systemd[1]: sshd@12-172.24.4.70:22-172.24.4.1:50538.service: Deactivated successfully. Oct 9 07:44:47.459447 systemd[1]: session-15.scope: Deactivated successfully. Oct 9 07:44:47.468918 systemd-logind[1543]: Session 15 logged out. Waiting for processes to exit. Oct 9 07:44:47.471985 systemd-logind[1543]: Removed session 15. Oct 9 07:44:48.632852 systemd-resolved[1467]: Under memory pressure, flushing caches. Oct 9 07:44:48.636065 systemd-journald[1128]: Under memory pressure, flushing caches. Oct 9 07:44:48.632891 systemd-resolved[1467]: Flushed all caches. Oct 9 07:44:48.769385 sshd[5397]: Accepted publickey for core from 172.24.4.1 port 50548 ssh2: RSA SHA256:iTqmmSA9RcIkWmF7myyFAqWL8kdaKMdVpBUk8UaNQPM Oct 9 07:44:48.772674 sshd[5397]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:44:48.784490 systemd-logind[1543]: New session 16 of user core. Oct 9 07:44:48.791226 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 9 07:44:49.712799 sshd[5397]: pam_unix(sshd:session): session closed for user core Oct 9 07:44:49.741776 systemd[1]: sshd@13-172.24.4.70:22-172.24.4.1:50548.service: Deactivated successfully. Oct 9 07:44:49.750267 systemd[1]: session-16.scope: Deactivated successfully. Oct 9 07:44:49.755406 systemd-logind[1543]: Session 16 logged out. Waiting for processes to exit. Oct 9 07:44:49.758145 systemd-logind[1543]: Removed session 16. Oct 9 07:44:54.724040 systemd[1]: Started sshd@14-172.24.4.70:22-172.24.4.1:52528.service - OpenSSH per-connection server daemon (172.24.4.1:52528). Oct 9 07:44:56.143744 sshd[5464]: Accepted publickey for core from 172.24.4.1 port 52528 ssh2: RSA SHA256:iTqmmSA9RcIkWmF7myyFAqWL8kdaKMdVpBUk8UaNQPM Oct 9 07:44:56.147911 sshd[5464]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:44:56.160769 systemd-logind[1543]: New session 17 of user core. Oct 9 07:44:56.167031 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 9 07:44:56.923808 sshd[5464]: pam_unix(sshd:session): session closed for user core Oct 9 07:44:56.927279 systemd[1]: sshd@14-172.24.4.70:22-172.24.4.1:52528.service: Deactivated successfully. Oct 9 07:44:56.933283 systemd[1]: session-17.scope: Deactivated successfully. Oct 9 07:44:56.935057 systemd-logind[1543]: Session 17 logged out. Waiting for processes to exit. Oct 9 07:44:56.936757 systemd-logind[1543]: Removed session 17. Oct 9 07:45:01.936409 systemd[1]: Started sshd@15-172.24.4.70:22-172.24.4.1:52530.service - OpenSSH per-connection server daemon (172.24.4.1:52530). 
Oct 9 07:45:03.210265 sshd[5486]: Accepted publickey for core from 172.24.4.1 port 52530 ssh2: RSA SHA256:iTqmmSA9RcIkWmF7myyFAqWL8kdaKMdVpBUk8UaNQPM Oct 9 07:45:03.214983 sshd[5486]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:45:03.226499 systemd-logind[1543]: New session 18 of user core. Oct 9 07:45:03.231193 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 9 07:45:04.213945 sshd[5486]: pam_unix(sshd:session): session closed for user core Oct 9 07:45:04.221322 systemd[1]: sshd@15-172.24.4.70:22-172.24.4.1:52530.service: Deactivated successfully. Oct 9 07:45:04.227315 systemd-logind[1543]: Session 18 logged out. Waiting for processes to exit. Oct 9 07:45:04.228423 systemd[1]: session-18.scope: Deactivated successfully. Oct 9 07:45:04.233298 systemd-logind[1543]: Removed session 18. Oct 9 07:45:09.222908 systemd[1]: Started sshd@16-172.24.4.70:22-172.24.4.1:59644.service - OpenSSH per-connection server daemon (172.24.4.1:59644). Oct 9 07:45:10.517824 sshd[5507]: Accepted publickey for core from 172.24.4.1 port 59644 ssh2: RSA SHA256:iTqmmSA9RcIkWmF7myyFAqWL8kdaKMdVpBUk8UaNQPM Oct 9 07:45:10.521449 sshd[5507]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:45:10.533941 systemd-logind[1543]: New session 19 of user core. Oct 9 07:45:10.543851 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 9 07:45:11.402639 sshd[5507]: pam_unix(sshd:session): session closed for user core Oct 9 07:45:11.409819 systemd[1]: Started sshd@17-172.24.4.70:22-172.24.4.1:59648.service - OpenSSH per-connection server daemon (172.24.4.1:59648). Oct 9 07:45:11.411437 systemd[1]: sshd@16-172.24.4.70:22-172.24.4.1:59644.service: Deactivated successfully. Oct 9 07:45:11.416850 systemd[1]: session-19.scope: Deactivated successfully. Oct 9 07:45:11.418926 systemd-logind[1543]: Session 19 logged out. Waiting for processes to exit. Oct 9 07:45:11.420041 systemd-logind[1543]: Removed session 19. Oct 9 07:45:12.603929 sshd[5518]: Accepted publickey for core from 172.24.4.1 port 59648 ssh2: RSA SHA256:iTqmmSA9RcIkWmF7myyFAqWL8kdaKMdVpBUk8UaNQPM Oct 9 07:45:12.605828 sshd[5518]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:45:12.611085 systemd-logind[1543]: New session 20 of user core. Oct 9 07:45:12.620809 systemd[1]: Started session-20.scope - Session 20 of User core. Oct 9 07:45:14.110427 sshd[5518]: pam_unix(sshd:session): session closed for user core Oct 9 07:45:14.116863 systemd[1]: Started sshd@18-172.24.4.70:22-172.24.4.1:59658.service - OpenSSH per-connection server daemon (172.24.4.1:59658). Oct 9 07:45:14.117356 systemd[1]: sshd@17-172.24.4.70:22-172.24.4.1:59648.service: Deactivated successfully. Oct 9 07:45:14.123963 systemd-logind[1543]: Session 20 logged out. Waiting for processes to exit. Oct 9 07:45:14.124485 systemd[1]: session-20.scope: Deactivated successfully. Oct 9 07:45:14.128973 systemd-logind[1543]: Removed session 20. Oct 9 07:45:15.537986 sshd[5544]: Accepted publickey for core from 172.24.4.1 port 59658 ssh2: RSA SHA256:iTqmmSA9RcIkWmF7myyFAqWL8kdaKMdVpBUk8UaNQPM Oct 9 07:45:15.543107 sshd[5544]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:45:15.560771 systemd-logind[1543]: New session 21 of user core. Oct 9 07:45:15.568851 systemd[1]: Started session-21.scope - Session 21 of User core. 
Oct 9 07:45:18.702748 sshd[5544]: pam_unix(sshd:session): session closed for user core Oct 9 07:45:18.704476 systemd[1]: Started sshd@19-172.24.4.70:22-172.24.4.1:58638.service - OpenSSH per-connection server daemon (172.24.4.1:58638). Oct 9 07:45:18.713914 systemd[1]: sshd@18-172.24.4.70:22-172.24.4.1:59658.service: Deactivated successfully. Oct 9 07:45:18.727002 systemd[1]: session-21.scope: Deactivated successfully. Oct 9 07:45:18.729221 systemd-logind[1543]: Session 21 logged out. Waiting for processes to exit. Oct 9 07:45:18.732077 systemd-logind[1543]: Removed session 21. Oct 9 07:45:19.939708 sshd[5570]: Accepted publickey for core from 172.24.4.1 port 58638 ssh2: RSA SHA256:iTqmmSA9RcIkWmF7myyFAqWL8kdaKMdVpBUk8UaNQPM Oct 9 07:45:19.944049 sshd[5570]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:45:19.957711 systemd-logind[1543]: New session 22 of user core. Oct 9 07:45:19.965354 systemd[1]: Started session-22.scope - Session 22 of User core. Oct 9 07:45:22.312395 sshd[5570]: pam_unix(sshd:session): session closed for user core Oct 9 07:45:22.325155 systemd[1]: Started sshd@20-172.24.4.70:22-172.24.4.1:58650.service - OpenSSH per-connection server daemon (172.24.4.1:58650). Oct 9 07:45:22.328280 systemd[1]: sshd@19-172.24.4.70:22-172.24.4.1:58638.service: Deactivated successfully. Oct 9 07:45:22.337384 systemd[1]: session-22.scope: Deactivated successfully. Oct 9 07:45:22.341360 systemd-logind[1543]: Session 22 logged out. Waiting for processes to exit. Oct 9 07:45:22.344753 systemd-logind[1543]: Removed session 22. Oct 9 07:45:22.626059 systemd-journald[1128]: Under memory pressure, flushing caches. Oct 9 07:45:22.619634 systemd-resolved[1467]: Under memory pressure, flushing caches. Oct 9 07:45:22.619673 systemd-resolved[1467]: Flushed all caches. Oct 9 07:45:22.808234 systemd[1]: run-containerd-runc-k8s.io-4e3087a0b8c7479300bf25660589c2b4a81474cbc4def7f6fc535cf82f44643c-runc.PthqYz.mount: Deactivated successfully. Oct 9 07:45:23.680111 sshd[5589]: Accepted publickey for core from 172.24.4.1 port 58650 ssh2: RSA SHA256:iTqmmSA9RcIkWmF7myyFAqWL8kdaKMdVpBUk8UaNQPM Oct 9 07:45:23.684505 sshd[5589]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:45:23.695890 systemd-logind[1543]: New session 23 of user core. Oct 9 07:45:23.702468 systemd[1]: Started session-23.scope - Session 23 of User core. Oct 9 07:45:24.886722 sshd[5589]: pam_unix(sshd:session): session closed for user core Oct 9 07:45:24.900714 systemd[1]: sshd@20-172.24.4.70:22-172.24.4.1:58650.service: Deactivated successfully. Oct 9 07:45:24.907469 systemd[1]: session-23.scope: Deactivated successfully. Oct 9 07:45:24.907891 systemd-logind[1543]: Session 23 logged out. Waiting for processes to exit. Oct 9 07:45:24.912242 systemd-logind[1543]: Removed session 23. Oct 9 07:45:29.897202 systemd[1]: Started sshd@21-172.24.4.70:22-172.24.4.1:40726.service - OpenSSH per-connection server daemon (172.24.4.1:40726). Oct 9 07:45:31.020906 sshd[5655]: Accepted publickey for core from 172.24.4.1 port 40726 ssh2: RSA SHA256:iTqmmSA9RcIkWmF7myyFAqWL8kdaKMdVpBUk8UaNQPM Oct 9 07:45:31.024942 sshd[5655]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:45:31.036822 systemd-logind[1543]: New session 24 of user core. Oct 9 07:45:31.042135 systemd[1]: Started session-24.scope - Session 24 of User core. 
Oct 9 07:45:31.762792 sshd[5655]: pam_unix(sshd:session): session closed for user core Oct 9 07:45:31.767977 systemd[1]: sshd@21-172.24.4.70:22-172.24.4.1:40726.service: Deactivated successfully. Oct 9 07:45:31.769737 systemd-logind[1543]: Session 24 logged out. Waiting for processes to exit. Oct 9 07:45:31.771283 systemd[1]: session-24.scope: Deactivated successfully. Oct 9 07:45:31.772817 systemd-logind[1543]: Removed session 24. Oct 9 07:45:36.775700 systemd[1]: Started sshd@22-172.24.4.70:22-172.24.4.1:50518.service - OpenSSH per-connection server daemon (172.24.4.1:50518). Oct 9 07:45:38.002967 sshd[5675]: Accepted publickey for core from 172.24.4.1 port 50518 ssh2: RSA SHA256:iTqmmSA9RcIkWmF7myyFAqWL8kdaKMdVpBUk8UaNQPM Oct 9 07:45:38.006305 sshd[5675]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:45:38.020051 systemd-logind[1543]: New session 25 of user core. Oct 9 07:45:38.030123 systemd[1]: Started session-25.scope - Session 25 of User core. Oct 9 07:45:38.799235 sshd[5675]: pam_unix(sshd:session): session closed for user core Oct 9 07:45:38.805229 systemd[1]: sshd@22-172.24.4.70:22-172.24.4.1:50518.service: Deactivated successfully. Oct 9 07:45:38.813001 systemd-logind[1543]: Session 25 logged out. Waiting for processes to exit. Oct 9 07:45:38.814376 systemd[1]: session-25.scope: Deactivated successfully. Oct 9 07:45:38.818242 systemd-logind[1543]: Removed session 25. Oct 9 07:45:43.811356 systemd[1]: Started sshd@23-172.24.4.70:22-172.24.4.1:50528.service - OpenSSH per-connection server daemon (172.24.4.1:50528). Oct 9 07:45:44.900818 sshd[5705]: Accepted publickey for core from 172.24.4.1 port 50528 ssh2: RSA SHA256:iTqmmSA9RcIkWmF7myyFAqWL8kdaKMdVpBUk8UaNQPM Oct 9 07:45:44.905709 sshd[5705]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:45:44.917577 systemd-logind[1543]: New session 26 of user core. Oct 9 07:45:44.926239 systemd[1]: Started session-26.scope - Session 26 of User core. Oct 9 07:45:45.705966 sshd[5705]: pam_unix(sshd:session): session closed for user core Oct 9 07:45:45.710597 systemd[1]: sshd@23-172.24.4.70:22-172.24.4.1:50528.service: Deactivated successfully. Oct 9 07:45:45.714948 systemd-logind[1543]: Session 26 logged out. Waiting for processes to exit. Oct 9 07:45:45.716243 systemd[1]: session-26.scope: Deactivated successfully. Oct 9 07:45:45.717664 systemd-logind[1543]: Removed session 26. Oct 9 07:45:50.725137 systemd[1]: Started sshd@24-172.24.4.70:22-172.24.4.1:49798.service - OpenSSH per-connection server daemon (172.24.4.1:49798). Oct 9 07:45:51.893449 sshd[5722]: Accepted publickey for core from 172.24.4.1 port 49798 ssh2: RSA SHA256:iTqmmSA9RcIkWmF7myyFAqWL8kdaKMdVpBUk8UaNQPM Oct 9 07:45:51.896506 sshd[5722]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:45:51.907957 systemd-logind[1543]: New session 27 of user core. Oct 9 07:45:51.917211 systemd[1]: Started session-27.scope - Session 27 of User core. Oct 9 07:45:52.706654 sshd[5722]: pam_unix(sshd:session): session closed for user core Oct 9 07:45:52.715584 systemd[1]: sshd@24-172.24.4.70:22-172.24.4.1:49798.service: Deactivated successfully. Oct 9 07:45:52.722617 systemd[1]: session-27.scope: Deactivated successfully. Oct 9 07:45:52.724515 systemd-logind[1543]: Session 27 logged out. Waiting for processes to exit. Oct 9 07:45:52.727289 systemd-logind[1543]: Removed session 27.