Oct 8 20:19:02.971295 kernel: Linux version 6.6.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Oct 8 18:24:27 -00 2024
Oct 8 20:19:02.971387 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=ed527eaf992abc270af9987554566193214d123941456fd3066b47855e5178a5
Oct 8 20:19:02.971417 kernel: BIOS-provided physical RAM map:
Oct 8 20:19:02.971436 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Oct 8 20:19:02.971453 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Oct 8 20:19:02.971471 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Oct 8 20:19:02.971492 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Oct 8 20:19:02.971510 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Oct 8 20:19:02.971528 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct 8 20:19:02.971549 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Oct 8 20:19:02.971568 kernel: NX (Execute Disable) protection: active
Oct 8 20:19:02.971586 kernel: APIC: Static calls initialized
Oct 8 20:19:02.971603 kernel: SMBIOS 2.8 present.
Oct 8 20:19:02.971622 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Oct 8 20:19:02.971644 kernel: Hypervisor detected: KVM
Oct 8 20:19:02.971667 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct 8 20:19:02.971686 kernel: kvm-clock: using sched offset of 4661595759 cycles
Oct 8 20:19:02.971706 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct 8 20:19:02.971726 kernel: tsc: Detected 1996.249 MHz processor
Oct 8 20:19:02.971746 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 8 20:19:02.971766 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 8 20:19:02.971786 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Oct 8 20:19:02.971805 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Oct 8 20:19:02.971825 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Oct 8 20:19:02.971848 kernel: ACPI: Early table checksum verification disabled
Oct 8 20:19:02.971868 kernel: ACPI: RSDP 0x00000000000F5930 000014 (v00 BOCHS )
Oct 8 20:19:02.971887 kernel: ACPI: RSDT 0x000000007FFE1848 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 20:19:02.971906 kernel: ACPI: FACP 0x000000007FFE172C 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 20:19:02.973961 kernel: ACPI: DSDT 0x000000007FFE0040 0016EC (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 20:19:02.973997 kernel: ACPI: FACS 0x000000007FFE0000 000040
Oct 8 20:19:02.974018 kernel: ACPI: APIC 0x000000007FFE17A0 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 20:19:02.974039 kernel: ACPI: WAET 0x000000007FFE1820 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 20:19:02.974059 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe172c-0x7ffe179f]
Oct 8 20:19:02.974085 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe172b]
Oct 8 20:19:02.974105 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Oct 8 20:19:02.974124 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17a0-0x7ffe181f]
Oct 8 20:19:02.974144 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe1820-0x7ffe1847]
Oct 8 20:19:02.974163 kernel: No NUMA configuration found
Oct 8 20:19:02.974183 kernel: Faking a node at [mem 0x0000000000000000-0x000000007ffdcfff]
Oct 8 20:19:02.974203 kernel: NODE_DATA(0) allocated [mem 0x7ffd7000-0x7ffdcfff]
Oct 8 20:19:02.974230 kernel: Zone ranges:
Oct 8 20:19:02.974254 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Oct 8 20:19:02.974274 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdcfff]
Oct 8 20:19:02.974295 kernel: Normal empty
Oct 8 20:19:02.974315 kernel: Movable zone start for each node
Oct 8 20:19:02.974335 kernel: Early memory node ranges
Oct 8 20:19:02.974355 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Oct 8 20:19:02.974379 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Oct 8 20:19:02.974399 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdcfff]
Oct 8 20:19:02.974420 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 8 20:19:02.974440 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct 8 20:19:02.974460 kernel: On node 0, zone DMA32: 35 pages in unavailable ranges
Oct 8 20:19:02.974480 kernel: ACPI: PM-Timer IO Port: 0x608
Oct 8 20:19:02.974500 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct 8 20:19:02.974521 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct 8 20:19:02.974541 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct 8 20:19:02.974565 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct 8 20:19:02.974586 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 8 20:19:02.974606 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct 8 20:19:02.974626 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct 8 20:19:02.974646 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 8 20:19:02.974666 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Oct 8 20:19:02.974687 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct 8 20:19:02.974707 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Oct 8 20:19:02.974727 kernel: Booting paravirtualized kernel on KVM
Oct 8 20:19:02.974748 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 8 20:19:02.974773 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Oct 8 20:19:02.974793 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576
Oct 8 20:19:02.974814 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152
Oct 8 20:19:02.974834 kernel: pcpu-alloc: [0] 0 1
Oct 8 20:19:02.974854 kernel: kvm-guest: PV spinlocks disabled, no host support
Oct 8 20:19:02.974877 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=ed527eaf992abc270af9987554566193214d123941456fd3066b47855e5178a5
Oct 8 20:19:02.974899 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Oct 8 20:19:02.974953 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 8 20:19:02.974977 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Oct 8 20:19:02.974997 kernel: Fallback order for Node 0: 0
Oct 8 20:19:02.975018 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515805
Oct 8 20:19:02.975039 kernel: Policy zone: DMA32
Oct 8 20:19:02.975059 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 8 20:19:02.975080 kernel: Memory: 1971212K/2096620K available (12288K kernel code, 2305K rwdata, 22716K rodata, 42828K init, 2360K bss, 125148K reserved, 0K cma-reserved)
Oct 8 20:19:02.975100 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Oct 8 20:19:02.975121 kernel: ftrace: allocating 37784 entries in 148 pages
Oct 8 20:19:02.975146 kernel: ftrace: allocated 148 pages with 3 groups
Oct 8 20:19:02.975166 kernel: Dynamic Preempt: voluntary
Oct 8 20:19:02.975187 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 8 20:19:02.975208 kernel: rcu: RCU event tracing is enabled.
Oct 8 20:19:02.975229 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Oct 8 20:19:02.975250 kernel: Trampoline variant of Tasks RCU enabled.
Oct 8 20:19:02.975271 kernel: Rude variant of Tasks RCU enabled.
Oct 8 20:19:02.975291 kernel: Tracing variant of Tasks RCU enabled.
Oct 8 20:19:02.975311 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 8 20:19:02.975336 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Oct 8 20:19:02.975356 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Oct 8 20:19:02.975377 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 8 20:19:02.975397 kernel: Console: colour VGA+ 80x25
Oct 8 20:19:02.975417 kernel: printk: console [tty0] enabled
Oct 8 20:19:02.975437 kernel: printk: console [ttyS0] enabled
Oct 8 20:19:02.975457 kernel: ACPI: Core revision 20230628
Oct 8 20:19:02.975478 kernel: APIC: Switch to symmetric I/O mode setup
Oct 8 20:19:02.975498 kernel: x2apic enabled
Oct 8 20:19:02.975519 kernel: APIC: Switched APIC routing to: physical x2apic
Oct 8 20:19:02.975543 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Oct 8 20:19:02.975564 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Oct 8 20:19:02.975584 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249)
Oct 8 20:19:02.975605 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Oct 8 20:19:02.975625 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Oct 8 20:19:02.975646 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 8 20:19:02.975666 kernel: Spectre V2 : Mitigation: Retpolines
Oct 8 20:19:02.975686 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Oct 8 20:19:02.975707 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Oct 8 20:19:02.975731 kernel: Speculative Store Bypass: Vulnerable
Oct 8 20:19:02.975752 kernel: x86/fpu: x87 FPU will use FXSAVE
Oct 8 20:19:02.975772 kernel: Freeing SMP alternatives memory: 32K
Oct 8 20:19:02.975792 kernel: pid_max: default: 32768 minimum: 301
Oct 8 20:19:02.975812 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Oct 8 20:19:02.975832 kernel: landlock: Up and running.
Oct 8 20:19:02.975852 kernel: SELinux: Initializing.
Oct 8 20:19:02.975873 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Oct 8 20:19:02.975910 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Oct 8 20:19:02.977971 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3)
Oct 8 20:19:02.978000 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Oct 8 20:19:02.978028 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Oct 8 20:19:02.978050 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Oct 8 20:19:02.978072 kernel: Performance Events: AMD PMU driver.
Oct 8 20:19:02.978093 kernel: ... version: 0
Oct 8 20:19:02.978115 kernel: ... bit width: 48
Oct 8 20:19:02.978140 kernel: ... generic registers: 4
Oct 8 20:19:02.978162 kernel: ... value mask: 0000ffffffffffff
Oct 8 20:19:02.978183 kernel: ... max period: 00007fffffffffff
Oct 8 20:19:02.978205 kernel: ... fixed-purpose events: 0
Oct 8 20:19:02.978226 kernel: ... event mask: 000000000000000f
Oct 8 20:19:02.978247 kernel: signal: max sigframe size: 1440
Oct 8 20:19:02.978268 kernel: rcu: Hierarchical SRCU implementation.
Oct 8 20:19:02.978290 kernel: rcu: Max phase no-delay instances is 400.
Oct 8 20:19:02.978311 kernel: smp: Bringing up secondary CPUs ...
Oct 8 20:19:02.978336 kernel: smpboot: x86: Booting SMP configuration:
Oct 8 20:19:02.978358 kernel: .... node #0, CPUs: #1
Oct 8 20:19:02.978379 kernel: smp: Brought up 1 node, 2 CPUs
Oct 8 20:19:02.978399 kernel: smpboot: Max logical packages: 2
Oct 8 20:19:02.978421 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS)
Oct 8 20:19:02.978442 kernel: devtmpfs: initialized
Oct 8 20:19:02.978463 kernel: x86/mm: Memory block size: 128MB
Oct 8 20:19:02.978484 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 8 20:19:02.978506 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Oct 8 20:19:02.978527 kernel: pinctrl core: initialized pinctrl subsystem
Oct 8 20:19:02.978553 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 8 20:19:02.978574 kernel: audit: initializing netlink subsys (disabled)
Oct 8 20:19:02.978596 kernel: audit: type=2000 audit(1728418742.361:1): state=initialized audit_enabled=0 res=1
Oct 8 20:19:02.978617 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 8 20:19:02.978638 kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 8 20:19:02.978659 kernel: cpuidle: using governor menu
Oct 8 20:19:02.978681 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 8 20:19:02.978702 kernel: dca service started, version 1.12.1
Oct 8 20:19:02.978723 kernel: PCI: Using configuration type 1 for base access
Oct 8 20:19:02.978749 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 8 20:19:02.978770 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 8 20:19:02.978792 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct 8 20:19:02.978813 kernel: ACPI: Added _OSI(Module Device)
Oct 8 20:19:02.978834 kernel: ACPI: Added _OSI(Processor Device)
Oct 8 20:19:02.978855 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct 8 20:19:02.978876 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 8 20:19:02.978897 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 8 20:19:02.978918 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Oct 8 20:19:02.978977 kernel: ACPI: Interpreter enabled
Oct 8 20:19:02.978999 kernel: ACPI: PM: (supports S0 S3 S5)
Oct 8 20:19:02.979020 kernel: ACPI: Using IOAPIC for interrupt routing
Oct 8 20:19:02.979042 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 8 20:19:02.979063 kernel: PCI: Using E820 reservations for host bridge windows
Oct 8 20:19:02.979084 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Oct 8 20:19:02.979105 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 8 20:19:02.979470 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Oct 8 20:19:02.979721 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Oct 8 20:19:02.982013 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Oct 8 20:19:02.982084 kernel: acpiphp: Slot [3] registered
Oct 8 20:19:02.982122 kernel: acpiphp: Slot [4] registered
Oct 8 20:19:02.982156 kernel: acpiphp: Slot [5] registered
Oct 8 20:19:02.982181 kernel: acpiphp: Slot [6] registered
Oct 8 20:19:02.982202 kernel: acpiphp: Slot [7] registered
Oct 8 20:19:02.982223 kernel: acpiphp: Slot [8] registered
Oct 8 20:19:02.982255 kernel: acpiphp: Slot [9] registered
Oct 8 20:19:02.982276 kernel: acpiphp: Slot [10] registered
Oct 8 20:19:02.982298 kernel: acpiphp: Slot [11] registered
Oct 8 20:19:02.982319 kernel: acpiphp: Slot [12] registered
Oct 8 20:19:02.982340 kernel: acpiphp: Slot [13] registered
Oct 8 20:19:02.982361 kernel: acpiphp: Slot [14] registered
Oct 8 20:19:02.982382 kernel: acpiphp: Slot [15] registered
Oct 8 20:19:02.982402 kernel: acpiphp: Slot [16] registered
Oct 8 20:19:02.982423 kernel: acpiphp: Slot [17] registered
Oct 8 20:19:02.982448 kernel: acpiphp: Slot [18] registered
Oct 8 20:19:02.982469 kernel: acpiphp: Slot [19] registered
Oct 8 20:19:02.982490 kernel: acpiphp: Slot [20] registered
Oct 8 20:19:02.982511 kernel: acpiphp: Slot [21] registered
Oct 8 20:19:02.982531 kernel: acpiphp: Slot [22] registered
Oct 8 20:19:02.982552 kernel: acpiphp: Slot [23] registered
Oct 8 20:19:02.982573 kernel: acpiphp: Slot [24] registered
Oct 8 20:19:02.982594 kernel: acpiphp: Slot [25] registered
Oct 8 20:19:02.982615 kernel: acpiphp: Slot [26] registered
Oct 8 20:19:02.982637 kernel: acpiphp: Slot [27] registered
Oct 8 20:19:02.982662 kernel: acpiphp: Slot [28] registered
Oct 8 20:19:02.982683 kernel: acpiphp: Slot [29] registered
Oct 8 20:19:02.982704 kernel: acpiphp: Slot [30] registered
Oct 8 20:19:02.982725 kernel: acpiphp: Slot [31] registered
Oct 8 20:19:02.982746 kernel: PCI host bridge to bus 0000:00
Oct 8 20:19:02.983052 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Oct 8 20:19:02.983269 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Oct 8 20:19:02.983562 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 8 20:19:02.983876 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Oct 8 20:19:02.986259 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Oct 8 20:19:02.986566 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 8 20:19:02.986857 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Oct 8 20:19:02.987161 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Oct 8 20:19:02.987394 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Oct 8 20:19:02.987629 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f]
Oct 8 20:19:02.987845 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Oct 8 20:19:02.990183 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Oct 8 20:19:02.990478 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Oct 8 20:19:02.990758 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Oct 8 20:19:02.991161 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Oct 8 20:19:02.991434 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Oct 8 20:19:02.991724 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Oct 8 20:19:02.995625 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Oct 8 20:19:02.995778 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Oct 8 20:19:02.995885 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Oct 8 20:19:02.996021 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
Oct 8 20:19:02.996128 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
Oct 8 20:19:02.996259 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct 8 20:19:02.996395 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Oct 8 20:19:02.996501 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf]
Oct 8 20:19:02.996608 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
Oct 8 20:19:02.996715 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Oct 8 20:19:02.996820 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
Oct 8 20:19:02.996966 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Oct 8 20:19:02.997083 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Oct 8 20:19:02.997187 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
Oct 8 20:19:02.997288 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Oct 8 20:19:02.997400 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
Oct 8 20:19:02.997536 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff]
Oct 8 20:19:02.997645 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Oct 8 20:19:02.997761 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
Oct 8 20:19:02.997893 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f]
Oct 8 20:19:02.998027 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Oct 8 20:19:02.998045 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct 8 20:19:02.998056 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct 8 20:19:02.998068 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct 8 20:19:02.998079 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct 8 20:19:02.998090 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Oct 8 20:19:02.998101 kernel: iommu: Default domain type: Translated
Oct 8 20:19:02.998112 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 8 20:19:02.998128 kernel: PCI: Using ACPI for IRQ routing
Oct 8 20:19:02.998139 kernel: PCI: pci_cache_line_size set to 64 bytes
Oct 8 20:19:02.998150 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Oct 8 20:19:02.998161 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Oct 8 20:19:02.998277 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Oct 8 20:19:02.998382 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Oct 8 20:19:02.998488 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct 8 20:19:02.998504 kernel: vgaarb: loaded
Oct 8 20:19:02.998519 kernel: clocksource: Switched to clocksource kvm-clock
Oct 8 20:19:02.998534 kernel: VFS: Disk quotas dquot_6.6.0
Oct 8 20:19:02.998550 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 8 20:19:02.998565 kernel: pnp: PnP ACPI init
Oct 8 20:19:02.998701 kernel: pnp 00:03: [dma 2]
Oct 8 20:19:02.998728 kernel: pnp: PnP ACPI: found 5 devices
Oct 8 20:19:02.998740 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 8 20:19:02.998751 kernel: NET: Registered PF_INET protocol family
Oct 8 20:19:02.998762 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 8 20:19:02.998778 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Oct 8 20:19:02.998789 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 8 20:19:02.998800 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 8 20:19:02.998811 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Oct 8 20:19:02.998822 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Oct 8 20:19:02.998833 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Oct 8 20:19:02.998844 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Oct 8 20:19:02.998855 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 8 20:19:02.998866 kernel: NET: Registered PF_XDP protocol family
Oct 8 20:19:02.998996 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Oct 8 20:19:02.999091 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Oct 8 20:19:02.999181 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct 8 20:19:02.999273 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Oct 8 20:19:02.999363 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Oct 8 20:19:02.999469 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Oct 8 20:19:02.999577 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Oct 8 20:19:02.999598 kernel: PCI: CLS 0 bytes, default 64
Oct 8 20:19:02.999609 kernel: Initialise system trusted keyrings
Oct 8 20:19:02.999620 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Oct 8 20:19:02.999631 kernel: Key type asymmetric registered
Oct 8 20:19:02.999642 kernel: Asymmetric key parser 'x509' registered
Oct 8 20:19:02.999653 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Oct 8 20:19:02.999663 kernel: io scheduler mq-deadline registered
Oct 8 20:19:02.999674 kernel: io scheduler kyber registered
Oct 8 20:19:02.999685 kernel: io scheduler bfq registered
Oct 8 20:19:02.999699 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Oct 8 20:19:02.999711 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Oct 8 20:19:02.999722 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Oct 8 20:19:02.999733 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Oct 8 20:19:02.999744 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Oct 8 20:19:02.999757 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 8 20:19:02.999772 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 8 20:19:02.999789 kernel: random: crng init done
Oct 8 20:19:02.999805 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct 8 20:19:02.999817 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct 8 20:19:02.999832 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct 8 20:19:02.999843 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Oct 8 20:19:03.003481 kernel: rtc_cmos 00:04: RTC can wake from S4
Oct 8 20:19:03.003703 kernel: rtc_cmos 00:04: registered as rtc0
Oct 8 20:19:03.003799 kernel: rtc_cmos 00:04: setting system clock to 2024-10-08T20:19:02 UTC (1728418742)
Oct 8 20:19:03.003899 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Oct 8 20:19:03.003914 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Oct 8 20:19:03.006067 kernel: NET: Registered PF_INET6 protocol family
Oct 8 20:19:03.006104 kernel: Segment Routing with IPv6
Oct 8 20:19:03.006115 kernel: In-situ OAM (IOAM) with IPv6
Oct 8 20:19:03.006125 kernel: NET: Registered PF_PACKET protocol family
Oct 8 20:19:03.006136 kernel: Key type dns_resolver registered
Oct 8 20:19:03.006145 kernel: IPI shorthand broadcast: enabled
Oct 8 20:19:03.006155 kernel: sched_clock: Marking stable (948008877, 123822210)->(1085329526, -13498439)
Oct 8 20:19:03.006165 kernel: registered taskstats version 1
Oct 8 20:19:03.006175 kernel: Loading compiled-in X.509 certificates
Oct 8 20:19:03.006189 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.54-flatcar: 14ce23fc5070d0471461f1dd6e298a5588e7ba8f'
Oct 8 20:19:03.006199 kernel: Key type .fscrypt registered
Oct 8 20:19:03.006208 kernel: Key type fscrypt-provisioning registered
Oct 8 20:19:03.006218 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 8 20:19:03.006227 kernel: ima: Allocated hash algorithm: sha1
Oct 8 20:19:03.006237 kernel: ima: No architecture policies found
Oct 8 20:19:03.006247 kernel: clk: Disabling unused clocks
Oct 8 20:19:03.006256 kernel: Freeing unused kernel image (initmem) memory: 42828K
Oct 8 20:19:03.006266 kernel: Write protecting the kernel read-only data: 36864k
Oct 8 20:19:03.006278 kernel: Freeing unused kernel image (rodata/data gap) memory: 1860K
Oct 8 20:19:03.006288 kernel: Run /init as init process
Oct 8 20:19:03.006297 kernel: with arguments:
Oct 8 20:19:03.006307 kernel: /init
Oct 8 20:19:03.006316 kernel: with environment:
Oct 8 20:19:03.006326 kernel: HOME=/
Oct 8 20:19:03.006335 kernel: TERM=linux
Oct 8 20:19:03.006344 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Oct 8 20:19:03.006362 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 8 20:19:03.006378 systemd[1]: Detected virtualization kvm.
Oct 8 20:19:03.006389 systemd[1]: Detected architecture x86-64.
Oct 8 20:19:03.006399 systemd[1]: Running in initrd.
Oct 8 20:19:03.006409 systemd[1]: No hostname configured, using default hostname.
Oct 8 20:19:03.006420 systemd[1]: Hostname set to .
Oct 8 20:19:03.006430 systemd[1]: Initializing machine ID from VM UUID.
Oct 8 20:19:03.006441 systemd[1]: Queued start job for default target initrd.target.
Oct 8 20:19:03.006453 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 8 20:19:03.006464 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 8 20:19:03.006484 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Oct 8 20:19:03.006496 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 8 20:19:03.006506 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Oct 8 20:19:03.006517 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Oct 8 20:19:03.006529 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Oct 8 20:19:03.006542 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Oct 8 20:19:03.006553 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 8 20:19:03.006564 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 8 20:19:03.006574 systemd[1]: Reached target paths.target - Path Units.
Oct 8 20:19:03.006594 systemd[1]: Reached target slices.target - Slice Units.
Oct 8 20:19:03.006611 systemd[1]: Reached target swap.target - Swaps.
Oct 8 20:19:03.006625 systemd[1]: Reached target timers.target - Timer Units.
Oct 8 20:19:03.006635 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Oct 8 20:19:03.006646 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 8 20:19:03.006657 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Oct 8 20:19:03.006668 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Oct 8 20:19:03.006681 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 8 20:19:03.006693 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 8 20:19:03.006703 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 8 20:19:03.006716 systemd[1]: Reached target sockets.target - Socket Units.
Oct 8 20:19:03.006727 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Oct 8 20:19:03.006737 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 8 20:19:03.006748 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Oct 8 20:19:03.006759 systemd[1]: Starting systemd-fsck-usr.service...
Oct 8 20:19:03.006769 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 8 20:19:03.006780 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 8 20:19:03.006791 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 20:19:03.006873 systemd-journald[183]: Collecting audit messages is disabled.
Oct 8 20:19:03.006905 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Oct 8 20:19:03.006917 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 8 20:19:03.006949 systemd[1]: Finished systemd-fsck-usr.service.
Oct 8 20:19:03.006965 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 8 20:19:03.007472 systemd-journald[183]: Journal started
Oct 8 20:19:03.007533 systemd-journald[183]: Runtime Journal (/run/log/journal/e4b962570e9f4a628291dd8987ded63c) is 4.9M, max 39.3M, 34.4M free.
Oct 8 20:19:03.005421 systemd-modules-load[184]: Inserted module 'overlay'
Oct 8 20:19:03.041841 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 8 20:19:03.044446 kernel: Bridge firewalling registered
Oct 8 20:19:03.044186 systemd-modules-load[184]: Inserted module 'br_netfilter'
Oct 8 20:19:03.047263 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 8 20:19:03.049514 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 8 20:19:03.052259 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 20:19:03.053522 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 8 20:19:03.061105 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 8 20:19:03.063051 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 8 20:19:03.067253 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 8 20:19:03.069148 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 8 20:19:03.083284 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 8 20:19:03.087114 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 8 20:19:03.091637 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 8 20:19:03.100129 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 8 20:19:03.102164 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 20:19:03.104720 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Oct 8 20:19:03.124780 dracut-cmdline[219]: dracut-dracut-053
Oct 8 20:19:03.128395 dracut-cmdline[219]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=ed527eaf992abc270af9987554566193214d123941456fd3066b47855e5178a5
Oct 8 20:19:03.132433 systemd-resolved[215]: Positive Trust Anchors:
Oct 8 20:19:03.132460 systemd-resolved[215]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 8 20:19:03.132508 systemd-resolved[215]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 8 20:19:03.136397 systemd-resolved[215]: Defaulting to hostname 'linux'.
Oct 8 20:19:03.138507 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 8 20:19:03.140019 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 8 20:19:03.226977 kernel: SCSI subsystem initialized
Oct 8 20:19:03.238953 kernel: Loading iSCSI transport class v2.0-870.
Oct 8 20:19:03.252976 kernel: iscsi: registered transport (tcp)
Oct 8 20:19:03.278049 kernel: iscsi: registered transport (qla4xxx)
Oct 8 20:19:03.278167 kernel: QLogic iSCSI HBA Driver
Oct 8 20:19:03.346004 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Oct 8 20:19:03.355154 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Oct 8 20:19:03.412350 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 8 20:19:03.412508 kernel: device-mapper: uevent: version 1.0.3
Oct 8 20:19:03.412548 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Oct 8 20:19:03.461056 kernel: raid6: sse2x4 gen() 12147 MB/s
Oct 8 20:19:03.479053 kernel: raid6: sse2x2 gen() 13498 MB/s
Oct 8 20:19:03.496274 kernel: raid6: sse2x1 gen() 9155 MB/s
Oct 8 20:19:03.496460 kernel: raid6: using algorithm sse2x2 gen() 13498 MB/s
Oct 8 20:19:03.514313 kernel: raid6: .... xor() 8823 MB/s, rmw enabled
Oct 8 20:19:03.514430 kernel: raid6: using ssse3x2 recovery algorithm
Oct 8 20:19:03.538319 kernel: xor: measuring software checksum speed
Oct 8 20:19:03.538845 kernel: prefetch64-sse : 17268 MB/sec
Oct 8 20:19:03.540233 kernel: generic_sse : 14854 MB/sec
Oct 8 20:19:03.540320 kernel: xor: using function: prefetch64-sse (17268 MB/sec)
Oct 8 20:19:03.730032 kernel: Btrfs loaded, zoned=no, fsverity=no
Oct 8 20:19:03.744480 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Oct 8 20:19:03.752115 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 8 20:19:03.795122 systemd-udevd[401]: Using default interface naming scheme 'v255'.
Oct 8 20:19:03.806517 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 8 20:19:03.816237 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Oct 8 20:19:03.845238 dracut-pre-trigger[410]: rd.md=0: removing MD RAID activation
Oct 8 20:19:03.876596 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 8 20:19:03.883161 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 8 20:19:03.943177 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 8 20:19:03.949130 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Oct 8 20:19:03.976306 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Oct 8 20:19:03.977736 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 8 20:19:03.979285 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 8 20:19:03.981263 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 8 20:19:03.989168 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Oct 8 20:19:04.007427 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Oct 8 20:19:04.034952 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues
Oct 8 20:19:04.046116 kernel: virtio_blk virtio2: [vda] 41943040 512-byte logical blocks (21.5 GB/20.0 GiB)
Oct 8 20:19:04.059453 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Oct 8 20:19:04.059599 kernel: GPT:17805311 != 41943039
Oct 8 20:19:04.059614 kernel: GPT:Alternate GPT header not at the end of the disk.
Oct 8 20:19:04.060630 kernel: GPT:17805311 != 41943039
Oct 8 20:19:04.066191 kernel: GPT: Use GNU Parted to correct GPT errors.
Oct 8 20:19:04.066251 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 8 20:19:04.073463 kernel: libata version 3.00 loaded.
Oct 8 20:19:04.070723 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 8 20:19:04.071230 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 20:19:04.076764 kernel: ata_piix 0000:00:01.1: version 2.13
Oct 8 20:19:04.072168 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 8 20:19:04.072762 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 8 20:19:04.072913 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 20:19:04.076008 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 20:19:04.083968 kernel: scsi host0: ata_piix
Oct 8 20:19:04.085787 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 20:19:04.088958 kernel: scsi host1: ata_piix
Oct 8 20:19:04.097765 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14
Oct 8 20:19:04.098160 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15
Oct 8 20:19:04.116056 kernel: BTRFS: device fsid a8680da2-059a-4648-a8e8-f62925ab33ec devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (458)
Oct 8 20:19:04.119962 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by (udev-worker) (457)
Oct 8 20:19:04.137994 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Oct 8 20:19:04.174982 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 20:19:04.191417 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Oct 8 20:19:04.219363 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 8 20:19:04.231344 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Oct 8 20:19:04.232679 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Oct 8 20:19:04.244172 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Oct 8 20:19:04.249135 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 8 20:19:04.262339 disk-uuid[500]: Primary Header is updated.
Oct 8 20:19:04.262339 disk-uuid[500]: Secondary Entries is updated.
Oct 8 20:19:04.262339 disk-uuid[500]: Secondary Header is updated.
Oct 8 20:19:04.273953 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 8 20:19:04.287446 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 20:19:04.293915 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 8 20:19:05.307609 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 8 20:19:05.307728 disk-uuid[501]: The operation has completed successfully.
Oct 8 20:19:05.383133 systemd[1]: disk-uuid.service: Deactivated successfully.
Oct 8 20:19:05.383631 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Oct 8 20:19:05.435111 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Oct 8 20:19:05.439893 sh[525]: Success
Oct 8 20:19:05.459953 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3"
Oct 8 20:19:05.524864 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Oct 8 20:19:05.536494 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Oct 8 20:19:05.539651 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Oct 8 20:19:05.578137 kernel: BTRFS info (device dm-0): first mount of filesystem a8680da2-059a-4648-a8e8-f62925ab33ec
Oct 8 20:19:05.578269 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Oct 8 20:19:05.581969 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Oct 8 20:19:05.585326 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Oct 8 20:19:05.588071 kernel: BTRFS info (device dm-0): using free space tree
Oct 8 20:19:05.601614 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Oct 8 20:19:05.602598 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Oct 8 20:19:05.612095 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Oct 8 20:19:05.615083 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Oct 8 20:19:05.630227 kernel: BTRFS info (device vda6): first mount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6
Oct 8 20:19:05.630337 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 8 20:19:05.632110 kernel: BTRFS info (device vda6): using free space tree
Oct 8 20:19:05.636967 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 8 20:19:05.651152 systemd[1]: mnt-oem.mount: Deactivated successfully.
Oct 8 20:19:05.652334 kernel: BTRFS info (device vda6): last unmount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6
Oct 8 20:19:05.669341 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Oct 8 20:19:05.677184 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Oct 8 20:19:05.752763 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 8 20:19:05.761164 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 8 20:19:05.782860 systemd-networkd[707]: lo: Link UP
Oct 8 20:19:05.782869 systemd-networkd[707]: lo: Gained carrier
Oct 8 20:19:05.784162 systemd-networkd[707]: Enumeration completed
Oct 8 20:19:05.784261 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 8 20:19:05.784841 systemd-networkd[707]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 20:19:05.784845 systemd-networkd[707]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 8 20:19:05.784869 systemd[1]: Reached target network.target - Network.
Oct 8 20:19:05.786252 systemd-networkd[707]: eth0: Link UP
Oct 8 20:19:05.786256 systemd-networkd[707]: eth0: Gained carrier
Oct 8 20:19:05.786263 systemd-networkd[707]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 20:19:05.803999 systemd-networkd[707]: eth0: DHCPv4 address 172.24.4.55/24, gateway 172.24.4.1 acquired from 172.24.4.1
Oct 8 20:19:05.844914 ignition[620]: Ignition 2.19.0
Oct 8 20:19:05.844950 ignition[620]: Stage: fetch-offline
Oct 8 20:19:05.845043 ignition[620]: no configs at "/usr/lib/ignition/base.d"
Oct 8 20:19:05.845053 ignition[620]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Oct 8 20:19:05.845242 ignition[620]: parsed url from cmdline: ""
Oct 8 20:19:05.845247 ignition[620]: no config URL provided
Oct 8 20:19:05.848368 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 8 20:19:05.845253 ignition[620]: reading system config file "/usr/lib/ignition/user.ign"
Oct 8 20:19:05.845261 ignition[620]: no config at "/usr/lib/ignition/user.ign"
Oct 8 20:19:05.845266 ignition[620]: failed to fetch config: resource requires networking
Oct 8 20:19:05.846880 ignition[620]: Ignition finished successfully
Oct 8 20:19:05.855142 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Oct 8 20:19:05.869565 ignition[718]: Ignition 2.19.0
Oct 8 20:19:05.869578 ignition[718]: Stage: fetch
Oct 8 20:19:05.869820 ignition[718]: no configs at "/usr/lib/ignition/base.d"
Oct 8 20:19:05.869835 ignition[718]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Oct 8 20:19:05.869978 ignition[718]: parsed url from cmdline: ""
Oct 8 20:19:05.869983 ignition[718]: no config URL provided
Oct 8 20:19:05.869989 ignition[718]: reading system config file "/usr/lib/ignition/user.ign"
Oct 8 20:19:05.869999 ignition[718]: no config at "/usr/lib/ignition/user.ign"
Oct 8 20:19:05.870129 ignition[718]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Oct 8 20:19:05.870216 ignition[718]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Oct 8 20:19:05.870245 ignition[718]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Oct 8 20:19:06.189134 ignition[718]: GET result: OK
Oct 8 20:19:06.189899 ignition[718]: parsing config with SHA512: 85b6e9f2e2ece732fc237b516f09431ca32166961024af0ac0e9c7491e140aec892450067388effc980560f90028593a1d267fed2fb3a234a0951043d4693c3c
Oct 8 20:19:06.199884 unknown[718]: fetched base config from "system"
Oct 8 20:19:06.199913 unknown[718]: fetched base config from "system"
Oct 8 20:19:06.201125 ignition[718]: fetch: fetch complete
Oct 8 20:19:06.199970 unknown[718]: fetched user config from "openstack"
Oct 8 20:19:06.201137 ignition[718]: fetch: fetch passed
Oct 8 20:19:06.204600 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Oct 8 20:19:06.201229 ignition[718]: Ignition finished successfully
Oct 8 20:19:06.215297 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Oct 8 20:19:06.261692 ignition[724]: Ignition 2.19.0
Oct 8 20:19:06.263273 ignition[724]: Stage: kargs
Oct 8 20:19:06.264695 ignition[724]: no configs at "/usr/lib/ignition/base.d"
Oct 8 20:19:06.264728 ignition[724]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Oct 8 20:19:06.266890 ignition[724]: kargs: kargs passed
Oct 8 20:19:06.268763 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Oct 8 20:19:06.267015 ignition[724]: Ignition finished successfully
Oct 8 20:19:06.279311 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Oct 8 20:19:06.306786 ignition[731]: Ignition 2.19.0
Oct 8 20:19:06.306814 ignition[731]: Stage: disks
Oct 8 20:19:06.307306 ignition[731]: no configs at "/usr/lib/ignition/base.d"
Oct 8 20:19:06.307328 ignition[731]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Oct 8 20:19:06.311211 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Oct 8 20:19:06.309791 ignition[731]: disks: disks passed
Oct 8 20:19:06.313538 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Oct 8 20:19:06.309839 ignition[731]: Ignition finished successfully
Oct 8 20:19:06.314708 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Oct 8 20:19:06.316517 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 8 20:19:06.318101 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 8 20:19:06.320114 systemd[1]: Reached target basic.target - Basic System.
Oct 8 20:19:06.329103 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Oct 8 20:19:06.354468 systemd-fsck[739]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Oct 8 20:19:06.367235 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Oct 8 20:19:06.375119 systemd[1]: Mounting sysroot.mount - /sysroot...
Oct 8 20:19:06.576012 kernel: EXT4-fs (vda9): mounted filesystem 1df90f14-3ad0-4280-9b7d-a34f65d70e4d r/w with ordered data mode. Quota mode: none.
Oct 8 20:19:06.579019 systemd[1]: Mounted sysroot.mount - /sysroot.
Oct 8 20:19:06.581474 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Oct 8 20:19:06.595111 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 8 20:19:06.600079 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Oct 8 20:19:06.600935 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Oct 8 20:19:06.605131 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Oct 8 20:19:06.606641 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Oct 8 20:19:06.607828 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 8 20:19:06.621099 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (747)
Oct 8 20:19:06.621156 kernel: BTRFS info (device vda6): first mount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6
Oct 8 20:19:06.621187 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 8 20:19:06.621217 kernel: BTRFS info (device vda6): using free space tree
Oct 8 20:19:06.622715 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Oct 8 20:19:06.630042 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 8 20:19:06.631442 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Oct 8 20:19:06.635621 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 8 20:19:06.785141 initrd-setup-root[774]: cut: /sysroot/etc/passwd: No such file or directory
Oct 8 20:19:06.792375 initrd-setup-root[782]: cut: /sysroot/etc/group: No such file or directory
Oct 8 20:19:06.800710 initrd-setup-root[789]: cut: /sysroot/etc/shadow: No such file or directory
Oct 8 20:19:06.814690 initrd-setup-root[796]: cut: /sysroot/etc/gshadow: No such file or directory
Oct 8 20:19:06.972136 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Oct 8 20:19:06.988107 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Oct 8 20:19:06.993238 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Oct 8 20:19:07.014699 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Oct 8 20:19:07.016987 kernel: BTRFS info (device vda6): last unmount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6
Oct 8 20:19:07.061786 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Oct 8 20:19:07.076952 ignition[863]: INFO : Ignition 2.19.0
Oct 8 20:19:07.076952 ignition[863]: INFO : Stage: mount
Oct 8 20:19:07.076952 ignition[863]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 8 20:19:07.076952 ignition[863]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Oct 8 20:19:07.080977 ignition[863]: INFO : mount: mount passed
Oct 8 20:19:07.080977 ignition[863]: INFO : Ignition finished successfully
Oct 8 20:19:07.081433 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Oct 8 20:19:07.470095 systemd-networkd[707]: eth0: Gained IPv6LL
Oct 8 20:19:13.894470 coreos-metadata[749]: Oct 08 20:19:13.894 WARN failed to locate config-drive, using the metadata service API instead
Oct 8 20:19:13.935348 coreos-metadata[749]: Oct 08 20:19:13.935 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Oct 8 20:19:13.952681 coreos-metadata[749]: Oct 08 20:19:13.952 INFO Fetch successful
Oct 8 20:19:13.954167 coreos-metadata[749]: Oct 08 20:19:13.953 INFO wrote hostname ci-4081-1-0-6-0b75032dd1.novalocal to /sysroot/etc/hostname
Oct 8 20:19:13.959649 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Oct 8 20:19:13.959977 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Oct 8 20:19:13.970169 systemd[1]: Starting ignition-files.service - Ignition (files)...
Oct 8 20:19:14.006267 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 8 20:19:14.021994 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (880)
Oct 8 20:19:14.028232 kernel: BTRFS info (device vda6): first mount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6
Oct 8 20:19:14.028305 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 8 20:19:14.031218 kernel: BTRFS info (device vda6): using free space tree
Oct 8 20:19:14.039985 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 8 20:19:14.049778 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 8 20:19:14.099521 ignition[898]: INFO : Ignition 2.19.0
Oct 8 20:19:14.099521 ignition[898]: INFO : Stage: files
Oct 8 20:19:14.102507 ignition[898]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 8 20:19:14.102507 ignition[898]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Oct 8 20:19:14.107570 ignition[898]: DEBUG : files: compiled without relabeling support, skipping
Oct 8 20:19:14.107570 ignition[898]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Oct 8 20:19:14.107570 ignition[898]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Oct 8 20:19:14.114171 ignition[898]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Oct 8 20:19:14.116549 ignition[898]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Oct 8 20:19:14.118612 ignition[898]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Oct 8 20:19:14.116808 unknown[898]: wrote ssh authorized keys file for user: core
Oct 8 20:19:14.122271 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Oct 8 20:19:14.122271 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Oct 8 20:19:14.122271 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Oct 8 20:19:14.122271 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Oct 8 20:19:14.192028 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Oct 8 20:19:14.482383 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Oct 8 20:19:14.482383 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Oct 8 20:19:14.487388 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Oct 8 20:19:14.487388 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Oct 8 20:19:14.487388 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Oct 8 20:19:14.487388 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 8 20:19:14.487388 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 8 20:19:14.487388 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 8 20:19:14.487388 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 8 20:19:14.487388 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Oct 8 20:19:14.487388 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Oct 8 20:19:14.487388 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Oct 8 20:19:14.487388 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Oct 8 20:19:14.487388 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Oct 8 20:19:14.487388 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Oct 8 20:19:15.038170 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Oct 8 20:19:16.585602 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Oct 8 20:19:16.587169 ignition[898]: INFO : files: op(c): [started] processing unit "containerd.service"
Oct 8 20:19:16.589057 ignition[898]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Oct 8 20:19:16.589057 ignition[898]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Oct 8 20:19:16.589057 ignition[898]: INFO : files: op(c): [finished] processing unit "containerd.service"
Oct 8 20:19:16.589057 ignition[898]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Oct 8 20:19:16.589057 ignition[898]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 8 20:19:16.589057 ignition[898]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 8 20:19:16.589057 ignition[898]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Oct 8 20:19:16.589057 ignition[898]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Oct 8 20:19:16.589057 ignition[898]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Oct 8 20:19:16.589057 ignition[898]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 8 20:19:16.589057 ignition[898]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct 8 20:19:16.589057 ignition[898]: INFO : files: files passed
Oct 8 20:19:16.589057 ignition[898]: INFO : Ignition finished successfully
Oct 8 20:19:16.591288 systemd[1]: Finished ignition-files.service - Ignition (files).
Oct 8 20:19:16.599095 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Oct 8 20:19:16.604600 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Oct 8 20:19:16.627857 initrd-setup-root-after-ignition[925]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 8 20:19:16.627857 initrd-setup-root-after-ignition[925]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Oct 8 20:19:16.611547 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 8 20:19:16.633286 initrd-setup-root-after-ignition[929]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 8 20:19:16.611671 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Oct 8 20:19:16.631316 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 8 20:19:16.635538 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Oct 8 20:19:16.648451 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Oct 8 20:19:16.677098 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 8 20:19:16.677317 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Oct 8 20:19:16.683767 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Oct 8 20:19:16.685601 systemd[1]: Reached target initrd.target - Initrd Default Target.
Oct 8 20:19:16.687471 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Oct 8 20:19:16.692266 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Oct 8 20:19:16.709337 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 8 20:19:16.717316 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Oct 8 20:19:16.740349 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Oct 8 20:19:16.742225 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 8 20:19:16.744083 systemd[1]: Stopped target timers.target - Timer Units.
Oct 8 20:19:16.745369 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 8 20:19:16.745507 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 8 20:19:16.747108 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Oct 8 20:19:16.747796 systemd[1]: Stopped target basic.target - Basic System.
Oct 8 20:19:16.748781 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Oct 8 20:19:16.749791 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 8 20:19:16.750987 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Oct 8 20:19:16.752095 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Oct 8 20:19:16.753249 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 8 20:19:16.754431 systemd[1]: Stopped target sysinit.target - System Initialization.
Oct 8 20:19:16.755589 systemd[1]: Stopped target local-fs.target - Local File Systems.
Oct 8 20:19:16.756673 systemd[1]: Stopped target swap.target - Swaps.
Oct 8 20:19:16.757589 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 8 20:19:16.757711 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Oct 8 20:19:16.758866 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Oct 8 20:19:16.759590 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 8 20:19:16.760662 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Oct 8 20:19:16.760787 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 8 20:19:16.761754 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 8 20:19:16.761865 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Oct 8 20:19:16.763144 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 8 20:19:16.763278 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 8 20:19:16.763945 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 8 20:19:16.764063 systemd[1]: Stopped ignition-files.service - Ignition (files).
Oct 8 20:19:16.773193 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Oct 8 20:19:16.778206 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Oct 8 20:19:16.779098 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 8 20:19:16.779384 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 8 20:19:16.782675 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 8 20:19:16.784265 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 8 20:19:16.795115 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 8 20:19:16.795272 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Oct 8 20:19:16.806368 ignition[950]: INFO : Ignition 2.19.0
Oct 8 20:19:16.806368 ignition[950]: INFO : Stage: umount
Oct 8 20:19:16.810618 ignition[950]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 8 20:19:16.810618 ignition[950]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Oct 8 20:19:16.810618 ignition[950]: INFO : umount: umount passed
Oct 8 20:19:16.810618 ignition[950]: INFO : Ignition finished successfully
Oct 8 20:19:16.811522 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 8 20:19:16.811911 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Oct 8 20:19:16.816281 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 8 20:19:16.816339 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Oct 8 20:19:16.819063 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 8 20:19:16.819118 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Oct 8 20:19:16.820021 systemd[1]: ignition-fetch.service: Deactivated successfully.
Oct 8 20:19:16.820079 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Oct 8 20:19:16.820846 systemd[1]: Stopped target network.target - Network.
Oct 8 20:19:16.821435 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 8 20:19:16.821489 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 8 20:19:16.823253 systemd[1]: Stopped target paths.target - Path Units.
Oct 8 20:19:16.825016 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 8 20:19:16.825256 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 8 20:19:16.826335 systemd[1]: Stopped target slices.target - Slice Units.
Oct 8 20:19:16.827421 systemd[1]: Stopped target sockets.target - Socket Units.
Oct 8 20:19:16.828706 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 8 20:19:16.828770 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Oct 8 20:19:16.830028 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 8 20:19:16.830082 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 8 20:19:16.831106 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 8 20:19:16.831176 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Oct 8 20:19:16.832513 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Oct 8 20:19:16.832601 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Oct 8 20:19:16.834652 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Oct 8 20:19:16.835910 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Oct 8 20:19:16.839155 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 8 20:19:16.840085 systemd-networkd[707]: eth0: DHCPv6 lease lost
Oct 8 20:19:16.843612 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 8 20:19:16.844167 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Oct 8 20:19:16.845535 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 8 20:19:16.845659 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Oct 8 20:19:16.849598 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 8 20:19:16.849880 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Oct 8 20:19:16.856097 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Oct 8 20:19:16.857329 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 8 20:19:16.857420 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 8 20:19:16.859741 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 8 20:19:16.859813 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Oct 8 20:19:16.861735 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 8 20:19:16.861808 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Oct 8 20:19:16.867379 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 8 20:19:16.867448 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 8 20:19:16.868749 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 8 20:19:16.879303 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 8 20:19:16.879487 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 8 20:19:16.880691 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 8 20:19:16.880823 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Oct 8 20:19:16.882469 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 8 20:19:16.882561 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Oct 8 20:19:16.884102 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 8 20:19:16.884176 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 8 20:19:16.885312 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 8 20:19:16.885387 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Oct 8 20:19:16.887267 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 8 20:19:16.887324 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Oct 8 20:19:16.888624 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 8 20:19:16.888685 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 20:19:16.898398 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Oct 8 20:19:16.899127 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 8 20:19:16.899198 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 8 20:19:16.899996 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Oct 8 20:19:16.900063 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 8 20:19:16.901631 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 8 20:19:16.901697 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 8 20:19:16.906024 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 8 20:19:16.906102 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 20:19:16.909159 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 8 20:19:16.909305 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Oct 8 20:19:17.219833 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 8 20:19:17.220178 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Oct 8 20:19:17.223585 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Oct 8 20:19:17.225414 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 8 20:19:17.225543 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Oct 8 20:19:17.235386 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Oct 8 20:19:17.257258 systemd[1]: Switching root.
Oct 8 20:19:17.299685 systemd-journald[183]: Journal stopped
Oct 8 20:19:18.879255 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).
Oct 8 20:19:18.879304 kernel: SELinux: policy capability network_peer_controls=1
Oct 8 20:19:18.879322 kernel: SELinux: policy capability open_perms=1
Oct 8 20:19:18.879333 kernel: SELinux: policy capability extended_socket_class=1
Oct 8 20:19:18.879348 kernel: SELinux: policy capability always_check_network=0
Oct 8 20:19:18.879360 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 8 20:19:18.879372 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 8 20:19:18.879387 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 8 20:19:18.879398 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 8 20:19:18.879410 kernel: audit: type=1403 audit(1728418757.919:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 8 20:19:18.879425 systemd[1]: Successfully loaded SELinux policy in 71.972ms.
Oct 8 20:19:18.879443 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 25.366ms.
Oct 8 20:19:18.879462 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 8 20:19:18.879475 systemd[1]: Detected virtualization kvm.
Oct 8 20:19:18.879487 systemd[1]: Detected architecture x86-64.
Oct 8 20:19:18.879499 systemd[1]: Detected first boot.
Oct 8 20:19:18.879512 systemd[1]: Hostname set to .
Oct 8 20:19:18.879524 systemd[1]: Initializing machine ID from VM UUID.
Oct 8 20:19:18.879536 zram_generator::config[1010]: No configuration found.
Oct 8 20:19:18.879549 systemd[1]: Populated /etc with preset unit settings.
Oct 8 20:19:18.879563 systemd[1]: Queued start job for default target multi-user.target.
Oct 8 20:19:18.879576 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Oct 8 20:19:18.879589 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Oct 8 20:19:18.879605 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Oct 8 20:19:18.879617 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Oct 8 20:19:18.879629 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Oct 8 20:19:18.879642 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Oct 8 20:19:18.879654 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Oct 8 20:19:18.879667 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Oct 8 20:19:18.879681 systemd[1]: Created slice user.slice - User and Session Slice.
Oct 8 20:19:18.879693 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 8 20:19:18.879705 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 8 20:19:18.879717 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Oct 8 20:19:18.879731 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Oct 8 20:19:18.879743 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Oct 8 20:19:18.879756 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 8 20:19:18.879767 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Oct 8 20:19:18.879779 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 8 20:19:18.879794 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Oct 8 20:19:18.879806 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 8 20:19:18.879818 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 8 20:19:18.879830 systemd[1]: Reached target slices.target - Slice Units.
Oct 8 20:19:18.879841 systemd[1]: Reached target swap.target - Swaps.
Oct 8 20:19:18.879853 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Oct 8 20:19:18.879867 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Oct 8 20:19:18.879880 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Oct 8 20:19:18.879892 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Oct 8 20:19:18.879904 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 8 20:19:18.879920 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 8 20:19:18.879948 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 8 20:19:18.879961 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Oct 8 20:19:18.879973 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Oct 8 20:19:18.879985 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Oct 8 20:19:18.879997 systemd[1]: Mounting media.mount - External Media Directory...
Oct 8 20:19:18.880015 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 8 20:19:18.880027 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Oct 8 20:19:18.880039 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Oct 8 20:19:18.880052 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Oct 8 20:19:18.880065 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Oct 8 20:19:18.880077 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 20:19:18.880089 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 8 20:19:18.880102 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Oct 8 20:19:18.880116 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 8 20:19:18.880181 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 8 20:19:18.880195 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 8 20:19:18.880207 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Oct 8 20:19:18.880219 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 8 20:19:18.880231 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 8 20:19:18.880244 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Oct 8 20:19:18.880257 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Oct 8 20:19:18.880271 kernel: loop: module loaded
Oct 8 20:19:18.880283 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 8 20:19:18.880294 kernel: fuse: init (API version 7.39)
Oct 8 20:19:18.880306 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 8 20:19:18.880317 kernel: ACPI: bus type drm_connector registered
Oct 8 20:19:18.880331 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 8 20:19:18.880362 systemd-journald[1119]: Collecting audit messages is disabled.
Oct 8 20:19:18.880388 systemd-journald[1119]: Journal started
Oct 8 20:19:18.880416 systemd-journald[1119]: Runtime Journal (/run/log/journal/e4b962570e9f4a628291dd8987ded63c) is 4.9M, max 39.3M, 34.4M free.
Oct 8 20:19:18.888103 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Oct 8 20:19:18.894946 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 8 20:19:18.898268 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 8 20:19:18.911992 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 8 20:19:18.914767 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Oct 8 20:19:18.915504 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Oct 8 20:19:18.916235 systemd[1]: Mounted media.mount - External Media Directory.
Oct 8 20:19:18.916863 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Oct 8 20:19:18.917576 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Oct 8 20:19:18.918193 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Oct 8 20:19:18.919035 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Oct 8 20:19:18.919883 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 8 20:19:18.920799 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 8 20:19:18.921013 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Oct 8 20:19:18.921815 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 8 20:19:18.922139 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 8 20:19:18.922903 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 8 20:19:18.923091 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 8 20:19:18.923859 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 8 20:19:18.924208 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 8 20:19:18.925385 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 8 20:19:18.925541 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Oct 8 20:19:18.926291 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 8 20:19:18.926484 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 8 20:19:18.929435 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 8 20:19:18.930230 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 8 20:19:18.932342 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Oct 8 20:19:18.942083 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 8 20:19:18.947065 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Oct 8 20:19:18.952068 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Oct 8 20:19:18.952789 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 8 20:19:18.964583 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Oct 8 20:19:18.967045 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Oct 8 20:19:18.970048 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 8 20:19:18.979361 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Oct 8 20:19:18.980034 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 8 20:19:18.985168 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 8 20:19:18.994197 systemd-journald[1119]: Time spent on flushing to /var/log/journal/e4b962570e9f4a628291dd8987ded63c is 46.939ms for 923 entries.
Oct 8 20:19:18.994197 systemd-journald[1119]: System Journal (/var/log/journal/e4b962570e9f4a628291dd8987ded63c) is 8.0M, max 584.8M, 576.8M free.
Oct 8 20:19:19.061631 systemd-journald[1119]: Received client request to flush runtime journal.
Oct 8 20:19:18.997129 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 8 20:19:19.004677 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 8 20:19:19.006190 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Oct 8 20:19:19.008060 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Oct 8 20:19:19.018140 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Oct 8 20:19:19.031206 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Oct 8 20:19:19.031897 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Oct 8 20:19:19.052255 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 8 20:19:19.067378 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Oct 8 20:19:19.070454 systemd-tmpfiles[1168]: ACLs are not supported, ignoring.
Oct 8 20:19:19.071162 udevadm[1173]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Oct 8 20:19:19.071264 systemd-tmpfiles[1168]: ACLs are not supported, ignoring.
Oct 8 20:19:19.076983 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 8 20:19:19.086426 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Oct 8 20:19:19.133254 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Oct 8 20:19:19.146103 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 8 20:19:19.154581 systemd-tmpfiles[1190]: ACLs are not supported, ignoring.
Oct 8 20:19:19.154606 systemd-tmpfiles[1190]: ACLs are not supported, ignoring.
Oct 8 20:19:19.159435 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 8 20:19:19.803711 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Oct 8 20:19:19.812216 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 8 20:19:19.875376 systemd-udevd[1196]: Using default interface naming scheme 'v255'.
Oct 8 20:19:19.913202 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 8 20:19:19.920129 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 8 20:19:19.940197 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Oct 8 20:19:19.975143 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Oct 8 20:19:20.017954 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1201)
Oct 8 20:19:20.043058 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1201)
Oct 8 20:19:20.041280 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Oct 8 20:19:20.095003 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Oct 8 20:19:20.101961 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1199)
Oct 8 20:19:20.130991 kernel: ACPI: button: Power Button [PWRF]
Oct 8 20:19:20.132426 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 8 20:19:20.144970 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Oct 8 20:19:20.165979 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Oct 8 20:19:20.175036 systemd-networkd[1198]: lo: Link UP
Oct 8 20:19:20.175045 systemd-networkd[1198]: lo: Gained carrier
Oct 8 20:19:20.176409 systemd-networkd[1198]: Enumeration completed
Oct 8 20:19:20.176549 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 8 20:19:20.179049 systemd-networkd[1198]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 20:19:20.179059 systemd-networkd[1198]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 8 20:19:20.180380 systemd-networkd[1198]: eth0: Link UP
Oct 8 20:19:20.180390 systemd-networkd[1198]: eth0: Gained carrier
Oct 8 20:19:20.180404 systemd-networkd[1198]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 20:19:20.185196 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Oct 8 20:19:20.190984 systemd-networkd[1198]: eth0: DHCPv4 address 172.24.4.55/24, gateway 172.24.4.1 acquired from 172.24.4.1
Oct 8 20:19:20.214976 kernel: mousedev: PS/2 mouse device common for all mice
Oct 8 20:19:20.216418 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 20:19:20.233050 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Oct 8 20:19:20.233121 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Oct 8 20:19:20.236976 kernel: Console: switching to colour dummy device 80x25
Oct 8 20:19:20.237976 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Oct 8 20:19:20.238005 kernel: [drm] features: -context_init
Oct 8 20:19:20.240046 kernel: [drm] number of scanouts: 1
Oct 8 20:19:20.240085 kernel: [drm] number of cap sets: 0
Oct 8 20:19:20.241595 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Oct 8 20:19:20.252304 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Oct 8 20:19:20.252414 kernel: Console: switching to colour frame buffer device 128x48
Oct 8 20:19:20.258972 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Oct 8 20:19:20.260507 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 8 20:19:20.260804 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 20:19:20.270213 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 20:19:20.283143 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Oct 8 20:19:20.289055 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Oct 8 20:19:20.291295 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 8 20:19:20.292040 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 20:19:20.293631 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 20:19:20.311483 lvm[1245]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 8 20:19:20.343764 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Oct 8 20:19:20.344033 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 8 20:19:20.351296 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Oct 8 20:19:20.356428 lvm[1251]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 8 20:19:20.382033 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Oct 8 20:19:20.382420 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Oct 8 20:19:20.382516 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 8 20:19:20.382535 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 8 20:19:20.382593 systemd[1]: Reached target machines.target - Containers.
Oct 8 20:19:20.384154 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Oct 8 20:19:20.388108 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Oct 8 20:19:20.392829 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Oct 8 20:19:20.393141 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 8 20:19:20.406238 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Oct 8 20:19:20.421086 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Oct 8 20:19:20.430979 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Oct 8 20:19:20.434187 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 20:19:20.434565 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Oct 8 20:19:20.442661 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Oct 8 20:19:20.464016 kernel: loop0: detected capacity change from 0 to 140768
Oct 8 20:19:20.498669 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Oct 8 20:19:20.499982 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Oct 8 20:19:20.553988 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Oct 8 20:19:20.579032 kernel: loop1: detected capacity change from 0 to 211296
Oct 8 20:19:20.648779 kernel: loop2: detected capacity change from 0 to 8
Oct 8 20:19:20.677459 kernel: loop3: detected capacity change from 0 to 142488
Oct 8 20:19:20.768793 kernel: loop4: detected capacity change from 0 to 140768
Oct 8 20:19:20.806732 kernel: loop5: detected capacity change from 0 to 211296
Oct 8 20:19:20.847459 kernel: loop6: detected capacity change from 0 to 8
Oct 8 20:19:20.847675 kernel: loop7: detected capacity change from 0 to 142488
Oct 8 20:19:20.895858 (sd-merge)[1278]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Oct 8 20:19:20.897083 (sd-merge)[1278]: Merged extensions into '/usr'.
Oct 8 20:19:20.904810 systemd[1]: Reloading requested from client PID 1262 ('systemd-sysext') (unit systemd-sysext.service)...
Oct 8 20:19:20.904853 systemd[1]: Reloading...
Oct 8 20:19:20.999975 zram_generator::config[1306]: No configuration found.
Oct 8 20:19:21.182409 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 8 20:19:21.248782 systemd[1]: Reloading finished in 343 ms.
Oct 8 20:19:21.261871 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Oct 8 20:19:21.278136 systemd[1]: Starting ensure-sysext.service...
Oct 8 20:19:21.283111 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 8 20:19:21.293173 systemd-networkd[1198]: eth0: Gained IPv6LL
Oct 8 20:19:21.298105 systemd[1]: Reloading requested from client PID 1367 ('systemctl') (unit ensure-sysext.service)...
Oct 8 20:19:21.298127 systemd[1]: Reloading...
Oct 8 20:19:21.328164 systemd-tmpfiles[1368]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Oct 8 20:19:21.328537 systemd-tmpfiles[1368]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Oct 8 20:19:21.329488 systemd-tmpfiles[1368]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Oct 8 20:19:21.329817 systemd-tmpfiles[1368]: ACLs are not supported, ignoring.
Oct 8 20:19:21.329882 systemd-tmpfiles[1368]: ACLs are not supported, ignoring.
Oct 8 20:19:21.331391 ldconfig[1256]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Oct 8 20:19:21.335493 systemd-tmpfiles[1368]: Detected autofs mount point /boot during canonicalization of boot.
Oct 8 20:19:21.335596 systemd-tmpfiles[1368]: Skipping /boot
Oct 8 20:19:21.348211 systemd-tmpfiles[1368]: Detected autofs mount point /boot during canonicalization of boot.
Oct 8 20:19:21.348360 systemd-tmpfiles[1368]: Skipping /boot
Oct 8 20:19:21.387852 zram_generator::config[1400]: No configuration found.
Oct 8 20:19:21.532244 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 8 20:19:21.597620 systemd[1]: Reloading finished in 299 ms.
Oct 8 20:19:21.615248 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Oct 8 20:19:21.617793 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Oct 8 20:19:21.626407 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 8 20:19:21.640191 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Oct 8 20:19:21.650071 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Oct 8 20:19:21.663084 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Oct 8 20:19:21.670196 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 8 20:19:21.693095 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Oct 8 20:19:21.718225 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 8 20:19:21.718467 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 20:19:21.727363 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 8 20:19:21.734485 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 8 20:19:21.740123 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 8 20:19:21.740772 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 8 20:19:21.747841 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 8 20:19:21.766032 augenrules[1489]: No rules
Oct 8 20:19:21.757393 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Oct 8 20:19:21.767273 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 8 20:19:21.767472 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 8 20:19:21.768627 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 8 20:19:21.768784 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 8 20:19:21.780246 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Oct 8 20:19:21.793267 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Oct 8 20:19:21.803419 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Oct 8 20:19:21.808186 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 8 20:19:21.808564 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 8 20:19:21.823099 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 8 20:19:21.823469 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 20:19:21.824521 systemd-resolved[1471]: Positive Trust Anchors:
Oct 8 20:19:21.824540 systemd-resolved[1471]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 8 20:19:21.824586 systemd-resolved[1471]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 8 20:19:21.829739 systemd-resolved[1471]: Using system hostname 'ci-4081-1-0-6-0b75032dd1.novalocal'.
Oct 8 20:19:21.830298 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 8 20:19:21.843288 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 8 20:19:21.853246 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 8 20:19:21.854225 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 8 20:19:21.860792 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Oct 8 20:19:21.861471 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 8 20:19:21.861583 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 8 20:19:21.862551 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 8 20:19:21.869964 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 8 20:19:21.870339 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 8 20:19:21.875538 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 8 20:19:21.875751 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 8 20:19:21.876860 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 8 20:19:21.877056 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 8 20:19:21.878266 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Oct 8 20:19:21.886304 systemd[1]: Reached target network.target - Network.
Oct 8 20:19:21.888819 systemd[1]: Reached target network-online.target - Network is Online.
Oct 8 20:19:21.890298 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 8 20:19:21.891962 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 8 20:19:21.892367 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 20:19:21.899270 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 8 20:19:21.903337 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 8 20:19:21.906872 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 8 20:19:21.918251 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 8 20:19:21.920877 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 8 20:19:21.921344 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 8 20:19:21.923267 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 8 20:19:21.926236 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 8 20:19:21.926508 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 8 20:19:21.931522 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 8 20:19:21.931700 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 8 20:19:21.935490 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 8 20:19:21.935681 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 8 20:19:21.936657 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 8 20:19:21.936819 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 8 20:19:21.942812 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 8 20:19:21.943156 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 8 20:19:21.945582 systemd[1]: Finished ensure-sysext.service.
Oct 8 20:19:21.957087 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Oct 8 20:19:22.021644 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Oct 8 20:19:22.022781 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 8 20:19:22.023370 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Oct 8 20:19:22.023891 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Oct 8 20:19:22.028008 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Oct 8 20:19:22.028545 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Oct 8 20:19:22.028582 systemd[1]: Reached target paths.target - Path Units.
Oct 8 20:19:22.029188 systemd[1]: Reached target time-set.target - System Time Set.
Oct 8 20:19:22.030876 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Oct 8 20:19:22.032491 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Oct 8 20:19:22.033900 systemd[1]: Reached target timers.target - Timer Units.
Oct 8 20:19:22.035912 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Oct 8 20:19:22.039030 systemd[1]: Starting docker.socket - Docker Socket for the API...
Oct 8 20:19:22.046023 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Oct 8 20:19:22.048426 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Oct 8 20:19:22.050494 systemd[1]: Reached target sockets.target - Socket Units.
Oct 8 20:19:22.051830 systemd[1]: Reached target basic.target - Basic System.
Oct 8 20:19:22.752601 systemd-resolved[1471]: Clock change detected. Flushing caches.
Oct 8 20:19:22.752681 systemd-timesyncd[1537]: Contacted time server 51.15.182.163:123 (0.flatcar.pool.ntp.org).
Oct 8 20:19:22.752734 systemd-timesyncd[1537]: Initial clock synchronization to Tue 2024-10-08 20:19:22.752551 UTC.
Oct 8 20:19:22.753591 systemd[1]: System is tainted: cgroupsv1
Oct 8 20:19:22.753638 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Oct 8 20:19:22.753663 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Oct 8 20:19:22.764023 systemd[1]: Starting containerd.service - containerd container runtime...
Oct 8 20:19:22.769902 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Oct 8 20:19:22.785285 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Oct 8 20:19:22.796083 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Oct 8 20:19:22.808188 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Oct 8 20:19:22.810059 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Oct 8 20:19:22.816063 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 20:19:22.829773 jq[1545]: false
Oct 8 20:19:22.831198 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Oct 8 20:19:22.843456 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Oct 8 20:19:22.858092 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Oct 8 20:19:22.862476 extend-filesystems[1548]: Found loop4
Oct 8 20:19:22.862476 extend-filesystems[1548]: Found loop5
Oct 8 20:19:22.862476 extend-filesystems[1548]: Found loop6
Oct 8 20:19:22.862476 extend-filesystems[1548]: Found loop7
Oct 8 20:19:22.862476 extend-filesystems[1548]: Found vda
Oct 8 20:19:22.862476 extend-filesystems[1548]: Found vda1
Oct 8 20:19:22.862476 extend-filesystems[1548]: Found vda2
Oct 8 20:19:22.862476 extend-filesystems[1548]: Found vda3
Oct 8 20:19:22.862476 extend-filesystems[1548]: Found usr
Oct 8 20:19:22.862476 extend-filesystems[1548]: Found vda4
Oct 8 20:19:22.862476 extend-filesystems[1548]: Found vda6
Oct 8 20:19:22.862476 extend-filesystems[1548]: Found vda7
Oct 8 20:19:22.862476 extend-filesystems[1548]: Found vda9
Oct 8 20:19:22.862476 extend-filesystems[1548]: Checking size of /dev/vda9
Oct 8 20:19:22.960781 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 4635643 blocks
Oct 8 20:19:22.960828 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1203)
Oct 8 20:19:22.859665 dbus-daemon[1544]: [system] SELinux support is enabled
Oct 8 20:19:22.961314 extend-filesystems[1548]: Resized partition /dev/vda9
Oct 8 20:19:22.878184 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Oct 8 20:19:22.962250 extend-filesystems[1571]: resize2fs 1.47.1 (20-May-2024)
Oct 8 20:19:22.894487 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Oct 8 20:19:22.911579 systemd[1]: Starting systemd-logind.service - User Login Management...
Oct 8 20:19:22.940518 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Oct 8 20:19:22.960648 systemd[1]: Starting update-engine.service - Update Engine...
Oct 8 20:19:22.968213 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Oct 8 20:19:22.969647 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Oct 8 20:19:22.986443 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Oct 8 20:19:22.987270 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Oct 8 20:19:22.990462 systemd[1]: motdgen.service: Deactivated successfully.
Oct 8 20:19:22.990711 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Oct 8 20:19:23.001559 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Oct 8 20:19:23.009699 update_engine[1579]: I20241008 20:19:23.009232 1579 main.cc:92] Flatcar Update Engine starting
Oct 8 20:19:23.010585 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Oct 8 20:19:23.019335 jq[1580]: true
Oct 8 20:19:23.010917 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Oct 8 20:19:23.026319 update_engine[1579]: I20241008 20:19:23.026258 1579 update_check_scheduler.cc:74] Next update check in 9m11s
Oct 8 20:19:23.050375 (ntainerd)[1597]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Oct 8 20:19:23.070036 jq[1602]: true
Oct 8 20:19:23.089440 tar[1586]: linux-amd64/helm
Oct 8 20:19:23.081360 systemd[1]: Started update-engine.service - Update Engine.
Oct 8 20:19:23.084725 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Oct 8 20:19:23.084755 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Oct 8 20:19:23.085274 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Oct 8 20:19:23.085291 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Oct 8 20:19:23.091443 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Oct 8 20:19:23.099212 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Oct 8 20:19:23.180023 kernel: EXT4-fs (vda9): resized filesystem to 4635643
Oct 8 20:19:23.225796 locksmithd[1618]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Oct 8 20:19:23.244732 systemd-logind[1572]: New seat seat0.
Oct 8 20:19:23.284088 systemd-logind[1572]: Watching system buttons on /dev/input/event1 (Power Button)
Oct 8 20:19:23.284109 systemd-logind[1572]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Oct 8 20:19:23.284473 systemd[1]: Started systemd-logind.service - User Login Management.
Oct 8 20:19:23.291784 extend-filesystems[1571]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Oct 8 20:19:23.291784 extend-filesystems[1571]: old_desc_blocks = 1, new_desc_blocks = 3
Oct 8 20:19:23.291784 extend-filesystems[1571]: The filesystem on /dev/vda9 is now 4635643 (4k) blocks long.
Oct 8 20:19:23.327850 extend-filesystems[1548]: Resized filesystem in /dev/vda9
Oct 8 20:19:23.331748 bash[1624]: Updated "/home/core/.ssh/authorized_keys"
Oct 8 20:19:23.292114 systemd[1]: extend-filesystems.service: Deactivated successfully.
Oct 8 20:19:23.292422 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Oct 8 20:19:23.314543 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Oct 8 20:19:23.342375 systemd[1]: Starting sshkeys.service...
Oct 8 20:19:23.382617 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Oct 8 20:19:23.396466 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Oct 8 20:19:23.591160 containerd[1597]: time="2024-10-08T20:19:23.590986228Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Oct 8 20:19:23.691124 containerd[1597]: time="2024-10-08T20:19:23.691003891Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Oct 8 20:19:23.696645 containerd[1597]: time="2024-10-08T20:19:23.696567496Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.54-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Oct 8 20:19:23.697565 containerd[1597]: time="2024-10-08T20:19:23.697540280Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Oct 8 20:19:23.697691 containerd[1597]: time="2024-10-08T20:19:23.697673591Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Oct 8 20:19:23.701002 containerd[1597]: time="2024-10-08T20:19:23.698217661Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Oct 8 20:19:23.701002 containerd[1597]: time="2024-10-08T20:19:23.698251695Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Oct 8 20:19:23.701002 containerd[1597]: time="2024-10-08T20:19:23.698324872Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Oct 8 20:19:23.701002 containerd[1597]: time="2024-10-08T20:19:23.698346222Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Oct 8 20:19:23.701002 containerd[1597]: time="2024-10-08T20:19:23.698643630Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct 8 20:19:23.701002 containerd[1597]: time="2024-10-08T20:19:23.698664920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Oct 8 20:19:23.701002 containerd[1597]: time="2024-10-08T20:19:23.698689155Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Oct 8 20:19:23.701002 containerd[1597]: time="2024-10-08T20:19:23.698703282Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Oct 8 20:19:23.701002 containerd[1597]: time="2024-10-08T20:19:23.698796436Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Oct 8 20:19:23.701002 containerd[1597]: time="2024-10-08T20:19:23.699091430Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Oct 8 20:19:23.701002 containerd[1597]: time="2024-10-08T20:19:23.699251209Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct 8 20:19:23.701542 containerd[1597]: time="2024-10-08T20:19:23.699271628Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Oct 8 20:19:23.701542 containerd[1597]: time="2024-10-08T20:19:23.699402693Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Oct 8 20:19:23.701542 containerd[1597]: time="2024-10-08T20:19:23.699464279Z" level=info msg="metadata content store policy set" policy=shared
Oct 8 20:19:23.723631 containerd[1597]: time="2024-10-08T20:19:23.723578736Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Oct 8 20:19:23.724735 containerd[1597]: time="2024-10-08T20:19:23.724714826Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Oct 8 20:19:23.724865 containerd[1597]: time="2024-10-08T20:19:23.724844940Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Oct 8 20:19:23.724997 containerd[1597]: time="2024-10-08T20:19:23.724980014Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Oct 8 20:19:23.725091 containerd[1597]: time="2024-10-08T20:19:23.725075022Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Oct 8 20:19:23.725343 containerd[1597]: time="2024-10-08T20:19:23.725322997Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Oct 8 20:19:23.727934 containerd[1597]: time="2024-10-08T20:19:23.727905731Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Oct 8 20:19:23.728549 containerd[1597]: time="2024-10-08T20:19:23.728530813Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Oct 8 20:19:23.728648 containerd[1597]: time="2024-10-08T20:19:23.728631031Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Oct 8 20:19:23.728738 containerd[1597]: time="2024-10-08T20:19:23.728722042Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Oct 8 20:19:23.728828 containerd[1597]: time="2024-10-08T20:19:23.728811790Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Oct 8 20:19:23.729229 containerd[1597]: time="2024-10-08T20:19:23.728948737Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Oct 8 20:19:23.729326 containerd[1597]: time="2024-10-08T20:19:23.729310325Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Oct 8 20:19:23.729440 containerd[1597]: time="2024-10-08T20:19:23.729422065Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Oct 8 20:19:23.729534 containerd[1597]: time="2024-10-08T20:19:23.729517664Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Oct 8 20:19:23.729637 containerd[1597]: time="2024-10-08T20:19:23.729619916Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Oct 8 20:19:23.729775 containerd[1597]: time="2024-10-08T20:19:23.729757113Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Oct 8 20:19:23.730156 containerd[1597]: time="2024-10-08T20:19:23.730088044Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Oct 8 20:19:23.730156 containerd[1597]: time="2024-10-08T20:19:23.730123580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Oct 8 20:19:23.730279 containerd[1597]: time="2024-10-08T20:19:23.730260117Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Oct 8 20:19:23.730481 containerd[1597]: time="2024-10-08T20:19:23.730342000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Oct 8 20:19:23.730481 containerd[1597]: time="2024-10-08T20:19:23.730370644Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Oct 8 20:19:23.730863 containerd[1597]: time="2024-10-08T20:19:23.730387896Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Oct 8 20:19:23.730863 containerd[1597]: time="2024-10-08T20:19:23.730798566Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Oct 8 20:19:23.730863 containerd[1597]: time="2024-10-08T20:19:23.730818624Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Oct 8 20:19:23.731126 containerd[1597]: time="2024-10-08T20:19:23.730835786Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Oct 8 20:19:23.731126 containerd[1597]: time="2024-10-08T20:19:23.730986108Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Oct 8 20:19:23.731596 containerd[1597]: time="2024-10-08T20:19:23.731011145Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Oct 8 20:19:23.731596 containerd[1597]: time="2024-10-08T20:19:23.731304505Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Oct 8 20:19:23.731596 containerd[1597]: time="2024-10-08T20:19:23.731325485Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Oct 8 20:19:23.731596 containerd[1597]: time="2024-10-08T20:19:23.731519438Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Oct 8 20:19:23.731596 containerd[1597]: time="2024-10-08T20:19:23.731546990Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Oct 8 20:19:23.732167 containerd[1597]: time="2024-10-08T20:19:23.731574792Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Oct 8 20:19:23.732167 containerd[1597]: time="2024-10-08T20:19:23.731769047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Oct 8 20:19:23.732167 containerd[1597]: time="2024-10-08T20:19:23.731786710Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Oct 8 20:19:23.732167 containerd[1597]: time="2024-10-08T20:19:23.732118522Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Oct 8 20:19:23.732438 containerd[1597]: time="2024-10-08T20:19:23.732415379Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Oct 8 20:19:23.732899 containerd[1597]: time="2024-10-08T20:19:23.732739186Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Oct 8 20:19:23.732899 containerd[1597]: time="2024-10-08T20:19:23.732764263Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Oct 8 20:19:23.732899 containerd[1597]: time="2024-10-08T20:19:23.732777638Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Oct 8 20:19:23.732899 containerd[1597]: time="2024-10-08T20:19:23.732793538Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Oct 8 20:19:23.732899 containerd[1597]: time="2024-10-08T20:19:23.732827482Z" level=info msg="NRI interface is disabled by configuration."
Oct 8 20:19:23.732899 containerd[1597]: time="2024-10-08T20:19:23.732841598Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Oct 8 20:19:23.734737 containerd[1597]: time="2024-10-08T20:19:23.734044564Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Oct 8 20:19:23.734737 containerd[1597]: time="2024-10-08T20:19:23.734119625Z" level=info msg="Connect containerd service"
Oct 8 20:19:23.734737 containerd[1597]: time="2024-10-08T20:19:23.734160852Z" level=info msg="using legacy CRI server"
Oct 8 20:19:23.734737 containerd[1597]: time="2024-10-08T20:19:23.734171733Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Oct 8 20:19:23.734737 containerd[1597]: time="2024-10-08T20:19:23.734275668Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Oct 8 20:19:23.738531 containerd[1597]: time="2024-10-08T20:19:23.737401811Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to
load cni config" Oct 8 20:19:23.738531 containerd[1597]: time="2024-10-08T20:19:23.737826387Z" level=info msg="Start subscribing containerd event" Oct 8 20:19:23.738531 containerd[1597]: time="2024-10-08T20:19:23.737901268Z" level=info msg="Start recovering state" Oct 8 20:19:23.738531 containerd[1597]: time="2024-10-08T20:19:23.737995154Z" level=info msg="Start event monitor" Oct 8 20:19:23.738531 containerd[1597]: time="2024-10-08T20:19:23.738018938Z" level=info msg="Start snapshots syncer" Oct 8 20:19:23.738531 containerd[1597]: time="2024-10-08T20:19:23.738029769Z" level=info msg="Start cni network conf syncer for default" Oct 8 20:19:23.738531 containerd[1597]: time="2024-10-08T20:19:23.738038876Z" level=info msg="Start streaming server" Oct 8 20:19:23.741302 containerd[1597]: time="2024-10-08T20:19:23.739655077Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 8 20:19:23.741302 containerd[1597]: time="2024-10-08T20:19:23.739736409Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 8 20:19:23.741302 containerd[1597]: time="2024-10-08T20:19:23.739809166Z" level=info msg="containerd successfully booted in 0.151722s" Oct 8 20:19:23.739979 systemd[1]: Started containerd.service - containerd container runtime. Oct 8 20:19:24.001698 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 8 20:19:24.101924 sshd_keygen[1578]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 8 20:19:24.132506 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 8 20:19:24.141132 tar[1586]: linux-amd64/LICENSE Oct 8 20:19:24.141132 tar[1586]: linux-amd64/README.md Oct 8 20:19:24.149323 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 8 20:19:24.154555 systemd[1]: Started sshd@0-172.24.4.55:22-172.24.4.1:54294.service - OpenSSH per-connection server daemon (172.24.4.1:54294). Oct 8 20:19:24.162935 systemd[1]: issuegen.service: Deactivated successfully. 
Oct 8 20:19:24.164678 systemd[1]: Finished issuegen.service - Generate /run/issue.
Oct 8 20:19:24.183278 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Oct 8 20:19:24.191170 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Oct 8 20:19:24.201877 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Oct 8 20:19:24.211559 systemd[1]: Started getty@tty1.service - Getty on tty1.
Oct 8 20:19:24.214810 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Oct 8 20:19:24.221001 systemd[1]: Reached target getty.target - Login Prompts.
Oct 8 20:19:24.929836 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 20:19:24.953060 (kubelet)[1682]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 8 20:19:25.230019 sshd[1660]: Accepted publickey for core from 172.24.4.1 port 54294 ssh2: RSA SHA256:N4tAxOYyt600zP8LzVHN9krjQqk3csZTCmZq/eMm2uA
Oct 8 20:19:25.237129 sshd[1660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 20:19:25.260111 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Oct 8 20:19:25.279493 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Oct 8 20:19:25.291524 systemd-logind[1572]: New session 1 of user core.
Oct 8 20:19:25.300010 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Oct 8 20:19:25.314342 systemd[1]: Starting user@500.service - User Manager for UID 500...
Oct 8 20:19:25.321311 (systemd)[1692]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Oct 8 20:19:25.439019 systemd[1692]: Queued start job for default target default.target.
Oct 8 20:19:25.439399 systemd[1692]: Created slice app.slice - User Application Slice.
Oct 8 20:19:25.439423 systemd[1692]: Reached target paths.target - Paths.
Oct 8 20:19:25.439437 systemd[1692]: Reached target timers.target - Timers.
Oct 8 20:19:25.446077 systemd[1692]: Starting dbus.socket - D-Bus User Message Bus Socket...
Oct 8 20:19:25.456615 systemd[1692]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Oct 8 20:19:25.456810 systemd[1692]: Reached target sockets.target - Sockets.
Oct 8 20:19:25.456903 systemd[1692]: Reached target basic.target - Basic System.
Oct 8 20:19:25.457040 systemd[1692]: Reached target default.target - Main User Target.
Oct 8 20:19:25.457162 systemd[1692]: Startup finished in 129ms.
Oct 8 20:19:25.457783 systemd[1]: Started user@500.service - User Manager for UID 500.
Oct 8 20:19:25.478135 systemd[1]: Started session-1.scope - Session 1 of User core.
Oct 8 20:19:26.053385 systemd[1]: Started sshd@1-172.24.4.55:22-172.24.4.1:43734.service - OpenSSH per-connection server daemon (172.24.4.1:43734).
Oct 8 20:19:26.195545 kubelet[1682]: E1008 20:19:26.195399 1682 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 8 20:19:26.199158 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 8 20:19:26.199544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 8 20:19:28.463561 sshd[1707]: Accepted publickey for core from 172.24.4.1 port 43734 ssh2: RSA SHA256:N4tAxOYyt600zP8LzVHN9krjQqk3csZTCmZq/eMm2uA
Oct 8 20:19:28.466423 sshd[1707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 20:19:28.478182 systemd-logind[1572]: New session 2 of user core.
Oct 8 20:19:28.486692 systemd[1]: Started session-2.scope - Session 2 of User core.
Oct 8 20:19:29.113378 sshd[1707]: pam_unix(sshd:session): session closed for user core
Oct 8 20:19:29.130870 systemd[1]: Started sshd@2-172.24.4.55:22-172.24.4.1:43746.service - OpenSSH per-connection server daemon (172.24.4.1:43746).
Oct 8 20:19:29.140681 systemd[1]: sshd@1-172.24.4.55:22-172.24.4.1:43734.service: Deactivated successfully.
Oct 8 20:19:29.145504 systemd[1]: session-2.scope: Deactivated successfully.
Oct 8 20:19:29.148712 systemd-logind[1572]: Session 2 logged out. Waiting for processes to exit.
Oct 8 20:19:29.151838 systemd-logind[1572]: Removed session 2.
Oct 8 20:19:29.266564 login[1673]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Oct 8 20:19:29.268597 login[1672]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Oct 8 20:19:29.278367 systemd-logind[1572]: New session 3 of user core.
Oct 8 20:19:29.289693 systemd[1]: Started session-3.scope - Session 3 of User core.
Oct 8 20:19:29.296247 systemd-logind[1572]: New session 4 of user core.
Oct 8 20:19:29.306446 systemd[1]: Started session-4.scope - Session 4 of User core.
Oct 8 20:19:29.921996 coreos-metadata[1543]: Oct 08 20:19:29.921 WARN failed to locate config-drive, using the metadata service API instead
Oct 8 20:19:29.979985 coreos-metadata[1543]: Oct 08 20:19:29.979 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1
Oct 8 20:19:30.316900 coreos-metadata[1543]: Oct 08 20:19:30.316 INFO Fetch successful
Oct 8 20:19:30.316900 coreos-metadata[1543]: Oct 08 20:19:30.316 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Oct 8 20:19:30.324484 sshd[1716]: Accepted publickey for core from 172.24.4.1 port 43746 ssh2: RSA SHA256:N4tAxOYyt600zP8LzVHN9krjQqk3csZTCmZq/eMm2uA
Oct 8 20:19:30.327137 sshd[1716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 20:19:30.330556 coreos-metadata[1543]: Oct 08 20:19:30.330 INFO Fetch successful
Oct 8 20:19:30.330556 coreos-metadata[1543]: Oct 08 20:19:30.330 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
Oct 8 20:19:30.337432 systemd-logind[1572]: New session 5 of user core.
Oct 8 20:19:30.345723 coreos-metadata[1543]: Oct 08 20:19:30.345 INFO Fetch successful
Oct 8 20:19:30.345823 coreos-metadata[1543]: Oct 08 20:19:30.345 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
Oct 8 20:19:30.350721 systemd[1]: Started session-5.scope - Session 5 of User core.
Oct 8 20:19:30.360630 coreos-metadata[1543]: Oct 08 20:19:30.359 INFO Fetch successful
Oct 8 20:19:30.360630 coreos-metadata[1543]: Oct 08 20:19:30.359 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
Oct 8 20:19:30.368795 coreos-metadata[1543]: Oct 08 20:19:30.368 INFO Fetch successful
Oct 8 20:19:30.368795 coreos-metadata[1543]: Oct 08 20:19:30.368 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
Oct 8 20:19:30.376886 coreos-metadata[1543]: Oct 08 20:19:30.376 INFO Fetch successful
Oct 8 20:19:30.441230 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Oct 8 20:19:30.443252 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Oct 8 20:19:30.497373 coreos-metadata[1636]: Oct 08 20:19:30.497 WARN failed to locate config-drive, using the metadata service API instead
Oct 8 20:19:30.542906 coreos-metadata[1636]: Oct 08 20:19:30.542 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Oct 8 20:19:30.558284 coreos-metadata[1636]: Oct 08 20:19:30.558 INFO Fetch successful
Oct 8 20:19:30.558284 coreos-metadata[1636]: Oct 08 20:19:30.558 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Oct 8 20:19:30.572505 coreos-metadata[1636]: Oct 08 20:19:30.572 INFO Fetch successful
Oct 8 20:19:30.578227 unknown[1636]: wrote ssh authorized keys file for user: core
Oct 8 20:19:30.617274 update-ssh-keys[1762]: Updated "/home/core/.ssh/authorized_keys"
Oct 8 20:19:30.618464 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Oct 8 20:19:30.627043 systemd[1]: Finished sshkeys.service.
Oct 8 20:19:30.633464 systemd[1]: Reached target multi-user.target - Multi-User System.
Oct 8 20:19:30.634172 systemd[1]: Startup finished in 16.272s (kernel) + 12.084s (userspace) = 28.356s.
Oct 8 20:19:31.220567 sshd[1716]: pam_unix(sshd:session): session closed for user core
Oct 8 20:19:31.227496 systemd[1]: sshd@2-172.24.4.55:22-172.24.4.1:43746.service: Deactivated successfully.
Oct 8 20:19:31.233598 systemd-logind[1572]: Session 5 logged out. Waiting for processes to exit.
Oct 8 20:19:31.234647 systemd[1]: session-5.scope: Deactivated successfully.
Oct 8 20:19:31.237185 systemd-logind[1572]: Removed session 5.
Oct 8 20:19:36.450376 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Oct 8 20:19:36.457298 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 20:19:36.880299 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 20:19:36.896661 (kubelet)[1784]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 8 20:19:37.123309 kubelet[1784]: E1008 20:19:37.123152 1784 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 8 20:19:37.132903 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 8 20:19:37.134448 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 8 20:19:41.234460 systemd[1]: Started sshd@3-172.24.4.55:22-172.24.4.1:55408.service - OpenSSH per-connection server daemon (172.24.4.1:55408).
Oct 8 20:19:42.755066 sshd[1794]: Accepted publickey for core from 172.24.4.1 port 55408 ssh2: RSA SHA256:N4tAxOYyt600zP8LzVHN9krjQqk3csZTCmZq/eMm2uA
Oct 8 20:19:42.757989 sshd[1794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 20:19:42.769017 systemd-logind[1572]: New session 6 of user core.
Oct 8 20:19:42.775499 systemd[1]: Started session-6.scope - Session 6 of User core.
Oct 8 20:19:43.404358 sshd[1794]: pam_unix(sshd:session): session closed for user core
Oct 8 20:19:43.414565 systemd[1]: Started sshd@4-172.24.4.55:22-172.24.4.1:55418.service - OpenSSH per-connection server daemon (172.24.4.1:55418).
Oct 8 20:19:43.415943 systemd[1]: sshd@3-172.24.4.55:22-172.24.4.1:55408.service: Deactivated successfully.
Oct 8 20:19:43.424265 systemd[1]: session-6.scope: Deactivated successfully.
Oct 8 20:19:43.427796 systemd-logind[1572]: Session 6 logged out. Waiting for processes to exit.
Oct 8 20:19:43.431509 systemd-logind[1572]: Removed session 6.
Oct 8 20:19:44.966846 sshd[1799]: Accepted publickey for core from 172.24.4.1 port 55418 ssh2: RSA SHA256:N4tAxOYyt600zP8LzVHN9krjQqk3csZTCmZq/eMm2uA
Oct 8 20:19:44.970373 sshd[1799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 20:19:44.982865 systemd-logind[1572]: New session 7 of user core.
Oct 8 20:19:44.995706 systemd[1]: Started session-7.scope - Session 7 of User core.
Oct 8 20:19:45.760333 sshd[1799]: pam_unix(sshd:session): session closed for user core
Oct 8 20:19:45.771591 systemd[1]: Started sshd@5-172.24.4.55:22-172.24.4.1:55062.service - OpenSSH per-connection server daemon (172.24.4.1:55062).
Oct 8 20:19:45.772768 systemd[1]: sshd@4-172.24.4.55:22-172.24.4.1:55418.service: Deactivated successfully.
Oct 8 20:19:45.788852 systemd[1]: session-7.scope: Deactivated successfully.
Oct 8 20:19:45.793116 systemd-logind[1572]: Session 7 logged out. Waiting for processes to exit.
Oct 8 20:19:45.795852 systemd-logind[1572]: Removed session 7.
Oct 8 20:19:47.161050 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Oct 8 20:19:47.170369 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 20:19:47.501179 sshd[1807]: Accepted publickey for core from 172.24.4.1 port 55062 ssh2: RSA SHA256:N4tAxOYyt600zP8LzVHN9krjQqk3csZTCmZq/eMm2uA
Oct 8 20:19:47.503919 sshd[1807]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 20:19:47.515507 systemd-logind[1572]: New session 8 of user core.
Oct 8 20:19:47.527561 systemd[1]: Started session-8.scope - Session 8 of User core.
Oct 8 20:19:47.725274 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 20:19:47.760743 (kubelet)[1826]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 8 20:19:47.862599 kubelet[1826]: E1008 20:19:47.862493 1826 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 8 20:19:47.867809 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 8 20:19:47.868325 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 8 20:19:48.134350 sshd[1807]: pam_unix(sshd:session): session closed for user core
Oct 8 20:19:48.148697 systemd[1]: Started sshd@6-172.24.4.55:22-172.24.4.1:55064.service - OpenSSH per-connection server daemon (172.24.4.1:55064).
Oct 8 20:19:48.151442 systemd[1]: sshd@5-172.24.4.55:22-172.24.4.1:55062.service: Deactivated successfully.
Oct 8 20:19:48.155697 systemd[1]: session-8.scope: Deactivated successfully.
Oct 8 20:19:48.161835 systemd-logind[1572]: Session 8 logged out. Waiting for processes to exit.
Oct 8 20:19:48.164644 systemd-logind[1572]: Removed session 8.
Oct 8 20:19:49.498704 sshd[1837]: Accepted publickey for core from 172.24.4.1 port 55064 ssh2: RSA SHA256:N4tAxOYyt600zP8LzVHN9krjQqk3csZTCmZq/eMm2uA
Oct 8 20:19:49.502299 sshd[1837]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 20:19:49.516112 systemd-logind[1572]: New session 9 of user core.
Oct 8 20:19:49.526500 systemd[1]: Started session-9.scope - Session 9 of User core.
Oct 8 20:19:50.045559 sudo[1844]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Oct 8 20:19:50.046340 sudo[1844]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 8 20:19:50.074621 sudo[1844]: pam_unix(sudo:session): session closed for user root
Oct 8 20:19:50.263858 sshd[1837]: pam_unix(sshd:session): session closed for user core
Oct 8 20:19:50.276635 systemd[1]: Started sshd@7-172.24.4.55:22-172.24.4.1:55076.service - OpenSSH per-connection server daemon (172.24.4.1:55076).
Oct 8 20:19:50.277672 systemd[1]: sshd@6-172.24.4.55:22-172.24.4.1:55064.service: Deactivated successfully.
Oct 8 20:19:50.288548 systemd-logind[1572]: Session 9 logged out. Waiting for processes to exit.
Oct 8 20:19:50.293337 systemd[1]: session-9.scope: Deactivated successfully.
Oct 8 20:19:50.296120 systemd-logind[1572]: Removed session 9.
Oct 8 20:19:51.784504 sshd[1846]: Accepted publickey for core from 172.24.4.1 port 55076 ssh2: RSA SHA256:N4tAxOYyt600zP8LzVHN9krjQqk3csZTCmZq/eMm2uA
Oct 8 20:19:51.787288 sshd[1846]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 20:19:51.799105 systemd-logind[1572]: New session 10 of user core.
Oct 8 20:19:51.808618 systemd[1]: Started session-10.scope - Session 10 of User core.
Oct 8 20:19:52.273078 sudo[1854]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Oct 8 20:19:52.274671 sudo[1854]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 8 20:19:52.288494 sudo[1854]: pam_unix(sudo:session): session closed for user root
Oct 8 20:19:52.300316 sudo[1853]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Oct 8 20:19:52.301073 sudo[1853]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 8 20:19:52.330473 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Oct 8 20:19:52.336234 auditctl[1857]: No rules
Oct 8 20:19:52.337155 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 8 20:19:52.337695 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Oct 8 20:19:52.352906 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Oct 8 20:19:52.406453 augenrules[1876]: No rules
Oct 8 20:19:52.409563 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Oct 8 20:19:52.412367 sudo[1853]: pam_unix(sudo:session): session closed for user root
Oct 8 20:19:52.586543 sshd[1846]: pam_unix(sshd:session): session closed for user core
Oct 8 20:19:52.600226 systemd[1]: Started sshd@8-172.24.4.55:22-172.24.4.1:55086.service - OpenSSH per-connection server daemon (172.24.4.1:55086).
Oct 8 20:19:52.601336 systemd[1]: sshd@7-172.24.4.55:22-172.24.4.1:55076.service: Deactivated successfully.
Oct 8 20:19:52.613622 systemd[1]: session-10.scope: Deactivated successfully.
Oct 8 20:19:52.615080 systemd-logind[1572]: Session 10 logged out. Waiting for processes to exit.
Oct 8 20:19:52.620409 systemd-logind[1572]: Removed session 10.
Oct 8 20:19:54.090872 sshd[1882]: Accepted publickey for core from 172.24.4.1 port 55086 ssh2: RSA SHA256:N4tAxOYyt600zP8LzVHN9krjQqk3csZTCmZq/eMm2uA
Oct 8 20:19:54.093695 sshd[1882]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 20:19:54.105348 systemd-logind[1572]: New session 11 of user core.
Oct 8 20:19:54.116509 systemd[1]: Started session-11.scope - Session 11 of User core.
Oct 8 20:19:54.538927 sudo[1889]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Oct 8 20:19:54.540464 sudo[1889]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 8 20:19:55.229079 systemd[1]: Starting docker.service - Docker Application Container Engine...
Oct 8 20:19:55.235806 (dockerd)[1905]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Oct 8 20:19:55.874268 dockerd[1905]: time="2024-10-08T20:19:55.874170008Z" level=info msg="Starting up"
Oct 8 20:19:56.054715 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2261199791-merged.mount: Deactivated successfully.
Oct 8 20:19:56.396761 systemd[1]: var-lib-docker-metacopy\x2dcheck1274507633-merged.mount: Deactivated successfully.
Oct 8 20:19:56.454323 dockerd[1905]: time="2024-10-08T20:19:56.453565543Z" level=info msg="Loading containers: start."
Oct 8 20:19:56.636044 kernel: Initializing XFRM netlink socket
Oct 8 20:19:56.786540 systemd-networkd[1198]: docker0: Link UP
Oct 8 20:19:56.812650 dockerd[1905]: time="2024-10-08T20:19:56.812557025Z" level=info msg="Loading containers: done."
Oct 8 20:19:56.832728 dockerd[1905]: time="2024-10-08T20:19:56.832484783Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Oct 8 20:19:56.833196 dockerd[1905]: time="2024-10-08T20:19:56.832782700Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Oct 8 20:19:56.833196 dockerd[1905]: time="2024-10-08T20:19:56.832900981Z" level=info msg="Daemon has completed initialization"
Oct 8 20:19:56.897823 dockerd[1905]: time="2024-10-08T20:19:56.894814529Z" level=info msg="API listen on /run/docker.sock"
Oct 8 20:19:56.897133 systemd[1]: Started docker.service - Docker Application Container Engine.
Oct 8 20:19:57.877752 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Oct 8 20:19:57.885104 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 20:19:58.109187 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 20:19:58.118440 (kubelet)[2056]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 8 20:19:58.187285 kubelet[2056]: E1008 20:19:58.187163 2056 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 8 20:19:58.189789 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 8 20:19:58.190219 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 8 20:19:58.665025 containerd[1597]: time="2024-10-08T20:19:58.664929229Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.9\""
Oct 8 20:19:59.352169 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2776617634.mount: Deactivated successfully.
Oct 8 20:20:01.410861 containerd[1597]: time="2024-10-08T20:20:01.409761114Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:20:01.417686 containerd[1597]: time="2024-10-08T20:20:01.417632449Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.9: active requests=0, bytes read=35213849"
Oct 8 20:20:01.419401 containerd[1597]: time="2024-10-08T20:20:01.419356276Z" level=info msg="ImageCreate event name:\"sha256:bc1ec5c2b6c60a3b18e7f54a99f0452c038400ecaaa2576931fd5342a0586abb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:20:01.422355 containerd[1597]: time="2024-10-08T20:20:01.422275771Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b88538e7fdf73583c8670540eec5b3620af75c9ec200434a5815ee7fba5021f3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:20:01.423524 containerd[1597]: time="2024-10-08T20:20:01.423341736Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.9\" with image id \"sha256:bc1ec5c2b6c60a3b18e7f54a99f0452c038400ecaaa2576931fd5342a0586abb\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b88538e7fdf73583c8670540eec5b3620af75c9ec200434a5815ee7fba5021f3\", size \"35210641\" in 2.75808824s"
Oct 8 20:20:01.423524 containerd[1597]: time="2024-10-08T20:20:01.423377974Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.9\" returns image reference \"sha256:bc1ec5c2b6c60a3b18e7f54a99f0452c038400ecaaa2576931fd5342a0586abb\""
Oct 8 20:20:01.449198 containerd[1597]: time="2024-10-08T20:20:01.448978949Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.9\""
Oct 8 20:20:03.788166 containerd[1597]: time="2024-10-08T20:20:03.787981831Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:20:03.789288 containerd[1597]: time="2024-10-08T20:20:03.789245076Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.9: active requests=0, bytes read=32208681"
Oct 8 20:20:03.790606 containerd[1597]: time="2024-10-08T20:20:03.790542836Z" level=info msg="ImageCreate event name:\"sha256:5abda0d0a9153cd1f90fd828be379f7a16a6c814e6efbbbf31e247e13c3843e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:20:03.793785 containerd[1597]: time="2024-10-08T20:20:03.793740323Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f2f18973ccb6996687d10ba5bd1b8f303e3dd2fed80f831a44d2ac8191e5bb9b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:20:03.795499 containerd[1597]: time="2024-10-08T20:20:03.794912569Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.9\" with image id \"sha256:5abda0d0a9153cd1f90fd828be379f7a16a6c814e6efbbbf31e247e13c3843e5\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f2f18973ccb6996687d10ba5bd1b8f303e3dd2fed80f831a44d2ac8191e5bb9b\", size \"33739229\" in 2.345896722s"
Oct 8 20:20:03.795499 containerd[1597]: time="2024-10-08T20:20:03.794943547Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.9\" returns image reference \"sha256:5abda0d0a9153cd1f90fd828be379f7a16a6c814e6efbbbf31e247e13c3843e5\""
Oct 8 20:20:03.818649 containerd[1597]: time="2024-10-08T20:20:03.818613746Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.9\""
Oct 8 20:20:05.769535 containerd[1597]: time="2024-10-08T20:20:05.769419722Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:20:05.774189 containerd[1597]: time="2024-10-08T20:20:05.773766853Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.9: active requests=0, bytes read=17320464"
Oct 8 20:20:05.777197 containerd[1597]: time="2024-10-08T20:20:05.777014615Z" level=info msg="ImageCreate event name:\"sha256:059957505b3370d4c57d793e79cc70f9063d7ab75767f7040f5cc85572fe7e8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:20:05.784761 containerd[1597]: time="2024-10-08T20:20:05.784667477Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c164076eebaefdaebad46a5ccd550e9f38c63588c02d35163c6a09e164ab8a8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:20:05.788218 containerd[1597]: time="2024-10-08T20:20:05.787687072Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.9\" with image id \"sha256:059957505b3370d4c57d793e79cc70f9063d7ab75767f7040f5cc85572fe7e8d\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9c164076eebaefdaebad46a5ccd550e9f38c63588c02d35163c6a09e164ab8a8\", size \"18851030\" in 1.968861819s"
Oct 8 20:20:05.788218 containerd[1597]: time="2024-10-08T20:20:05.787776709Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.9\" returns image reference \"sha256:059957505b3370d4c57d793e79cc70f9063d7ab75767f7040f5cc85572fe7e8d\""
Oct 8 20:20:05.837997 containerd[1597]: time="2024-10-08T20:20:05.837886274Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.9\""
Oct 8 20:20:07.484348 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2820592731.mount: Deactivated successfully.
Oct 8 20:20:08.291051 update_engine[1579]: I20241008 20:20:08.290253 1579 update_attempter.cc:509] Updating boot flags...
Oct 8 20:20:08.378446 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Oct 8 20:20:08.388547 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:20:08.496052 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2168) Oct 8 20:20:08.587732 containerd[1597]: time="2024-10-08T20:20:08.587621105Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:20:08.877279 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:20:08.893858 (kubelet)[2182]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:20:08.947730 containerd[1597]: time="2024-10-08T20:20:08.946683661Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.9: active requests=0, bytes read=28601758" Oct 8 20:20:08.960797 containerd[1597]: time="2024-10-08T20:20:08.959568698Z" level=info msg="ImageCreate event name:\"sha256:dd650d127e51776919ec1622a4469a8b141b2dfee5a33fbc5cb9729372e0dcfa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:20:08.970331 containerd[1597]: time="2024-10-08T20:20:08.969896304Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:124040dbe6b5294352355f5d34c692ecbc940cdc57a8fd06d0f38f76b6138906\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:20:08.981114 containerd[1597]: time="2024-10-08T20:20:08.979752197Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.9\" with image id \"sha256:dd650d127e51776919ec1622a4469a8b141b2dfee5a33fbc5cb9729372e0dcfa\", repo tag \"registry.k8s.io/kube-proxy:v1.29.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:124040dbe6b5294352355f5d34c692ecbc940cdc57a8fd06d0f38f76b6138906\", size \"28600769\" in 3.141787206s" Oct 8 20:20:08.981114 
containerd[1597]: time="2024-10-08T20:20:08.979798143Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.9\" returns image reference \"sha256:dd650d127e51776919ec1622a4469a8b141b2dfee5a33fbc5cb9729372e0dcfa\""
Oct 8 20:20:08.999290 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2170)
Oct 8 20:20:09.039838 containerd[1597]: time="2024-10-08T20:20:09.039794216Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Oct 8 20:20:09.068069 kubelet[2182]: E1008 20:20:09.067999 2182 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 8 20:20:09.072766 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 8 20:20:09.072925 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 8 20:20:09.092769 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2170)
Oct 8 20:20:09.645916 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3546092899.mount: Deactivated successfully.
Oct 8 20:20:10.853873 containerd[1597]: time="2024-10-08T20:20:10.853710554Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:20:10.855344 containerd[1597]: time="2024-10-08T20:20:10.855063588Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769"
Oct 8 20:20:10.856417 containerd[1597]: time="2024-10-08T20:20:10.856353706Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:20:10.859947 containerd[1597]: time="2024-10-08T20:20:10.859896263Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:20:10.861509 containerd[1597]: time="2024-10-08T20:20:10.861304351Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.821459379s"
Oct 8 20:20:10.861509 containerd[1597]: time="2024-10-08T20:20:10.861335960Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Oct 8 20:20:10.888069 containerd[1597]: time="2024-10-08T20:20:10.887173738Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Oct 8 20:20:11.421188 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2807771762.mount: Deactivated successfully.
Oct 8 20:20:11.430014 containerd[1597]: time="2024-10-08T20:20:11.429877301Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:20:11.431807 containerd[1597]: time="2024-10-08T20:20:11.431717209Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298"
Oct 8 20:20:11.433660 containerd[1597]: time="2024-10-08T20:20:11.433503656Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:20:11.440735 containerd[1597]: time="2024-10-08T20:20:11.440584182Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:20:11.443370 containerd[1597]: time="2024-10-08T20:20:11.442545828Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 554.324558ms"
Oct 8 20:20:11.443370 containerd[1597]: time="2024-10-08T20:20:11.442623152Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Oct 8 20:20:11.484470 containerd[1597]: time="2024-10-08T20:20:11.484403575Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Oct 8 20:20:12.142759 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount515137418.mount: Deactivated successfully.
Oct 8 20:20:15.587853 containerd[1597]: time="2024-10-08T20:20:15.587723284Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:20:15.590430 containerd[1597]: time="2024-10-08T20:20:15.590090060Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651633"
Oct 8 20:20:15.591578 containerd[1597]: time="2024-10-08T20:20:15.591515732Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:20:15.595130 containerd[1597]: time="2024-10-08T20:20:15.595049214Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:20:15.598658 containerd[1597]: time="2024-10-08T20:20:15.598621838Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 4.114152571s"
Oct 8 20:20:15.598720 containerd[1597]: time="2024-10-08T20:20:15.598662665Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\""
Oct 8 20:20:19.128140 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Oct 8 20:20:19.137070 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 20:20:19.517189 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 20:20:19.528355 (kubelet)[2366]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 8 20:20:19.628970 kubelet[2366]: E1008 20:20:19.625284 2366 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 8 20:20:19.629581 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 8 20:20:19.629761 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 8 20:20:20.484323 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 20:20:20.496502 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 20:20:20.527697 systemd[1]: Reloading requested from client PID 2384 ('systemctl') (unit session-11.scope)...
Oct 8 20:20:20.527735 systemd[1]: Reloading...
Oct 8 20:20:20.630991 zram_generator::config[2419]: No configuration found.
Oct 8 20:20:20.816097 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 8 20:20:20.892521 systemd[1]: Reloading finished in 363 ms.
Oct 8 20:20:20.936814 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Oct 8 20:20:20.937084 systemd[1]: kubelet.service: Failed with result 'signal'.
Oct 8 20:20:20.937565 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 20:20:20.948547 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 20:20:21.316678 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 20:20:21.331664 (kubelet)[2499]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Oct 8 20:20:21.392511 kubelet[2499]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 8 20:20:21.392511 kubelet[2499]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Oct 8 20:20:21.392511 kubelet[2499]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 8 20:20:21.393285 kubelet[2499]: I1008 20:20:21.392532 2499 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Oct 8 20:20:22.321304 kubelet[2499]: I1008 20:20:22.321190 2499 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Oct 8 20:20:22.321304 kubelet[2499]: I1008 20:20:22.321234 2499 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Oct 8 20:20:22.322350 kubelet[2499]: I1008 20:20:22.321460 2499 server.go:919] "Client rotation is on, will bootstrap in background"
Oct 8 20:20:22.353111 kubelet[2499]: I1008 20:20:22.352516 2499 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 8 20:20:22.354594 kubelet[2499]: E1008 20:20:22.354556 2499 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post
"https://172.24.4.55:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.55:6443: connect: connection refused
Oct 8 20:20:22.375032 kubelet[2499]: I1008 20:20:22.374939 2499 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Oct 8 20:20:22.376200 kubelet[2499]: I1008 20:20:22.376154 2499 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 8 20:20:22.378725 kubelet[2499]: I1008 20:20:22.378616 2499 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Oct 8 20:20:22.380029
kubelet[2499]: I1008 20:20:22.379948 2499 topology_manager.go:138] "Creating topology manager with none policy"
Oct 8 20:20:22.380079 kubelet[2499]: I1008 20:20:22.380035 2499 container_manager_linux.go:301] "Creating device plugin manager"
Oct 8 20:20:22.380362 kubelet[2499]: I1008 20:20:22.380318 2499 state_mem.go:36] "Initialized new in-memory state store"
Oct 8 20:20:22.380559 kubelet[2499]: I1008 20:20:22.380527 2499 kubelet.go:396] "Attempting to sync node with API server"
Oct 8 20:20:22.380605 kubelet[2499]: I1008 20:20:22.380572 2499 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 8 20:20:22.380973 kubelet[2499]: I1008 20:20:22.380644 2499 kubelet.go:312] "Adding apiserver pod source"
Oct 8 20:20:22.380973 kubelet[2499]: I1008 20:20:22.380706 2499 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 8 20:20:22.381338 kubelet[2499]: W1008 20:20:22.381295 2499 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.24.4.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-1-0-6-0b75032dd1.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.55:6443: connect: connection refused
Oct 8 20:20:22.381423 kubelet[2499]: E1008 20:20:22.381411 2499 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-1-0-6-0b75032dd1.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.55:6443: connect: connection refused
Oct 8 20:20:22.383945 kubelet[2499]: W1008 20:20:22.383861 2499 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.24.4.55:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.55:6443: connect: connection refused
Oct 8 20:20:22.384038 kubelet[2499]: E1008 20:20:22.384004 2499 reflector.go:147]
vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.55:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.55:6443: connect: connection refused
Oct 8 20:20:22.384248 kubelet[2499]: I1008 20:20:22.384214 2499 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Oct 8 20:20:22.394624 kubelet[2499]: I1008 20:20:22.394552 2499 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Oct 8 20:20:22.394975 kubelet[2499]: W1008 20:20:22.394725 2499 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Oct 8 20:20:22.396301 kubelet[2499]: I1008 20:20:22.396165 2499 server.go:1256] "Started kubelet"
Oct 8 20:20:22.399890 kubelet[2499]: I1008 20:20:22.399174 2499 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 8 20:20:22.404065 kubelet[2499]: E1008 20:20:22.404015 2499 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.55:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.55:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-1-0-6-0b75032dd1.novalocal.17fc93c2ec04c2d2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-1-0-6-0b75032dd1.novalocal,UID:ci-4081-1-0-6-0b75032dd1.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-1-0-6-0b75032dd1.novalocal,},FirstTimestamp:2024-10-08 20:20:22.39610133 +0000 UTC m=+1.060206244,LastTimestamp:2024-10-08 20:20:22.39610133 +0000 UTC m=+1.060206244,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-1-0-6-0b75032dd1.novalocal,}"
Oct 8 20:20:22.407066 kubelet[2499]: I1008 20:20:22.407035 2499 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Oct 8 20:20:22.409511 kubelet[2499]: I1008 20:20:22.409495 2499 server.go:461] "Adding debug handlers to kubelet server"
Oct 8 20:20:22.410771 kubelet[2499]: I1008 20:20:22.410739 2499 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 8 20:20:22.411081 kubelet[2499]: I1008 20:20:22.411070 2499 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 8 20:20:22.414555 kubelet[2499]: I1008 20:20:22.414226 2499 volume_manager.go:291] "Starting Kubelet Volume Manager"
Oct 8 20:20:22.415080 kubelet[2499]: I1008 20:20:22.415068 2499 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Oct 8 20:20:22.415209 kubelet[2499]: I1008 20:20:22.415199 2499 reconciler_new.go:29] "Reconciler: start to sync state"
Oct 8 20:20:22.416236 kubelet[2499]: W1008 20:20:22.416105 2499 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.24.4.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.55:6443: connect: connection refused
Oct 8 20:20:22.416576 kubelet[2499]: E1008 20:20:22.416562 2499 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.55:6443: connect: connection refused
Oct 8 20:20:22.417194 kubelet[2499]: E1008 20:20:22.417181 2499 controller.go:145] "Failed to ensure lease exists, will retry" err="Get
\"https://172.24.4.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-1-0-6-0b75032dd1.novalocal?timeout=10s\": dial tcp 172.24.4.55:6443: connect: connection refused" interval="200ms"
Oct 8 20:20:22.417918 kubelet[2499]: E1008 20:20:22.417783 2499 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Oct 8 20:20:22.418610 kubelet[2499]: I1008 20:20:22.418219 2499 factory.go:221] Registration of the systemd container factory successfully
Oct 8 20:20:22.418610 kubelet[2499]: I1008 20:20:22.418304 2499 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Oct 8 20:20:22.419822 kubelet[2499]: I1008 20:20:22.419798 2499 factory.go:221] Registration of the containerd container factory successfully
Oct 8 20:20:22.453915 kubelet[2499]: I1008 20:20:22.453872 2499 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Oct 8 20:20:22.455562 kubelet[2499]: I1008 20:20:22.455522 2499 kubelet_network_linux.go:50] "Initialized iptables rules."
protocol="IPv6"
Oct 8 20:20:22.455624 kubelet[2499]: I1008 20:20:22.455606 2499 status_manager.go:217] "Starting to sync pod status with apiserver"
Oct 8 20:20:22.455679 kubelet[2499]: I1008 20:20:22.455663 2499 kubelet.go:2329] "Starting kubelet main sync loop"
Oct 8 20:20:22.455757 kubelet[2499]: E1008 20:20:22.455740 2499 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Oct 8 20:20:22.458496 kubelet[2499]: W1008 20:20:22.458439 2499 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.24.4.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.55:6443: connect: connection refused
Oct 8 20:20:22.458496 kubelet[2499]: E1008 20:20:22.458474 2499 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.55:6443: connect: connection refused
Oct 8 20:20:22.460860 kubelet[2499]: I1008 20:20:22.460817 2499 cpu_manager.go:214] "Starting CPU manager" policy="none"
Oct 8 20:20:22.460860 kubelet[2499]: I1008 20:20:22.460837 2499 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Oct 8 20:20:22.460860 kubelet[2499]: I1008 20:20:22.460852 2499 state_mem.go:36] "Initialized new in-memory state store"
Oct 8 20:20:22.465399 kubelet[2499]: I1008 20:20:22.465367 2499 policy_none.go:49] "None policy: Start"
Oct 8 20:20:22.466174 kubelet[2499]: I1008 20:20:22.466149 2499 memory_manager.go:170] "Starting memorymanager" policy="None"
Oct 8 20:20:22.466222 kubelet[2499]: I1008 20:20:22.466197 2499 state_mem.go:35] "Initializing new in-memory state store"
Oct 8 20:20:22.472595 kubelet[2499]: I1008 20:20:22.471559 2499 manager.go:479] "Failed to read data from checkpoint"
checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Oct 8 20:20:22.472595 kubelet[2499]: I1008 20:20:22.471843 2499 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Oct 8 20:20:22.475002 kubelet[2499]: E1008 20:20:22.474869 2499 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-1-0-6-0b75032dd1.novalocal\" not found"
Oct 8 20:20:22.517485 kubelet[2499]: I1008 20:20:22.517389 2499 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-1-0-6-0b75032dd1.novalocal"
Oct 8 20:20:22.517826 kubelet[2499]: E1008 20:20:22.517787 2499 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.55:6443/api/v1/nodes\": dial tcp 172.24.4.55:6443: connect: connection refused" node="ci-4081-1-0-6-0b75032dd1.novalocal"
Oct 8 20:20:22.556684 kubelet[2499]: I1008 20:20:22.556220 2499 topology_manager.go:215] "Topology Admit Handler" podUID="dcc5af4539f1db18133e2e3492a4ee3e" podNamespace="kube-system" podName="kube-apiserver-ci-4081-1-0-6-0b75032dd1.novalocal"
Oct 8 20:20:22.561397 kubelet[2499]: I1008 20:20:22.560792 2499 topology_manager.go:215] "Topology Admit Handler" podUID="8744de58934a4363cb718d9c37226056" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-1-0-6-0b75032dd1.novalocal"
Oct 8 20:20:22.565585 kubelet[2499]: I1008 20:20:22.565242 2499 topology_manager.go:215] "Topology Admit Handler" podUID="328ff33da7610b6dc4d7888847bcef7b" podNamespace="kube-system" podName="kube-scheduler-ci-4081-1-0-6-0b75032dd1.novalocal"
Oct 8 20:20:22.618848 kubelet[2499]: E1008 20:20:22.618576 2499 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-1-0-6-0b75032dd1.novalocal?timeout=10s\": dial tcp 172.24.4.55:6443: connect: connection refused" interval="400ms"
Oct 8 20:20:22.626177 kubelet[2499]: I1008
20:20:22.626095 2499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8744de58934a4363cb718d9c37226056-ca-certs\") pod \"kube-controller-manager-ci-4081-1-0-6-0b75032dd1.novalocal\" (UID: \"8744de58934a4363cb718d9c37226056\") " pod="kube-system/kube-controller-manager-ci-4081-1-0-6-0b75032dd1.novalocal"
Oct 8 20:20:22.627015 kubelet[2499]: I1008 20:20:22.626224 2499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8744de58934a4363cb718d9c37226056-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-1-0-6-0b75032dd1.novalocal\" (UID: \"8744de58934a4363cb718d9c37226056\") " pod="kube-system/kube-controller-manager-ci-4081-1-0-6-0b75032dd1.novalocal"
Oct 8 20:20:22.627015 kubelet[2499]: I1008 20:20:22.626291 2499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8744de58934a4363cb718d9c37226056-k8s-certs\") pod \"kube-controller-manager-ci-4081-1-0-6-0b75032dd1.novalocal\" (UID: \"8744de58934a4363cb718d9c37226056\") " pod="kube-system/kube-controller-manager-ci-4081-1-0-6-0b75032dd1.novalocal"
Oct 8 20:20:22.627015 kubelet[2499]: I1008 20:20:22.626467 2499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8744de58934a4363cb718d9c37226056-kubeconfig\") pod \"kube-controller-manager-ci-4081-1-0-6-0b75032dd1.novalocal\" (UID: \"8744de58934a4363cb718d9c37226056\") " pod="kube-system/kube-controller-manager-ci-4081-1-0-6-0b75032dd1.novalocal"
Oct 8 20:20:22.627015 kubelet[2499]: I1008 20:20:22.626550 2499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName:
\"kubernetes.io/host-path/dcc5af4539f1db18133e2e3492a4ee3e-k8s-certs\") pod \"kube-apiserver-ci-4081-1-0-6-0b75032dd1.novalocal\" (UID: \"dcc5af4539f1db18133e2e3492a4ee3e\") " pod="kube-system/kube-apiserver-ci-4081-1-0-6-0b75032dd1.novalocal"
Oct 8 20:20:22.627310 kubelet[2499]: I1008 20:20:22.626630 2499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dcc5af4539f1db18133e2e3492a4ee3e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-1-0-6-0b75032dd1.novalocal\" (UID: \"dcc5af4539f1db18133e2e3492a4ee3e\") " pod="kube-system/kube-apiserver-ci-4081-1-0-6-0b75032dd1.novalocal"
Oct 8 20:20:22.627310 kubelet[2499]: I1008 20:20:22.626697 2499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8744de58934a4363cb718d9c37226056-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-1-0-6-0b75032dd1.novalocal\" (UID: \"8744de58934a4363cb718d9c37226056\") " pod="kube-system/kube-controller-manager-ci-4081-1-0-6-0b75032dd1.novalocal"
Oct 8 20:20:22.627310 kubelet[2499]: I1008 20:20:22.626759 2499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/328ff33da7610b6dc4d7888847bcef7b-kubeconfig\") pod \"kube-scheduler-ci-4081-1-0-6-0b75032dd1.novalocal\" (UID: \"328ff33da7610b6dc4d7888847bcef7b\") " pod="kube-system/kube-scheduler-ci-4081-1-0-6-0b75032dd1.novalocal"
Oct 8 20:20:22.627310 kubelet[2499]: I1008 20:20:22.626819 2499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dcc5af4539f1db18133e2e3492a4ee3e-ca-certs\") pod \"kube-apiserver-ci-4081-1-0-6-0b75032dd1.novalocal\" (UID: \"dcc5af4539f1db18133e2e3492a4ee3e\") "
pod="kube-system/kube-apiserver-ci-4081-1-0-6-0b75032dd1.novalocal"
Oct 8 20:20:22.721612 kubelet[2499]: I1008 20:20:22.721571 2499 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-1-0-6-0b75032dd1.novalocal"
Oct 8 20:20:22.722613 kubelet[2499]: E1008 20:20:22.722547 2499 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.55:6443/api/v1/nodes\": dial tcp 172.24.4.55:6443: connect: connection refused" node="ci-4081-1-0-6-0b75032dd1.novalocal"
Oct 8 20:20:22.880527 containerd[1597]: time="2024-10-08T20:20:22.880378482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-1-0-6-0b75032dd1.novalocal,Uid:dcc5af4539f1db18133e2e3492a4ee3e,Namespace:kube-system,Attempt:0,}"
Oct 8 20:20:22.882534 containerd[1597]: time="2024-10-08T20:20:22.882341060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-1-0-6-0b75032dd1.novalocal,Uid:8744de58934a4363cb718d9c37226056,Namespace:kube-system,Attempt:0,}"
Oct 8 20:20:22.898057 containerd[1597]: time="2024-10-08T20:20:22.897666429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-1-0-6-0b75032dd1.novalocal,Uid:328ff33da7610b6dc4d7888847bcef7b,Namespace:kube-system,Attempt:0,}"
Oct 8 20:20:23.020255 kubelet[2499]: E1008 20:20:23.020192 2499 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-1-0-6-0b75032dd1.novalocal?timeout=10s\": dial tcp 172.24.4.55:6443: connect: connection refused" interval="800ms"
Oct 8 20:20:23.126803 kubelet[2499]: I1008 20:20:23.126711 2499 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-1-0-6-0b75032dd1.novalocal"
Oct 8 20:20:23.127381 kubelet[2499]: E1008 20:20:23.127345 2499 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.55:6443/api/v1/nodes\":
dial tcp 172.24.4.55:6443: connect: connection refused" node="ci-4081-1-0-6-0b75032dd1.novalocal"
Oct 8 20:20:23.207640 kubelet[2499]: W1008 20:20:23.207361 2499 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.24.4.55:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.55:6443: connect: connection refused
Oct 8 20:20:23.207640 kubelet[2499]: E1008 20:20:23.207489 2499 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.55:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.55:6443: connect: connection refused
Oct 8 20:20:23.211830 kubelet[2499]: W1008 20:20:23.211668 2499 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.24.4.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-1-0-6-0b75032dd1.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.55:6443: connect: connection refused
Oct 8 20:20:23.211830 kubelet[2499]: E1008 20:20:23.211769 2499 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-1-0-6-0b75032dd1.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.55:6443: connect: connection refused
Oct 8 20:20:23.822101 kubelet[2499]: E1008 20:20:23.822032 2499 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-1-0-6-0b75032dd1.novalocal?timeout=10s\": dial tcp 172.24.4.55:6443: connect: connection refused" interval="1.6s"
Oct 8 20:20:23.913870 kubelet[2499]: W1008 20:20:23.913692 2499 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get
"https://172.24.4.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.55:6443: connect: connection refused
Oct 8 20:20:23.913870 kubelet[2499]: E1008 20:20:23.913825 2499 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.55:6443: connect: connection refused
Oct 8 20:20:23.931328 kubelet[2499]: I1008 20:20:23.931179 2499 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-1-0-6-0b75032dd1.novalocal"
Oct 8 20:20:23.931929 kubelet[2499]: E1008 20:20:23.931873 2499 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.55:6443/api/v1/nodes\": dial tcp 172.24.4.55:6443: connect: connection refused" node="ci-4081-1-0-6-0b75032dd1.novalocal"
Oct 8 20:20:23.969304 kubelet[2499]: W1008 20:20:23.969182 2499 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.24.4.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.55:6443: connect: connection refused
Oct 8 20:20:23.969304 kubelet[2499]: E1008 20:20:23.969265 2499 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.55:6443: connect: connection refused
Oct 8 20:20:24.129900 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2257783690.mount: Deactivated successfully.
Oct 8 20:20:24.145838 containerd[1597]: time="2024-10-08T20:20:24.145716705Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 8 20:20:24.148015 containerd[1597]: time="2024-10-08T20:20:24.147898416Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 8 20:20:24.150694 containerd[1597]: time="2024-10-08T20:20:24.150521143Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064"
Oct 8 20:20:24.150694 containerd[1597]: time="2024-10-08T20:20:24.150637310Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Oct 8 20:20:24.151921 containerd[1597]: time="2024-10-08T20:20:24.151713939Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 8 20:20:24.155052 containerd[1597]: time="2024-10-08T20:20:24.154819801Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Oct 8 20:20:24.159381 containerd[1597]: time="2024-10-08T20:20:24.159282998Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 8 20:20:24.166384 containerd[1597]: time="2024-10-08T20:20:24.165363939Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.284724427s"
Oct 8 20:20:24.169780 containerd[1597]: time="2024-10-08T20:20:24.167476018Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 8 20:20:24.173657 containerd[1597]: time="2024-10-08T20:20:24.173573188Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.291082858s"
Oct 8 20:20:24.175390 containerd[1597]: time="2024-10-08T20:20:24.175241336Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.27740069s"
Oct 8 20:20:24.454108 containerd[1597]: time="2024-10-08T20:20:24.453895268Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 8 20:20:24.454491 containerd[1597]: time="2024-10-08T20:20:24.454318733Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 8 20:20:24.454491 containerd[1597]: time="2024-10-08T20:20:24.454381239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 20:20:24.454690 containerd[1597]: time="2024-10-08T20:20:24.454646887Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 20:20:24.457565 containerd[1597]: time="2024-10-08T20:20:24.457301804Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 8 20:20:24.457565 containerd[1597]: time="2024-10-08T20:20:24.457371856Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 8 20:20:24.457565 containerd[1597]: time="2024-10-08T20:20:24.457390912Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 20:20:24.457565 containerd[1597]: time="2024-10-08T20:20:24.457502170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 20:20:24.473527 containerd[1597]: time="2024-10-08T20:20:24.471630527Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 8 20:20:24.473527 containerd[1597]: time="2024-10-08T20:20:24.471693605Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 8 20:20:24.473527 containerd[1597]: time="2024-10-08T20:20:24.471712381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 20:20:24.473527 containerd[1597]: time="2024-10-08T20:20:24.471846893Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 20:20:24.501323 kubelet[2499]: E1008 20:20:24.498779 2499 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.55:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.55:6443: connect: connection refused
Oct 8 20:20:24.565446 containerd[1597]: time="2024-10-08T20:20:24.565407090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-1-0-6-0b75032dd1.novalocal,Uid:8744de58934a4363cb718d9c37226056,Namespace:kube-system,Attempt:0,} returns sandbox id \"8fc54e016954c3c931efac3014947f9f4121103fe8a3bd857a4f415b2aba13bd\""
Oct 8 20:20:24.569711 containerd[1597]: time="2024-10-08T20:20:24.568601889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-1-0-6-0b75032dd1.novalocal,Uid:328ff33da7610b6dc4d7888847bcef7b,Namespace:kube-system,Attempt:0,} returns sandbox id \"3a414dd5f6fbcb02c93b05430766f4aae9833f94df02f12d46bdd0771209561b\""
Oct 8 20:20:24.573627 containerd[1597]: time="2024-10-08T20:20:24.573317399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-1-0-6-0b75032dd1.novalocal,Uid:dcc5af4539f1db18133e2e3492a4ee3e,Namespace:kube-system,Attempt:0,} returns sandbox id \"24691493cacb55d512760b43cb84f8b26fc10522a6b1b17f8e0a5e355430df34\""
Oct 8 20:20:24.575886 containerd[1597]: time="2024-10-08T20:20:24.575839006Z" level=info msg="CreateContainer within sandbox \"3a414dd5f6fbcb02c93b05430766f4aae9833f94df02f12d46bdd0771209561b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Oct 8 20:20:24.576501 containerd[1597]: time="2024-10-08T20:20:24.576371385Z" level=info msg="CreateContainer within sandbox \"8fc54e016954c3c931efac3014947f9f4121103fe8a3bd857a4f415b2aba13bd\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Oct 8 20:20:24.578194 containerd[1597]: time="2024-10-08T20:20:24.578161440Z" level=info msg="CreateContainer within sandbox \"24691493cacb55d512760b43cb84f8b26fc10522a6b1b17f8e0a5e355430df34\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Oct 8 20:20:24.628492 containerd[1597]: time="2024-10-08T20:20:24.628443203Z" level=info msg="CreateContainer within sandbox \"3a414dd5f6fbcb02c93b05430766f4aae9833f94df02f12d46bdd0771209561b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"cfbeb89b6bbfcd3d704ac4ac76c40dc14208720986aad76e33a631cc4684564c\""
Oct 8 20:20:24.629758 containerd[1597]: time="2024-10-08T20:20:24.629721059Z" level=info msg="StartContainer for \"cfbeb89b6bbfcd3d704ac4ac76c40dc14208720986aad76e33a631cc4684564c\""
Oct 8 20:20:24.637671 containerd[1597]: time="2024-10-08T20:20:24.637558061Z" level=info msg="CreateContainer within sandbox \"8fc54e016954c3c931efac3014947f9f4121103fe8a3bd857a4f415b2aba13bd\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"10d230b71687c100686c62d4c4ee747d420884d19aa6871b8e39eb135787ffd3\""
Oct 8 20:20:24.639165 containerd[1597]: time="2024-10-08T20:20:24.638224420Z" level=info msg="StartContainer for \"10d230b71687c100686c62d4c4ee747d420884d19aa6871b8e39eb135787ffd3\""
Oct 8 20:20:24.639701 containerd[1597]: time="2024-10-08T20:20:24.639679518Z" level=info msg="CreateContainer within sandbox \"24691493cacb55d512760b43cb84f8b26fc10522a6b1b17f8e0a5e355430df34\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"63a4d5be50daa57f35b7a0f0aff5b6eda9794f4d864cdf0d3a2bfa18eefc07f5\""
Oct 8 20:20:24.640301 containerd[1597]: time="2024-10-08T20:20:24.640261089Z" level=info msg="StartContainer for \"63a4d5be50daa57f35b7a0f0aff5b6eda9794f4d864cdf0d3a2bfa18eefc07f5\""
Oct 8 20:20:24.766079 containerd[1597]: time="2024-10-08T20:20:24.765809985Z" level=info msg="StartContainer for \"63a4d5be50daa57f35b7a0f0aff5b6eda9794f4d864cdf0d3a2bfa18eefc07f5\" returns successfully"
Oct 8 20:20:24.766079 containerd[1597]: time="2024-10-08T20:20:24.766004019Z" level=info msg="StartContainer for \"cfbeb89b6bbfcd3d704ac4ac76c40dc14208720986aad76e33a631cc4684564c\" returns successfully"
Oct 8 20:20:24.766079 containerd[1597]: time="2024-10-08T20:20:24.766038594Z" level=info msg="StartContainer for \"10d230b71687c100686c62d4c4ee747d420884d19aa6871b8e39eb135787ffd3\" returns successfully"
Oct 8 20:20:25.076372 kubelet[2499]: E1008 20:20:25.076254 2499 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.55:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.55:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-1-0-6-0b75032dd1.novalocal.17fc93c2ec04c2d2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-1-0-6-0b75032dd1.novalocal,UID:ci-4081-1-0-6-0b75032dd1.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-1-0-6-0b75032dd1.novalocal,},FirstTimestamp:2024-10-08 20:20:22.39610133 +0000 UTC m=+1.060206244,LastTimestamp:2024-10-08 20:20:22.39610133 +0000 UTC m=+1.060206244,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-1-0-6-0b75032dd1.novalocal,}"
Oct 8 20:20:25.101638 kubelet[2499]: W1008 20:20:25.100521 2499 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.24.4.55:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.55:6443: connect: connection refused
Oct 8 20:20:25.101638 kubelet[2499]: E1008 20:20:25.100561 2499 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.55:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.55:6443: connect: connection refused
Oct 8 20:20:25.537993 kubelet[2499]: I1008 20:20:25.536536 2499 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-1-0-6-0b75032dd1.novalocal"
Oct 8 20:20:28.015303 kubelet[2499]: I1008 20:20:28.015046 2499 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-1-0-6-0b75032dd1.novalocal"
Oct 8 20:20:28.202072 kubelet[2499]: E1008 20:20:28.201945 2499 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-1-0-6-0b75032dd1.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-1-0-6-0b75032dd1.novalocal"
Oct 8 20:20:28.385727 kubelet[2499]: I1008 20:20:28.385643 2499 apiserver.go:52] "Watching apiserver"
Oct 8 20:20:28.415484 kubelet[2499]: I1008 20:20:28.415398 2499 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Oct 8 20:20:31.264869 systemd[1]: Reloading requested from client PID 2776 ('systemctl') (unit session-11.scope)...
Oct 8 20:20:31.264906 systemd[1]: Reloading...
Oct 8 20:20:31.371000 zram_generator::config[2816]: No configuration found.
Oct 8 20:20:31.523606 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 8 20:20:31.609411 systemd[1]: Reloading finished in 343 ms.
Oct 8 20:20:31.647611 kubelet[2499]: I1008 20:20:31.647524 2499 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 8 20:20:31.648122 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 20:20:31.657294 systemd[1]: kubelet.service: Deactivated successfully.
Oct 8 20:20:31.658371 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 20:20:31.663209 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 20:20:32.009803 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 20:20:32.026637 (kubelet)[2889]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Oct 8 20:20:32.258015 kubelet[2889]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 8 20:20:32.258015 kubelet[2889]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Oct 8 20:20:32.258015 kubelet[2889]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 8 20:20:32.258541 kubelet[2889]: I1008 20:20:32.258058 2889 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Oct 8 20:20:32.264216 kubelet[2889]: I1008 20:20:32.263911 2889 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Oct 8 20:20:32.264216 kubelet[2889]: I1008 20:20:32.263968 2889 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Oct 8 20:20:32.264704 kubelet[2889]: I1008 20:20:32.264271 2889 server.go:919] "Client rotation is on, will bootstrap in background"
Oct 8 20:20:32.266189 kubelet[2889]: I1008 20:20:32.265878 2889 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Oct 8 20:20:32.288786 kubelet[2889]: I1008 20:20:32.288518 2889 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 8 20:20:32.314677 kubelet[2889]: I1008 20:20:32.314649 2889 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Oct 8 20:20:32.315847 kubelet[2889]: I1008 20:20:32.315548 2889 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 8 20:20:32.315847 kubelet[2889]: I1008 20:20:32.315788 2889 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Oct 8 20:20:32.316057 kubelet[2889]: I1008 20:20:32.316043 2889 topology_manager.go:138] "Creating topology manager with none policy"
Oct 8 20:20:32.316274 kubelet[2889]: I1008 20:20:32.316136 2889 container_manager_linux.go:301] "Creating device plugin manager"
Oct 8 20:20:32.316274 kubelet[2889]: I1008 20:20:32.316185 2889 state_mem.go:36] "Initialized new in-memory state store"
Oct 8 20:20:32.316394 kubelet[2889]: I1008 20:20:32.316382 2889 kubelet.go:396] "Attempting to sync node with API server"
Oct 8 20:20:32.316990 kubelet[2889]: I1008 20:20:32.316933 2889 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 8 20:20:32.317088 kubelet[2889]: I1008 20:20:32.317077 2889 kubelet.go:312] "Adding apiserver pod source"
Oct 8 20:20:32.317167 kubelet[2889]: I1008 20:20:32.317157 2889 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 8 20:20:32.320759 kubelet[2889]: I1008 20:20:32.320723 2889 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Oct 8 20:20:32.321152 kubelet[2889]: I1008 20:20:32.321136 2889 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Oct 8 20:20:32.321784 kubelet[2889]: I1008 20:20:32.321769 2889 server.go:1256] "Started kubelet"
Oct 8 20:20:32.331833 kubelet[2889]: I1008 20:20:32.331646 2889 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 8 20:20:32.340307 kubelet[2889]: I1008 20:20:32.338852 2889 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Oct 8 20:20:32.340307 kubelet[2889]: I1008 20:20:32.339671 2889 server.go:461] "Adding debug handlers to kubelet server"
Oct 8 20:20:32.345663 kubelet[2889]: I1008 20:20:32.345637 2889 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 8 20:20:32.346021 kubelet[2889]: I1008 20:20:32.345992 2889 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 8 20:20:32.348418 kubelet[2889]: I1008 20:20:32.347572 2889 volume_manager.go:291] "Starting Kubelet Volume Manager"
Oct 8 20:20:32.392775 kubelet[2889]: I1008 20:20:32.392536 2889 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Oct 8 20:20:32.397312 kubelet[2889]: I1008 20:20:32.397287 2889 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Oct 8 20:20:32.397476 kubelet[2889]: I1008 20:20:32.397461 2889 status_manager.go:217] "Starting to sync pod status with apiserver"
Oct 8 20:20:32.397564 kubelet[2889]: I1008 20:20:32.397554 2889 kubelet.go:2329] "Starting kubelet main sync loop"
Oct 8 20:20:32.397682 kubelet[2889]: E1008 20:20:32.397671 2889 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Oct 8 20:20:32.399501 kubelet[2889]: I1008 20:20:32.348565 2889 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Oct 8 20:20:32.401005 kubelet[2889]: I1008 20:20:32.400355 2889 reconciler_new.go:29] "Reconciler: start to sync state"
Oct 8 20:20:32.406998 kubelet[2889]: I1008 20:20:32.404582 2889 factory.go:221] Registration of the systemd container factory successfully
Oct 8 20:20:32.406998 kubelet[2889]: I1008 20:20:32.404704 2889 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Oct 8 20:20:32.425020 kubelet[2889]: I1008 20:20:32.423562 2889 factory.go:221] Registration of the containerd container factory successfully
Oct 8 20:20:32.428641 kubelet[2889]: E1008 20:20:32.428358 2889 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Oct 8 20:20:32.454741 kubelet[2889]: I1008 20:20:32.454033 2889 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-1-0-6-0b75032dd1.novalocal"
Oct 8 20:20:32.467390 kubelet[2889]: I1008 20:20:32.467357 2889 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081-1-0-6-0b75032dd1.novalocal"
Oct 8 20:20:32.470309 kubelet[2889]: I1008 20:20:32.470268 2889 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-1-0-6-0b75032dd1.novalocal"
Oct 8 20:20:32.499253 kubelet[2889]: E1008 20:20:32.499144 2889 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Oct 8 20:20:32.520121 kubelet[2889]: I1008 20:20:32.519840 2889 cpu_manager.go:214] "Starting CPU manager" policy="none"
Oct 8 20:20:32.520121 kubelet[2889]: I1008 20:20:32.519862 2889 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Oct 8 20:20:32.520121 kubelet[2889]: I1008 20:20:32.519879 2889 state_mem.go:36] "Initialized new in-memory state store"
Oct 8 20:20:32.521841 kubelet[2889]: I1008 20:20:32.520197 2889 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Oct 8 20:20:32.521841 kubelet[2889]: I1008 20:20:32.520223 2889 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Oct 8 20:20:32.521841 kubelet[2889]: I1008 20:20:32.520231 2889 policy_none.go:49] "None policy: Start"
Oct 8 20:20:32.523181 kubelet[2889]: I1008 20:20:32.522463 2889 memory_manager.go:170] "Starting memorymanager" policy="None"
Oct 8 20:20:32.523181 kubelet[2889]: I1008 20:20:32.522504 2889 state_mem.go:35] "Initializing new in-memory state store"
Oct 8 20:20:32.523181 kubelet[2889]: I1008 20:20:32.522713 2889 state_mem.go:75] "Updated machine memory state"
Oct 8 20:20:32.526548 kubelet[2889]: I1008 20:20:32.524625 2889 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Oct 8 20:20:32.526548 kubelet[2889]: I1008 20:20:32.525705 2889 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Oct 8 20:20:32.699809 kubelet[2889]: I1008 20:20:32.699771 2889 topology_manager.go:215] "Topology Admit Handler" podUID="dcc5af4539f1db18133e2e3492a4ee3e" podNamespace="kube-system" podName="kube-apiserver-ci-4081-1-0-6-0b75032dd1.novalocal"
Oct 8 20:20:32.700972 kubelet[2889]: I1008 20:20:32.700104 2889 topology_manager.go:215] "Topology Admit Handler" podUID="8744de58934a4363cb718d9c37226056" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-1-0-6-0b75032dd1.novalocal"
Oct 8 20:20:32.701174 kubelet[2889]: I1008 20:20:32.701162 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8744de58934a4363cb718d9c37226056-kubeconfig\") pod \"kube-controller-manager-ci-4081-1-0-6-0b75032dd1.novalocal\" (UID: \"8744de58934a4363cb718d9c37226056\") " pod="kube-system/kube-controller-manager-ci-4081-1-0-6-0b75032dd1.novalocal"
Oct 8 20:20:32.701269 kubelet[2889]: I1008 20:20:32.701259 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8744de58934a4363cb718d9c37226056-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-1-0-6-0b75032dd1.novalocal\" (UID: \"8744de58934a4363cb718d9c37226056\") " pod="kube-system/kube-controller-manager-ci-4081-1-0-6-0b75032dd1.novalocal"
Oct 8 20:20:32.701366 kubelet[2889]: I1008 20:20:32.701354 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dcc5af4539f1db18133e2e3492a4ee3e-ca-certs\") pod \"kube-apiserver-ci-4081-1-0-6-0b75032dd1.novalocal\" (UID: \"dcc5af4539f1db18133e2e3492a4ee3e\") " pod="kube-system/kube-apiserver-ci-4081-1-0-6-0b75032dd1.novalocal"
Oct 8 20:20:32.703021 kubelet[2889]: I1008 20:20:32.702998 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dcc5af4539f1db18133e2e3492a4ee3e-k8s-certs\") pod \"kube-apiserver-ci-4081-1-0-6-0b75032dd1.novalocal\" (UID: \"dcc5af4539f1db18133e2e3492a4ee3e\") " pod="kube-system/kube-apiserver-ci-4081-1-0-6-0b75032dd1.novalocal"
Oct 8 20:20:32.703187 kubelet[2889]: I1008 20:20:32.701455 2889 topology_manager.go:215] "Topology Admit Handler" podUID="328ff33da7610b6dc4d7888847bcef7b" podNamespace="kube-system" podName="kube-scheduler-ci-4081-1-0-6-0b75032dd1.novalocal"
Oct 8 20:20:32.703299 kubelet[2889]: I1008 20:20:32.703168 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dcc5af4539f1db18133e2e3492a4ee3e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-1-0-6-0b75032dd1.novalocal\" (UID: \"dcc5af4539f1db18133e2e3492a4ee3e\") " pod="kube-system/kube-apiserver-ci-4081-1-0-6-0b75032dd1.novalocal"
Oct 8 20:20:32.703412 kubelet[2889]: I1008 20:20:32.703398 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8744de58934a4363cb718d9c37226056-ca-certs\") pod \"kube-controller-manager-ci-4081-1-0-6-0b75032dd1.novalocal\" (UID: \"8744de58934a4363cb718d9c37226056\") " pod="kube-system/kube-controller-manager-ci-4081-1-0-6-0b75032dd1.novalocal"
Oct 8 20:20:32.703527 kubelet[2889]: I1008 20:20:32.703515 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8744de58934a4363cb718d9c37226056-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-1-0-6-0b75032dd1.novalocal\" (UID: \"8744de58934a4363cb718d9c37226056\") " pod="kube-system/kube-controller-manager-ci-4081-1-0-6-0b75032dd1.novalocal"
Oct 8 20:20:32.704024 kubelet[2889]: I1008 20:20:32.704011 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8744de58934a4363cb718d9c37226056-k8s-certs\") pod \"kube-controller-manager-ci-4081-1-0-6-0b75032dd1.novalocal\" (UID: \"8744de58934a4363cb718d9c37226056\") " pod="kube-system/kube-controller-manager-ci-4081-1-0-6-0b75032dd1.novalocal"
Oct 8 20:20:32.707504 kubelet[2889]: W1008 20:20:32.707469 2889 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Oct 8 20:20:32.712988 kubelet[2889]: W1008 20:20:32.711121 2889 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Oct 8 20:20:32.718840 kubelet[2889]: W1008 20:20:32.718808 2889 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Oct 8 20:20:32.806420 kubelet[2889]: I1008 20:20:32.805355 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/328ff33da7610b6dc4d7888847bcef7b-kubeconfig\") pod \"kube-scheduler-ci-4081-1-0-6-0b75032dd1.novalocal\" (UID: \"328ff33da7610b6dc4d7888847bcef7b\") " pod="kube-system/kube-scheduler-ci-4081-1-0-6-0b75032dd1.novalocal"
Oct 8 20:20:33.334132 kubelet[2889]: I1008 20:20:33.334066 2889 apiserver.go:52] "Watching apiserver"
Oct 8 20:20:33.400935 kubelet[2889]: I1008 20:20:33.400842 2889 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Oct 8 20:20:33.482988 kubelet[2889]: W1008 20:20:33.482322 2889 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Oct 8 20:20:33.482988 kubelet[2889]: E1008 20:20:33.482478 2889 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081-1-0-6-0b75032dd1.novalocal\" already exists" pod="kube-system/kube-controller-manager-ci-4081-1-0-6-0b75032dd1.novalocal"
Oct 8 20:20:33.530722 kubelet[2889]: I1008 20:20:33.530422 2889 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-1-0-6-0b75032dd1.novalocal" podStartSLOduration=1.5303543080000002 podStartE2EDuration="1.530354308s" podCreationTimestamp="2024-10-08 20:20:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 20:20:33.520599809 +0000 UTC m=+1.464549801" watchObservedRunningTime="2024-10-08 20:20:33.530354308 +0000 UTC m=+1.474304301"
Oct 8 20:20:33.541935 kubelet[2889]: I1008 20:20:33.541893 2889 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-1-0-6-0b75032dd1.novalocal" podStartSLOduration=1.541847439 podStartE2EDuration="1.541847439s" podCreationTimestamp="2024-10-08 20:20:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 20:20:33.531492704 +0000 UTC m=+1.475442686" watchObservedRunningTime="2024-10-08 20:20:33.541847439 +0000 UTC m=+1.485797421"
Oct 8 20:20:35.032942 kubelet[2889]: I1008 20:20:35.032895 2889 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-1-0-6-0b75032dd1.novalocal" podStartSLOduration=3.03285182 podStartE2EDuration="3.03285182s" podCreationTimestamp="2024-10-08 20:20:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 20:20:33.54267982 +0000 UTC m=+1.486629802" watchObservedRunningTime="2024-10-08 20:20:35.03285182 +0000 UTC m=+2.976801802"
Oct 8 20:20:38.156906 sudo[1889]: pam_unix(sudo:session): session closed for user root
Oct 8 20:20:38.329357 sshd[1882]: pam_unix(sshd:session): session closed for user core
Oct 8 20:20:38.336426 systemd[1]: sshd@8-172.24.4.55:22-172.24.4.1:55086.service: Deactivated successfully.
Oct 8 20:20:38.340083 systemd-logind[1572]: Session 11 logged out. Waiting for processes to exit.
Oct 8 20:20:38.340232 systemd[1]: session-11.scope: Deactivated successfully.
Oct 8 20:20:38.342241 systemd-logind[1572]: Removed session 11.
Oct 8 20:20:45.242502 kubelet[2889]: I1008 20:20:45.242419 2889 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Oct 8 20:20:45.244227 containerd[1597]: time="2024-10-08T20:20:45.244147509Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Oct 8 20:20:45.245152 kubelet[2889]: I1008 20:20:45.244649 2889 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Oct 8 20:20:46.195498 kubelet[2889]: I1008 20:20:46.195427 2889 topology_manager.go:215] "Topology Admit Handler" podUID="b3aff3c0-a27b-466e-9713-89f95c873fa4" podNamespace="kube-system" podName="kube-proxy-brkd6"
Oct 8 20:20:46.292240 kubelet[2889]: I1008 20:20:46.292195 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b3aff3c0-a27b-466e-9713-89f95c873fa4-xtables-lock\") pod \"kube-proxy-brkd6\" (UID: \"b3aff3c0-a27b-466e-9713-89f95c873fa4\") " pod="kube-system/kube-proxy-brkd6"
Oct 8 20:20:46.292673 kubelet[2889]: I1008 20:20:46.292255 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b3aff3c0-a27b-466e-9713-89f95c873fa4-kube-proxy\") pod \"kube-proxy-brkd6\" (UID: \"b3aff3c0-a27b-466e-9713-89f95c873fa4\") " pod="kube-system/kube-proxy-brkd6"
Oct 8 20:20:46.292673 kubelet[2889]: I1008 20:20:46.292285 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9lk7\" (UniqueName: \"kubernetes.io/projected/b3aff3c0-a27b-466e-9713-89f95c873fa4-kube-api-access-w9lk7\") pod \"kube-proxy-brkd6\" (UID: \"b3aff3c0-a27b-466e-9713-89f95c873fa4\") " pod="kube-system/kube-proxy-brkd6"
Oct 8 20:20:46.292673 kubelet[2889]: I1008 20:20:46.292309 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b3aff3c0-a27b-466e-9713-89f95c873fa4-lib-modules\") pod \"kube-proxy-brkd6\" (UID: \"b3aff3c0-a27b-466e-9713-89f95c873fa4\") " pod="kube-system/kube-proxy-brkd6"
Oct 8 20:20:46.487060 kubelet[2889]: I1008 20:20:46.486364 2889 topology_manager.go:215] "Topology Admit Handler" podUID="1a12a490-ce9e-4786-b5e8-b0c82d709aa4" podNamespace="tigera-operator" podName="tigera-operator-5d56685c77-rk2sj"
Oct 8 20:20:46.493813 kubelet[2889]: I1008 20:20:46.493615 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1a12a490-ce9e-4786-b5e8-b0c82d709aa4-var-lib-calico\") pod \"tigera-operator-5d56685c77-rk2sj\" (UID: \"1a12a490-ce9e-4786-b5e8-b0c82d709aa4\") " pod="tigera-operator/tigera-operator-5d56685c77-rk2sj"
Oct 8 20:20:46.493813 kubelet[2889]: I1008 20:20:46.493667 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tfb4\" (UniqueName: \"kubernetes.io/projected/1a12a490-ce9e-4786-b5e8-b0c82d709aa4-kube-api-access-9tfb4\") pod \"tigera-operator-5d56685c77-rk2sj\" (UID: \"1a12a490-ce9e-4786-b5e8-b0c82d709aa4\") " pod="tigera-operator/tigera-operator-5d56685c77-rk2sj"
Oct 8 20:20:46.514096 containerd[1597]: time="2024-10-08T20:20:46.514031881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-brkd6,Uid:b3aff3c0-a27b-466e-9713-89f95c873fa4,Namespace:kube-system,Attempt:0,}"
Oct 8 20:20:46.543455 containerd[1597]: time="2024-10-08T20:20:46.542874860Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 8 20:20:46.543455 containerd[1597]: time="2024-10-08T20:20:46.543053284Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 8 20:20:46.544187 containerd[1597]: time="2024-10-08T20:20:46.543883632Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 20:20:46.544187 containerd[1597]: time="2024-10-08T20:20:46.544101540Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 20:20:46.585553 containerd[1597]: time="2024-10-08T20:20:46.585503662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-brkd6,Uid:b3aff3c0-a27b-466e-9713-89f95c873fa4,Namespace:kube-system,Attempt:0,} returns sandbox id \"dcc844d8934f98e4bb1c96a423eba423eb869156ab048b709eeb188303a0596b\""
Oct 8 20:20:46.595461 containerd[1597]: time="2024-10-08T20:20:46.595338245Z" level=info msg="CreateContainer within sandbox \"dcc844d8934f98e4bb1c96a423eba423eb869156ab048b709eeb188303a0596b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Oct 8 20:20:46.628098 containerd[1597]: time="2024-10-08T20:20:46.628042075Z" level=info msg="CreateContainer within sandbox \"dcc844d8934f98e4bb1c96a423eba423eb869156ab048b709eeb188303a0596b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8003ea611cc388ead25fe185be3464eb1a447d0ddf05ae531076e55f9d9625a7\""
Oct 8 20:20:46.629492 containerd[1597]: time="2024-10-08T20:20:46.629439927Z" level=info msg="StartContainer for \"8003ea611cc388ead25fe185be3464eb1a447d0ddf05ae531076e55f9d9625a7\""
Oct 8 20:20:46.695785 containerd[1597]: time="2024-10-08T20:20:46.695685147Z" level=info msg="StartContainer for \"8003ea611cc388ead25fe185be3464eb1a447d0ddf05ae531076e55f9d9625a7\" returns successfully"
Oct 8 20:20:46.796931 containerd[1597]: time="2024-10-08T20:20:46.796666657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-rk2sj,Uid:1a12a490-ce9e-4786-b5e8-b0c82d709aa4,Namespace:tigera-operator,Attempt:0,}"
Oct 8 20:20:46.864551 containerd[1597]: time="2024-10-08T20:20:46.864369621Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 8 20:20:46.864551 containerd[1597]: time="2024-10-08T20:20:46.864484306Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 8 20:20:46.864735 containerd[1597]: time="2024-10-08T20:20:46.864555139Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 20:20:46.864818 containerd[1597]: time="2024-10-08T20:20:46.864750776Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 20:20:46.971102 containerd[1597]: time="2024-10-08T20:20:46.970925407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-rk2sj,Uid:1a12a490-ce9e-4786-b5e8-b0c82d709aa4,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b23fe3a9361ea38a2722fcece1f49904cf4c47c0e37e18bd51dd3a42bf20a7ec\""
Oct 8 20:20:46.980198 containerd[1597]: time="2024-10-08T20:20:46.980137211Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\""
Oct 8 20:20:48.741283 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2098594270.mount: Deactivated successfully.
Oct 8 20:20:49.421534 containerd[1597]: time="2024-10-08T20:20:49.421478686Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:20:49.422860 containerd[1597]: time="2024-10-08T20:20:49.422822705Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.3: active requests=0, bytes read=22136517" Oct 8 20:20:49.424525 containerd[1597]: time="2024-10-08T20:20:49.424479653Z" level=info msg="ImageCreate event name:\"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:20:49.430602 containerd[1597]: time="2024-10-08T20:20:49.430555687Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:20:49.431483 containerd[1597]: time="2024-10-08T20:20:49.431369854Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.3\" with image id \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\", repo tag \"quay.io/tigera/operator:v1.34.3\", repo digest \"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\", size \"22130728\" in 2.451185214s" Oct 8 20:20:49.431483 containerd[1597]: time="2024-10-08T20:20:49.431399410Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\" returns image reference \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\"" Oct 8 20:20:49.433458 containerd[1597]: time="2024-10-08T20:20:49.433420681Z" level=info msg="CreateContainer within sandbox \"b23fe3a9361ea38a2722fcece1f49904cf4c47c0e37e18bd51dd3a42bf20a7ec\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 8 20:20:49.468016 containerd[1597]: time="2024-10-08T20:20:49.467944935Z" level=info msg="CreateContainer within sandbox 
\"b23fe3a9361ea38a2722fcece1f49904cf4c47c0e37e18bd51dd3a42bf20a7ec\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a5f9cc8671e82ea89a250aa5b0f77211b5916dfb02b0fb1100d86b534125b427\"" Oct 8 20:20:49.469364 containerd[1597]: time="2024-10-08T20:20:49.468489647Z" level=info msg="StartContainer for \"a5f9cc8671e82ea89a250aa5b0f77211b5916dfb02b0fb1100d86b534125b427\"" Oct 8 20:20:49.733250 containerd[1597]: time="2024-10-08T20:20:49.733017016Z" level=info msg="StartContainer for \"a5f9cc8671e82ea89a250aa5b0f77211b5916dfb02b0fb1100d86b534125b427\" returns successfully" Oct 8 20:20:50.541750 kubelet[2889]: I1008 20:20:50.541353 2889 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-brkd6" podStartSLOduration=4.541251014 podStartE2EDuration="4.541251014s" podCreationTimestamp="2024-10-08 20:20:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 20:20:47.54194524 +0000 UTC m=+15.485895232" watchObservedRunningTime="2024-10-08 20:20:50.541251014 +0000 UTC m=+18.485201046" Oct 8 20:20:50.544668 kubelet[2889]: I1008 20:20:50.541577 2889 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5d56685c77-rk2sj" podStartSLOduration=2.085597049 podStartE2EDuration="4.541519537s" podCreationTimestamp="2024-10-08 20:20:46 +0000 UTC" firstStartedPulling="2024-10-08 20:20:46.975853017 +0000 UTC m=+14.919802999" lastFinishedPulling="2024-10-08 20:20:49.431775505 +0000 UTC m=+17.375725487" observedRunningTime="2024-10-08 20:20:50.53880675 +0000 UTC m=+18.482756782" watchObservedRunningTime="2024-10-08 20:20:50.541519537 +0000 UTC m=+18.485469569" Oct 8 20:20:52.973863 kubelet[2889]: I1008 20:20:52.973809 2889 topology_manager.go:215] "Topology Admit Handler" podUID="24c3e683-d0a4-490f-94ad-b8e15da98eff" podNamespace="calico-system" 
podName="calico-typha-665c96b69d-mhpls" Oct 8 20:20:53.138000 kubelet[2889]: I1008 20:20:53.137936 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x78nb\" (UniqueName: \"kubernetes.io/projected/24c3e683-d0a4-490f-94ad-b8e15da98eff-kube-api-access-x78nb\") pod \"calico-typha-665c96b69d-mhpls\" (UID: \"24c3e683-d0a4-490f-94ad-b8e15da98eff\") " pod="calico-system/calico-typha-665c96b69d-mhpls" Oct 8 20:20:53.138374 kubelet[2889]: I1008 20:20:53.138179 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24c3e683-d0a4-490f-94ad-b8e15da98eff-tigera-ca-bundle\") pod \"calico-typha-665c96b69d-mhpls\" (UID: \"24c3e683-d0a4-490f-94ad-b8e15da98eff\") " pod="calico-system/calico-typha-665c96b69d-mhpls" Oct 8 20:20:53.138374 kubelet[2889]: I1008 20:20:53.138223 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/24c3e683-d0a4-490f-94ad-b8e15da98eff-typha-certs\") pod \"calico-typha-665c96b69d-mhpls\" (UID: \"24c3e683-d0a4-490f-94ad-b8e15da98eff\") " pod="calico-system/calico-typha-665c96b69d-mhpls" Oct 8 20:20:53.171100 kubelet[2889]: I1008 20:20:53.171065 2889 topology_manager.go:215] "Topology Admit Handler" podUID="96f4b535-e727-427c-be1e-2b6976c96505" podNamespace="calico-system" podName="calico-node-vswfv" Oct 8 20:20:53.241931 kubelet[2889]: I1008 20:20:53.239350 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/96f4b535-e727-427c-be1e-2b6976c96505-lib-modules\") pod \"calico-node-vswfv\" (UID: \"96f4b535-e727-427c-be1e-2b6976c96505\") " pod="calico-system/calico-node-vswfv" Oct 8 20:20:53.241931 kubelet[2889]: I1008 20:20:53.239392 2889 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/96f4b535-e727-427c-be1e-2b6976c96505-var-run-calico\") pod \"calico-node-vswfv\" (UID: \"96f4b535-e727-427c-be1e-2b6976c96505\") " pod="calico-system/calico-node-vswfv" Oct 8 20:20:53.241931 kubelet[2889]: I1008 20:20:53.239433 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/96f4b535-e727-427c-be1e-2b6976c96505-var-lib-calico\") pod \"calico-node-vswfv\" (UID: \"96f4b535-e727-427c-be1e-2b6976c96505\") " pod="calico-system/calico-node-vswfv" Oct 8 20:20:53.241931 kubelet[2889]: I1008 20:20:53.239457 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/96f4b535-e727-427c-be1e-2b6976c96505-cni-net-dir\") pod \"calico-node-vswfv\" (UID: \"96f4b535-e727-427c-be1e-2b6976c96505\") " pod="calico-system/calico-node-vswfv" Oct 8 20:20:53.241931 kubelet[2889]: I1008 20:20:53.239479 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/96f4b535-e727-427c-be1e-2b6976c96505-node-certs\") pod \"calico-node-vswfv\" (UID: \"96f4b535-e727-427c-be1e-2b6976c96505\") " pod="calico-system/calico-node-vswfv" Oct 8 20:20:53.242163 kubelet[2889]: I1008 20:20:53.239503 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/96f4b535-e727-427c-be1e-2b6976c96505-cni-log-dir\") pod \"calico-node-vswfv\" (UID: \"96f4b535-e727-427c-be1e-2b6976c96505\") " pod="calico-system/calico-node-vswfv" Oct 8 20:20:53.242163 kubelet[2889]: I1008 20:20:53.239527 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" 
(UniqueName: \"kubernetes.io/host-path/96f4b535-e727-427c-be1e-2b6976c96505-flexvol-driver-host\") pod \"calico-node-vswfv\" (UID: \"96f4b535-e727-427c-be1e-2b6976c96505\") " pod="calico-system/calico-node-vswfv" Oct 8 20:20:53.242163 kubelet[2889]: I1008 20:20:53.239575 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/96f4b535-e727-427c-be1e-2b6976c96505-xtables-lock\") pod \"calico-node-vswfv\" (UID: \"96f4b535-e727-427c-be1e-2b6976c96505\") " pod="calico-system/calico-node-vswfv" Oct 8 20:20:53.242163 kubelet[2889]: I1008 20:20:53.239599 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/96f4b535-e727-427c-be1e-2b6976c96505-policysync\") pod \"calico-node-vswfv\" (UID: \"96f4b535-e727-427c-be1e-2b6976c96505\") " pod="calico-system/calico-node-vswfv" Oct 8 20:20:53.242163 kubelet[2889]: I1008 20:20:53.239636 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/96f4b535-e727-427c-be1e-2b6976c96505-cni-bin-dir\") pod \"calico-node-vswfv\" (UID: \"96f4b535-e727-427c-be1e-2b6976c96505\") " pod="calico-system/calico-node-vswfv" Oct 8 20:20:53.242337 kubelet[2889]: I1008 20:20:53.239673 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/96f4b535-e727-427c-be1e-2b6976c96505-tigera-ca-bundle\") pod \"calico-node-vswfv\" (UID: \"96f4b535-e727-427c-be1e-2b6976c96505\") " pod="calico-system/calico-node-vswfv" Oct 8 20:20:53.242337 kubelet[2889]: I1008 20:20:53.239701 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdcxm\" (UniqueName: 
\"kubernetes.io/projected/96f4b535-e727-427c-be1e-2b6976c96505-kube-api-access-zdcxm\") pod \"calico-node-vswfv\" (UID: \"96f4b535-e727-427c-be1e-2b6976c96505\") " pod="calico-system/calico-node-vswfv" Oct 8 20:20:53.287993 containerd[1597]: time="2024-10-08T20:20:53.287934834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-665c96b69d-mhpls,Uid:24c3e683-d0a4-490f-94ad-b8e15da98eff,Namespace:calico-system,Attempt:0,}" Oct 8 20:20:53.298544 kubelet[2889]: I1008 20:20:53.298483 2889 topology_manager.go:215] "Topology Admit Handler" podUID="a5649738-e837-4158-bf1c-576a5e896847" podNamespace="calico-system" podName="csi-node-driver-cbc7k" Oct 8 20:20:53.300286 kubelet[2889]: E1008 20:20:53.298929 2889 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cbc7k" podUID="a5649738-e837-4158-bf1c-576a5e896847" Oct 8 20:20:53.341543 kubelet[2889]: I1008 20:20:53.341502 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a5649738-e837-4158-bf1c-576a5e896847-kubelet-dir\") pod \"csi-node-driver-cbc7k\" (UID: \"a5649738-e837-4158-bf1c-576a5e896847\") " pod="calico-system/csi-node-driver-cbc7k" Oct 8 20:20:53.341543 kubelet[2889]: I1008 20:20:53.341552 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a5649738-e837-4158-bf1c-576a5e896847-socket-dir\") pod \"csi-node-driver-cbc7k\" (UID: \"a5649738-e837-4158-bf1c-576a5e896847\") " pod="calico-system/csi-node-driver-cbc7k" Oct 8 20:20:53.341738 kubelet[2889]: I1008 20:20:53.341581 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a5649738-e837-4158-bf1c-576a5e896847-registration-dir\") pod \"csi-node-driver-cbc7k\" (UID: \"a5649738-e837-4158-bf1c-576a5e896847\") " pod="calico-system/csi-node-driver-cbc7k" Oct 8 20:20:53.341738 kubelet[2889]: I1008 20:20:53.341629 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/a5649738-e837-4158-bf1c-576a5e896847-varrun\") pod \"csi-node-driver-cbc7k\" (UID: \"a5649738-e837-4158-bf1c-576a5e896847\") " pod="calico-system/csi-node-driver-cbc7k" Oct 8 20:20:53.345992 kubelet[2889]: I1008 20:20:53.345345 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6xxj\" (UniqueName: \"kubernetes.io/projected/a5649738-e837-4158-bf1c-576a5e896847-kube-api-access-b6xxj\") pod \"csi-node-driver-cbc7k\" (UID: \"a5649738-e837-4158-bf1c-576a5e896847\") " pod="calico-system/csi-node-driver-cbc7k" Oct 8 20:20:53.368728 kubelet[2889]: E1008 20:20:53.367302 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:20:53.368728 kubelet[2889]: W1008 20:20:53.367339 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:20:53.368728 kubelet[2889]: E1008 20:20:53.367364 2889 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:20:53.369477 kubelet[2889]: E1008 20:20:53.368743 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:20:53.369477 kubelet[2889]: W1008 20:20:53.368753 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:20:53.369477 kubelet[2889]: E1008 20:20:53.368768 2889 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:20:53.374456 containerd[1597]: time="2024-10-08T20:20:53.365569930Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:20:53.374456 containerd[1597]: time="2024-10-08T20:20:53.365654839Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:20:53.374456 containerd[1597]: time="2024-10-08T20:20:53.365674295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:20:53.374456 containerd[1597]: time="2024-10-08T20:20:53.365766308Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:20:53.391335 kubelet[2889]: E1008 20:20:53.391297 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:20:53.391476 kubelet[2889]: W1008 20:20:53.391461 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:20:53.391602 kubelet[2889]: E1008 20:20:53.391537 2889 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:20:53.413996 kubelet[2889]: E1008 20:20:53.412515 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:20:53.413996 kubelet[2889]: W1008 20:20:53.412540 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:20:53.413996 kubelet[2889]: E1008 20:20:53.412563 2889 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:20:53.447502 kubelet[2889]: E1008 20:20:53.447474 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:20:53.447502 kubelet[2889]: W1008 20:20:53.447495 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:20:53.447794 kubelet[2889]: E1008 20:20:53.447519 2889 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:20:53.447794 kubelet[2889]: E1008 20:20:53.447698 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:20:53.447794 kubelet[2889]: W1008 20:20:53.447707 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:20:53.447794 kubelet[2889]: E1008 20:20:53.447722 2889 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:20:53.447947 kubelet[2889]: E1008 20:20:53.447865 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:20:53.447947 kubelet[2889]: W1008 20:20:53.447873 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:20:53.447947 kubelet[2889]: E1008 20:20:53.447885 2889 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:20:53.448482 kubelet[2889]: E1008 20:20:53.448039 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:20:53.448482 kubelet[2889]: W1008 20:20:53.448047 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:20:53.448482 kubelet[2889]: E1008 20:20:53.448060 2889 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:20:53.449309 kubelet[2889]: E1008 20:20:53.449016 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:20:53.449309 kubelet[2889]: W1008 20:20:53.449031 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:20:53.449309 kubelet[2889]: E1008 20:20:53.449052 2889 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:20:53.449620 kubelet[2889]: E1008 20:20:53.449553 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:20:53.449620 kubelet[2889]: W1008 20:20:53.449567 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:20:53.449620 kubelet[2889]: E1008 20:20:53.449580 2889 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:20:53.449920 kubelet[2889]: E1008 20:20:53.449734 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:20:53.449920 kubelet[2889]: W1008 20:20:53.449742 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:20:53.449920 kubelet[2889]: E1008 20:20:53.449821 2889 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:20:53.451737 kubelet[2889]: E1008 20:20:53.451720 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:20:53.451737 kubelet[2889]: W1008 20:20:53.451734 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:20:53.451893 kubelet[2889]: E1008 20:20:53.451833 2889 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:20:53.452387 kubelet[2889]: E1008 20:20:53.452369 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:20:53.452387 kubelet[2889]: W1008 20:20:53.452383 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:20:53.452568 kubelet[2889]: E1008 20:20:53.452498 2889 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:20:53.453635 kubelet[2889]: E1008 20:20:53.453614 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:20:53.453635 kubelet[2889]: W1008 20:20:53.453628 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:20:53.453766 kubelet[2889]: E1008 20:20:53.453728 2889 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:20:53.454471 kubelet[2889]: E1008 20:20:53.454453 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:20:53.454471 kubelet[2889]: W1008 20:20:53.454467 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:20:53.454598 kubelet[2889]: E1008 20:20:53.454578 2889 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:20:53.455287 kubelet[2889]: E1008 20:20:53.455269 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:20:53.455287 kubelet[2889]: W1008 20:20:53.455283 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:20:53.455680 kubelet[2889]: E1008 20:20:53.455634 2889 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:20:53.477625 containerd[1597]: time="2024-10-08T20:20:53.476997107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vswfv,Uid:96f4b535-e727-427c-be1e-2b6976c96505,Namespace:calico-system,Attempt:0,}" Oct 8 20:20:53.495307 containerd[1597]: time="2024-10-08T20:20:53.495060363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-665c96b69d-mhpls,Uid:24c3e683-d0a4-490f-94ad-b8e15da98eff,Namespace:calico-system,Attempt:0,} returns sandbox id \"b40461e4d1d791461ce1c1d56aa83c6b6b61ef16eee5a76062240d38d3713157\"" Oct 8 20:20:53.503263 containerd[1597]: time="2024-10-08T20:20:53.502607006Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\"" Oct 8 20:20:53.526092 containerd[1597]: time="2024-10-08T20:20:53.525879631Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:20:53.526306 containerd[1597]: time="2024-10-08T20:20:53.526097780Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:20:53.526306 containerd[1597]: time="2024-10-08T20:20:53.526158113Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:20:53.526672 containerd[1597]: time="2024-10-08T20:20:53.526358088Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:20:53.599214 containerd[1597]: time="2024-10-08T20:20:53.599118421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vswfv,Uid:96f4b535-e727-427c-be1e-2b6976c96505,Namespace:calico-system,Attempt:0,} returns sandbox id \"6005450acf4c948cc1a916591f25a373433835911d76d4b23315d9fa9f7ea18c\"" Oct 8 20:20:55.398273 kubelet[2889]: E1008 20:20:55.398238 2889 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cbc7k" podUID="a5649738-e837-4158-bf1c-576a5e896847" Oct 8 20:20:57.398459 kubelet[2889]: E1008 20:20:57.398381 2889 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cbc7k" podUID="a5649738-e837-4158-bf1c-576a5e896847" Oct 8 20:20:59.400043 kubelet[2889]: E1008 20:20:59.398939 2889 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cbc7k" podUID="a5649738-e837-4158-bf1c-576a5e896847" Oct 8 20:21:01.400284 kubelet[2889]: E1008 20:21:01.398382 2889 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cbc7k" podUID="a5649738-e837-4158-bf1c-576a5e896847" Oct 8 20:21:03.398774 kubelet[2889]: E1008 20:21:03.398694 2889 pod_workers.go:1298] "Error 
syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cbc7k" podUID="a5649738-e837-4158-bf1c-576a5e896847" Oct 8 20:21:05.400608 kubelet[2889]: E1008 20:21:05.400531 2889 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cbc7k" podUID="a5649738-e837-4158-bf1c-576a5e896847" Oct 8 20:21:07.398454 kubelet[2889]: E1008 20:21:07.398185 2889 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cbc7k" podUID="a5649738-e837-4158-bf1c-576a5e896847" Oct 8 20:21:09.399033 kubelet[2889]: E1008 20:21:09.398699 2889 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cbc7k" podUID="a5649738-e837-4158-bf1c-576a5e896847" Oct 8 20:21:09.756185 containerd[1597]: time="2024-10-08T20:21:09.756063319Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:21:09.758121 containerd[1597]: time="2024-10-08T20:21:09.758070270Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.1: active requests=0, bytes read=29471335" Oct 8 20:21:09.759219 containerd[1597]: time="2024-10-08T20:21:09.759157790Z" level=info msg="ImageCreate event 
name:\"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:21:09.761567 containerd[1597]: time="2024-10-08T20:21:09.761545010Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:21:09.770224 containerd[1597]: time="2024-10-08T20:21:09.770115230Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.1\" with image id \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\", size \"30963728\" in 16.267461457s" Oct 8 20:21:09.770224 containerd[1597]: time="2024-10-08T20:21:09.770147472Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\" returns image reference \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\"" Oct 8 20:21:09.772349 containerd[1597]: time="2024-10-08T20:21:09.771630460Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\"" Oct 8 20:21:09.785885 containerd[1597]: time="2024-10-08T20:21:09.785837865Z" level=info msg="CreateContainer within sandbox \"b40461e4d1d791461ce1c1d56aa83c6b6b61ef16eee5a76062240d38d3713157\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 8 20:21:09.804029 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3437431611.mount: Deactivated successfully. 
Oct 8 20:21:09.817463 containerd[1597]: time="2024-10-08T20:21:09.817331657Z" level=info msg="CreateContainer within sandbox \"b40461e4d1d791461ce1c1d56aa83c6b6b61ef16eee5a76062240d38d3713157\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"e47e1c4b2f7f252bbd9a426249b25d77ff4f88bcac4fe4fff60ffb5e71b87c23\"" Oct 8 20:21:09.818351 containerd[1597]: time="2024-10-08T20:21:09.818222845Z" level=info msg="StartContainer for \"e47e1c4b2f7f252bbd9a426249b25d77ff4f88bcac4fe4fff60ffb5e71b87c23\"" Oct 8 20:21:09.925560 containerd[1597]: time="2024-10-08T20:21:09.925509912Z" level=info msg="StartContainer for \"e47e1c4b2f7f252bbd9a426249b25d77ff4f88bcac4fe4fff60ffb5e71b87c23\" returns successfully" Oct 8 20:21:10.645031 kubelet[2889]: I1008 20:21:10.644845 2889 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-665c96b69d-mhpls" podStartSLOduration=2.374897759 podStartE2EDuration="18.644574018s" podCreationTimestamp="2024-10-08 20:20:52 +0000 UTC" firstStartedPulling="2024-10-08 20:20:53.500848598 +0000 UTC m=+21.444798590" lastFinishedPulling="2024-10-08 20:21:09.770524866 +0000 UTC m=+37.714474849" observedRunningTime="2024-10-08 20:21:10.637410375 +0000 UTC m=+38.581360417" watchObservedRunningTime="2024-10-08 20:21:10.644574018 +0000 UTC m=+38.588524050" Oct 8 20:21:10.669004 kubelet[2889]: E1008 20:21:10.668879 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:21:10.669251 kubelet[2889]: W1008 20:21:10.668948 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:21:10.669251 kubelet[2889]: E1008 20:21:10.669068 2889 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" 
Error: unexpected end of JSON input" Oct 8 20:21:10.686766 kubelet[2889]: E1008 20:21:10.686484 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:21:10.686766 kubelet[2889]: W1008 20:21:10.686509 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:21:10.686766 kubelet[2889]: E1008 20:21:10.686564 2889 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:21:10.687413 kubelet[2889]: E1008 20:21:10.687280 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:21:10.687413 kubelet[2889]: W1008 20:21:10.687305 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:21:10.687413 kubelet[2889]: E1008 20:21:10.687389 2889 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:21:10.688418 kubelet[2889]: E1008 20:21:10.688115 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:21:10.688418 kubelet[2889]: W1008 20:21:10.688143 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:21:10.688418 kubelet[2889]: E1008 20:21:10.688207 2889 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:21:10.688930 kubelet[2889]: E1008 20:21:10.688808 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:21:10.688930 kubelet[2889]: W1008 20:21:10.688834 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:21:10.688930 kubelet[2889]: E1008 20:21:10.688885 2889 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:21:10.689252 kubelet[2889]: E1008 20:21:10.689232 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:21:10.689324 kubelet[2889]: W1008 20:21:10.689255 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:21:10.689324 kubelet[2889]: E1008 20:21:10.689287 2889 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:21:10.690289 kubelet[2889]: E1008 20:21:10.690255 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:21:10.690289 kubelet[2889]: W1008 20:21:10.690285 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:21:10.690481 kubelet[2889]: E1008 20:21:10.690316 2889 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:21:11.399538 kubelet[2889]: E1008 20:21:11.399024 2889 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cbc7k" podUID="a5649738-e837-4158-bf1c-576a5e896847" Oct 8 20:21:11.605748 kubelet[2889]: I1008 20:21:11.604760 2889 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 8 20:21:11.682592 kubelet[2889]: E1008 20:21:11.682188 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:21:11.682592 kubelet[2889]: W1008 20:21:11.682225 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:21:11.682592 kubelet[2889]: E1008 20:21:11.682247 2889 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:21:11.683477 kubelet[2889]: E1008 20:21:11.682794 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:21:11.683477 kubelet[2889]: W1008 20:21:11.682833 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:21:11.683477 kubelet[2889]: E1008 20:21:11.682848 2889 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:21:11.683477 kubelet[2889]: E1008 20:21:11.683163 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:21:11.683477 kubelet[2889]: W1008 20:21:11.683174 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:21:11.683477 kubelet[2889]: E1008 20:21:11.683187 2889 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:21:11.683477 kubelet[2889]: E1008 20:21:11.683384 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:21:11.683477 kubelet[2889]: W1008 20:21:11.683400 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:21:11.683477 kubelet[2889]: E1008 20:21:11.683412 2889 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:21:11.683974 kubelet[2889]: E1008 20:21:11.683631 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:21:11.683974 kubelet[2889]: W1008 20:21:11.683640 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:21:11.683974 kubelet[2889]: E1008 20:21:11.683653 2889 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:21:11.684104 kubelet[2889]: E1008 20:21:11.684028 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:21:11.684104 kubelet[2889]: W1008 20:21:11.684043 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:21:11.684104 kubelet[2889]: E1008 20:21:11.684059 2889 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:21:11.684476 kubelet[2889]: E1008 20:21:11.684461 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:21:11.684476 kubelet[2889]: W1008 20:21:11.684474 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:21:11.684539 kubelet[2889]: E1008 20:21:11.684489 2889 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:21:11.684746 kubelet[2889]: E1008 20:21:11.684731 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:21:11.684746 kubelet[2889]: W1008 20:21:11.684746 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:21:11.684826 kubelet[2889]: E1008 20:21:11.684760 2889 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:21:11.685041 kubelet[2889]: E1008 20:21:11.685017 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:21:11.685041 kubelet[2889]: W1008 20:21:11.685031 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:21:11.685041 kubelet[2889]: E1008 20:21:11.685043 2889 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:21:11.685321 kubelet[2889]: E1008 20:21:11.685288 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:21:11.685321 kubelet[2889]: W1008 20:21:11.685320 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:21:11.685397 kubelet[2889]: E1008 20:21:11.685333 2889 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:21:11.685526 kubelet[2889]: E1008 20:21:11.685512 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:21:11.685526 kubelet[2889]: W1008 20:21:11.685525 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:21:11.685619 kubelet[2889]: E1008 20:21:11.685604 2889 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:21:11.685817 kubelet[2889]: E1008 20:21:11.685802 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:21:11.685857 kubelet[2889]: W1008 20:21:11.685827 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:21:11.685857 kubelet[2889]: E1008 20:21:11.685840 2889 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:21:11.686041 kubelet[2889]: E1008 20:21:11.686027 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:21:11.686041 kubelet[2889]: W1008 20:21:11.686040 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:21:11.689990 kubelet[2889]: E1008 20:21:11.686072 2889 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:21:11.689990 kubelet[2889]: E1008 20:21:11.686263 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:21:11.689990 kubelet[2889]: W1008 20:21:11.686271 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:21:11.689990 kubelet[2889]: E1008 20:21:11.686285 2889 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:21:11.689990 kubelet[2889]: E1008 20:21:11.686455 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:21:11.689990 kubelet[2889]: W1008 20:21:11.686463 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:21:11.689990 kubelet[2889]: E1008 20:21:11.686473 2889 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:21:11.689990 kubelet[2889]: E1008 20:21:11.689780 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:21:11.689990 kubelet[2889]: W1008 20:21:11.689796 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:21:11.689990 kubelet[2889]: E1008 20:21:11.689818 2889 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:21:11.690578 kubelet[2889]: E1008 20:21:11.690489 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:21:11.690578 kubelet[2889]: W1008 20:21:11.690499 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:21:11.690578 kubelet[2889]: E1008 20:21:11.690522 2889 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:21:11.690718 kubelet[2889]: E1008 20:21:11.690665 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:21:11.690718 kubelet[2889]: W1008 20:21:11.690675 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:21:11.690718 kubelet[2889]: E1008 20:21:11.690694 2889 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:21:11.690971 kubelet[2889]: E1008 20:21:11.690937 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:21:11.691017 kubelet[2889]: W1008 20:21:11.690950 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:21:11.691017 kubelet[2889]: E1008 20:21:11.690993 2889 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:21:11.691191 kubelet[2889]: E1008 20:21:11.691148 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:21:11.691191 kubelet[2889]: W1008 20:21:11.691157 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:21:11.691191 kubelet[2889]: E1008 20:21:11.691175 2889 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:21:11.691427 kubelet[2889]: E1008 20:21:11.691412 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:21:11.691427 kubelet[2889]: W1008 20:21:11.691426 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:21:11.691580 kubelet[2889]: E1008 20:21:11.691443 2889 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:21:11.691904 kubelet[2889]: E1008 20:21:11.691736 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:21:11.691904 kubelet[2889]: W1008 20:21:11.691747 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:21:11.691904 kubelet[2889]: E1008 20:21:11.691773 2889 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:21:11.692130 kubelet[2889]: E1008 20:21:11.692120 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:21:11.692239 kubelet[2889]: W1008 20:21:11.692189 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:21:11.692239 kubelet[2889]: E1008 20:21:11.692228 2889 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:21:11.692595 kubelet[2889]: E1008 20:21:11.692468 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:21:11.692595 kubelet[2889]: W1008 20:21:11.692479 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:21:11.692595 kubelet[2889]: E1008 20:21:11.692509 2889 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:21:11.692859 kubelet[2889]: E1008 20:21:11.692746 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:21:11.692859 kubelet[2889]: W1008 20:21:11.692783 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:21:11.692859 kubelet[2889]: E1008 20:21:11.692805 2889 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:21:11.693226 kubelet[2889]: E1008 20:21:11.693123 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:21:11.693226 kubelet[2889]: W1008 20:21:11.693134 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:21:11.693226 kubelet[2889]: E1008 20:21:11.693156 2889 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:21:11.693527 kubelet[2889]: E1008 20:21:11.693432 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:21:11.693527 kubelet[2889]: W1008 20:21:11.693443 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:21:11.693527 kubelet[2889]: E1008 20:21:11.693469 2889 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:21:11.693838 kubelet[2889]: E1008 20:21:11.693814 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:21:11.693838 kubelet[2889]: W1008 20:21:11.693825 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:21:11.694140 kubelet[2889]: E1008 20:21:11.693992 2889 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:21:11.694342 kubelet[2889]: E1008 20:21:11.694332 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:21:11.694514 kubelet[2889]: W1008 20:21:11.694422 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:21:11.694514 kubelet[2889]: E1008 20:21:11.694447 2889 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:21:11.694769 kubelet[2889]: E1008 20:21:11.694758 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:21:11.694912 kubelet[2889]: W1008 20:21:11.694829 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:21:11.694912 kubelet[2889]: E1008 20:21:11.694857 2889 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:21:11.695288 kubelet[2889]: E1008 20:21:11.695186 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:21:11.695288 kubelet[2889]: W1008 20:21:11.695196 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:21:11.695288 kubelet[2889]: E1008 20:21:11.695219 2889 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:21:11.695820 kubelet[2889]: E1008 20:21:11.695422 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:21:11.695820 kubelet[2889]: W1008 20:21:11.695448 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:21:11.695820 kubelet[2889]: E1008 20:21:11.695463 2889 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:21:11.697485 kubelet[2889]: E1008 20:21:11.697472 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:21:11.697556 kubelet[2889]: W1008 20:21:11.697543 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:21:11.698004 kubelet[2889]: E1008 20:21:11.697648 2889 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:21:11.702992 containerd[1597]: time="2024-10-08T20:21:11.702917215Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:21:11.704297 containerd[1597]: time="2024-10-08T20:21:11.704250208Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1: active requests=0, bytes read=5141007" Oct 8 20:21:11.705848 containerd[1597]: time="2024-10-08T20:21:11.705790203Z" level=info msg="ImageCreate event name:\"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:21:11.709344 containerd[1597]: time="2024-10-08T20:21:11.708572749Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:21:11.709344 containerd[1597]: time="2024-10-08T20:21:11.709228852Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" with image id \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\", size \"6633368\" in 1.937558136s" Oct 8 20:21:11.709344 containerd[1597]: time="2024-10-08T20:21:11.709258457Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" returns image reference \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\"" Oct 8 20:21:11.712672 containerd[1597]: time="2024-10-08T20:21:11.712631823Z" level=info msg="CreateContainer within sandbox \"6005450acf4c948cc1a916591f25a373433835911d76d4b23315d9fa9f7ea18c\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 8 20:21:11.739577 containerd[1597]: time="2024-10-08T20:21:11.739542409Z" level=info msg="CreateContainer within sandbox \"6005450acf4c948cc1a916591f25a373433835911d76d4b23315d9fa9f7ea18c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"5304021d9054ad68c3a5338820c6a6e0311cad3bd59f4c66a9f25e0d670c6c44\"" Oct 8 20:21:11.741567 containerd[1597]: time="2024-10-08T20:21:11.741537786Z" level=info msg="StartContainer for \"5304021d9054ad68c3a5338820c6a6e0311cad3bd59f4c66a9f25e0d670c6c44\"" Oct 8 20:21:11.829238 containerd[1597]: time="2024-10-08T20:21:11.829196107Z" level=info msg="StartContainer for \"5304021d9054ad68c3a5338820c6a6e0311cad3bd59f4c66a9f25e0d670c6c44\" returns successfully" Oct 8 20:21:11.856841 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5304021d9054ad68c3a5338820c6a6e0311cad3bd59f4c66a9f25e0d670c6c44-rootfs.mount: Deactivated successfully. 
Oct 8 20:21:12.457018 containerd[1597]: time="2024-10-08T20:21:12.336373280Z" level=info msg="shim disconnected" id=5304021d9054ad68c3a5338820c6a6e0311cad3bd59f4c66a9f25e0d670c6c44 namespace=k8s.io Oct 8 20:21:12.457018 containerd[1597]: time="2024-10-08T20:21:12.456907532Z" level=warning msg="cleaning up after shim disconnected" id=5304021d9054ad68c3a5338820c6a6e0311cad3bd59f4c66a9f25e0d670c6c44 namespace=k8s.io Oct 8 20:21:12.457018 containerd[1597]: time="2024-10-08T20:21:12.456927500Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 20:21:12.619643 containerd[1597]: time="2024-10-08T20:21:12.619060091Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\"" Oct 8 20:21:13.398650 kubelet[2889]: E1008 20:21:13.398472 2889 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cbc7k" podUID="a5649738-e837-4158-bf1c-576a5e896847" Oct 8 20:21:15.400994 kubelet[2889]: E1008 20:21:15.399727 2889 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cbc7k" podUID="a5649738-e837-4158-bf1c-576a5e896847" Oct 8 20:21:17.398704 kubelet[2889]: E1008 20:21:17.398655 2889 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cbc7k" podUID="a5649738-e837-4158-bf1c-576a5e896847" Oct 8 20:21:18.528864 containerd[1597]: time="2024-10-08T20:21:18.528723853Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.1\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:21:18.530362 containerd[1597]: time="2024-10-08T20:21:18.530304481Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.1: active requests=0, bytes read=93083736" Oct 8 20:21:18.531646 containerd[1597]: time="2024-10-08T20:21:18.531597966Z" level=info msg="ImageCreate event name:\"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:21:18.536040 containerd[1597]: time="2024-10-08T20:21:18.535942726Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:21:18.537793 containerd[1597]: time="2024-10-08T20:21:18.536550185Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.1\" with image id \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\", size \"94576137\" in 5.917415643s" Oct 8 20:21:18.537793 containerd[1597]: time="2024-10-08T20:21:18.536599829Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\" returns image reference \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\"" Oct 8 20:21:18.540434 containerd[1597]: time="2024-10-08T20:21:18.540325579Z" level=info msg="CreateContainer within sandbox \"6005450acf4c948cc1a916591f25a373433835911d76d4b23315d9fa9f7ea18c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 8 20:21:18.573208 containerd[1597]: time="2024-10-08T20:21:18.573102274Z" level=info msg="CreateContainer within sandbox \"6005450acf4c948cc1a916591f25a373433835911d76d4b23315d9fa9f7ea18c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id 
\"3c27d2a663c07580208c6739cff4aa72086e35393bea9fff51c7ed22396fccf6\"" Oct 8 20:21:18.576105 containerd[1597]: time="2024-10-08T20:21:18.575028894Z" level=info msg="StartContainer for \"3c27d2a663c07580208c6739cff4aa72086e35393bea9fff51c7ed22396fccf6\"" Oct 8 20:21:18.691163 containerd[1597]: time="2024-10-08T20:21:18.691006368Z" level=info msg="StartContainer for \"3c27d2a663c07580208c6739cff4aa72086e35393bea9fff51c7ed22396fccf6\" returns successfully" Oct 8 20:21:19.399520 kubelet[2889]: E1008 20:21:19.398749 2889 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cbc7k" podUID="a5649738-e837-4158-bf1c-576a5e896847" Oct 8 20:21:20.323688 kubelet[2889]: I1008 20:21:20.323528 2889 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 8 20:21:20.362285 systemd-journald[1119]: Under memory pressure, flushing caches. Oct 8 20:21:20.330670 systemd-resolved[1471]: Under memory pressure, flushing caches. Oct 8 20:21:20.330804 systemd-resolved[1471]: Flushed all caches. Oct 8 20:21:20.770758 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3c27d2a663c07580208c6739cff4aa72086e35393bea9fff51c7ed22396fccf6-rootfs.mount: Deactivated successfully. 
Oct 8 20:21:20.789674 containerd[1597]: time="2024-10-08T20:21:20.789518095Z" level=info msg="shim disconnected" id=3c27d2a663c07580208c6739cff4aa72086e35393bea9fff51c7ed22396fccf6 namespace=k8s.io Oct 8 20:21:20.790253 containerd[1597]: time="2024-10-08T20:21:20.789745384Z" level=warning msg="cleaning up after shim disconnected" id=3c27d2a663c07580208c6739cff4aa72086e35393bea9fff51c7ed22396fccf6 namespace=k8s.io Oct 8 20:21:20.790253 containerd[1597]: time="2024-10-08T20:21:20.789759872Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 20:21:20.804498 containerd[1597]: time="2024-10-08T20:21:20.804455754Z" level=warning msg="cleanup warnings time=\"2024-10-08T20:21:20Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Oct 8 20:21:20.842153 kubelet[2889]: I1008 20:21:20.842051 2889 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Oct 8 20:21:20.884343 kubelet[2889]: I1008 20:21:20.884289 2889 topology_manager.go:215] "Topology Admit Handler" podUID="69d7ca85-a209-4e6f-9a16-eb38c4d84f95" podNamespace="kube-system" podName="coredns-76f75df574-zsvnc" Oct 8 20:21:20.890606 kubelet[2889]: I1008 20:21:20.888211 2889 topology_manager.go:215] "Topology Admit Handler" podUID="8cc7ecf6-b867-4caf-8b32-1704c973cd44" podNamespace="calico-system" podName="calico-kube-controllers-6644949fbd-28g75" Oct 8 20:21:20.903199 kubelet[2889]: I1008 20:21:20.903119 2889 topology_manager.go:215] "Topology Admit Handler" podUID="2e3f14d6-abc0-4ffa-afc1-278f613cc677" podNamespace="kube-system" podName="coredns-76f75df574-wdqtm" Oct 8 20:21:20.976066 kubelet[2889]: I1008 20:21:20.975901 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkdmr\" (UniqueName: \"kubernetes.io/projected/2e3f14d6-abc0-4ffa-afc1-278f613cc677-kube-api-access-fkdmr\") pod 
\"coredns-76f75df574-wdqtm\" (UID: \"2e3f14d6-abc0-4ffa-afc1-278f613cc677\") " pod="kube-system/coredns-76f75df574-wdqtm" Oct 8 20:21:20.976066 kubelet[2889]: I1008 20:21:20.975971 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6wwr\" (UniqueName: \"kubernetes.io/projected/69d7ca85-a209-4e6f-9a16-eb38c4d84f95-kube-api-access-g6wwr\") pod \"coredns-76f75df574-zsvnc\" (UID: \"69d7ca85-a209-4e6f-9a16-eb38c4d84f95\") " pod="kube-system/coredns-76f75df574-zsvnc" Oct 8 20:21:20.976066 kubelet[2889]: I1008 20:21:20.976003 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtjfv\" (UniqueName: \"kubernetes.io/projected/8cc7ecf6-b867-4caf-8b32-1704c973cd44-kube-api-access-gtjfv\") pod \"calico-kube-controllers-6644949fbd-28g75\" (UID: \"8cc7ecf6-b867-4caf-8b32-1704c973cd44\") " pod="calico-system/calico-kube-controllers-6644949fbd-28g75" Oct 8 20:21:20.976066 kubelet[2889]: I1008 20:21:20.976028 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8cc7ecf6-b867-4caf-8b32-1704c973cd44-tigera-ca-bundle\") pod \"calico-kube-controllers-6644949fbd-28g75\" (UID: \"8cc7ecf6-b867-4caf-8b32-1704c973cd44\") " pod="calico-system/calico-kube-controllers-6644949fbd-28g75" Oct 8 20:21:20.976066 kubelet[2889]: I1008 20:21:20.976053 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e3f14d6-abc0-4ffa-afc1-278f613cc677-config-volume\") pod \"coredns-76f75df574-wdqtm\" (UID: \"2e3f14d6-abc0-4ffa-afc1-278f613cc677\") " pod="kube-system/coredns-76f75df574-wdqtm" Oct 8 20:21:20.976544 kubelet[2889]: I1008 20:21:20.976076 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" 
(UniqueName: \"kubernetes.io/configmap/69d7ca85-a209-4e6f-9a16-eb38c4d84f95-config-volume\") pod \"coredns-76f75df574-zsvnc\" (UID: \"69d7ca85-a209-4e6f-9a16-eb38c4d84f95\") " pod="kube-system/coredns-76f75df574-zsvnc" Oct 8 20:21:21.217463 containerd[1597]: time="2024-10-08T20:21:21.216746169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-zsvnc,Uid:69d7ca85-a209-4e6f-9a16-eb38c4d84f95,Namespace:kube-system,Attempt:0,}" Oct 8 20:21:21.217463 containerd[1597]: time="2024-10-08T20:21:21.216907373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6644949fbd-28g75,Uid:8cc7ecf6-b867-4caf-8b32-1704c973cd44,Namespace:calico-system,Attempt:0,}" Oct 8 20:21:21.217722 containerd[1597]: time="2024-10-08T20:21:21.217459757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-wdqtm,Uid:2e3f14d6-abc0-4ffa-afc1-278f613cc677,Namespace:kube-system,Attempt:0,}" Oct 8 20:21:21.409717 containerd[1597]: time="2024-10-08T20:21:21.408993226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cbc7k,Uid:a5649738-e837-4158-bf1c-576a5e896847,Namespace:calico-system,Attempt:0,}" Oct 8 20:21:21.629566 containerd[1597]: time="2024-10-08T20:21:21.629507395Z" level=error msg="Failed to destroy network for sandbox \"5c69c1c44bd69df7f16fa4d63e47283fcd23396e4f2de0f8bbc77d98ae66ab83\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:21:21.633229 containerd[1597]: time="2024-10-08T20:21:21.633190672Z" level=error msg="encountered an error cleaning up failed sandbox \"5c69c1c44bd69df7f16fa4d63e47283fcd23396e4f2de0f8bbc77d98ae66ab83\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Oct 8 20:21:21.635831 containerd[1597]: time="2024-10-08T20:21:21.635792747Z" level=error msg="Failed to destroy network for sandbox \"34fdef20a54d7ab59fcd701ff75013b89f11089c2d3ea96cebe455d87fa7afc9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:21:21.636136 containerd[1597]: time="2024-10-08T20:21:21.636100758Z" level=error msg="encountered an error cleaning up failed sandbox \"34fdef20a54d7ab59fcd701ff75013b89f11089c2d3ea96cebe455d87fa7afc9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:21:21.642246 containerd[1597]: time="2024-10-08T20:21:21.641962250Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6644949fbd-28g75,Uid:8cc7ecf6-b867-4caf-8b32-1704c973cd44,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"34fdef20a54d7ab59fcd701ff75013b89f11089c2d3ea96cebe455d87fa7afc9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:21:21.642246 containerd[1597]: time="2024-10-08T20:21:21.642168179Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-wdqtm,Uid:2e3f14d6-abc0-4ffa-afc1-278f613cc677,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5c69c1c44bd69df7f16fa4d63e47283fcd23396e4f2de0f8bbc77d98ae66ab83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:21:21.649274 
containerd[1597]: time="2024-10-08T20:21:21.649228695Z" level=error msg="Failed to destroy network for sandbox \"b0b3e3d6b839ae2ba1a229ae13887b655d0a16b532ca8f3d8895320833a6e987\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:21:21.650008 containerd[1597]: time="2024-10-08T20:21:21.649678825Z" level=error msg="encountered an error cleaning up failed sandbox \"b0b3e3d6b839ae2ba1a229ae13887b655d0a16b532ca8f3d8895320833a6e987\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:21:21.650008 containerd[1597]: time="2024-10-08T20:21:21.649731394Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cbc7k,Uid:a5649738-e837-4158-bf1c-576a5e896847,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b0b3e3d6b839ae2ba1a229ae13887b655d0a16b532ca8f3d8895320833a6e987\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:21:21.650008 containerd[1597]: time="2024-10-08T20:21:21.649877881Z" level=error msg="Failed to destroy network for sandbox \"4aad6bd594c4808843f7e6b5ae7b1ea6a1e41b402c15ea308fe89f81bde7df2e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:21:21.650289 containerd[1597]: time="2024-10-08T20:21:21.650262948Z" level=error msg="encountered an error cleaning up failed sandbox \"4aad6bd594c4808843f7e6b5ae7b1ea6a1e41b402c15ea308fe89f81bde7df2e\", marking sandbox state as SANDBOX_UNKNOWN" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:21:21.650379 containerd[1597]: time="2024-10-08T20:21:21.650357637Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-zsvnc,Uid:69d7ca85-a209-4e6f-9a16-eb38c4d84f95,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4aad6bd594c4808843f7e6b5ae7b1ea6a1e41b402c15ea308fe89f81bde7df2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:21:21.651260 kubelet[2889]: E1008 20:21:21.651236 2889 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4aad6bd594c4808843f7e6b5ae7b1ea6a1e41b402c15ea308fe89f81bde7df2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:21:21.651405 kubelet[2889]: E1008 20:21:21.651370 2889 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34fdef20a54d7ab59fcd701ff75013b89f11089c2d3ea96cebe455d87fa7afc9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:21:21.651458 kubelet[2889]: E1008 20:21:21.651436 2889 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34fdef20a54d7ab59fcd701ff75013b89f11089c2d3ea96cebe455d87fa7afc9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6644949fbd-28g75" Oct 8 20:21:21.651497 kubelet[2889]: E1008 20:21:21.651462 2889 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34fdef20a54d7ab59fcd701ff75013b89f11089c2d3ea96cebe455d87fa7afc9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6644949fbd-28g75" Oct 8 20:21:21.651527 kubelet[2889]: E1008 20:21:21.651520 2889 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6644949fbd-28g75_calico-system(8cc7ecf6-b867-4caf-8b32-1704c973cd44)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6644949fbd-28g75_calico-system(8cc7ecf6-b867-4caf-8b32-1704c973cd44)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"34fdef20a54d7ab59fcd701ff75013b89f11089c2d3ea96cebe455d87fa7afc9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6644949fbd-28g75" podUID="8cc7ecf6-b867-4caf-8b32-1704c973cd44" Oct 8 20:21:21.652149 kubelet[2889]: E1008 20:21:21.651636 2889 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4aad6bd594c4808843f7e6b5ae7b1ea6a1e41b402c15ea308fe89f81bde7df2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-zsvnc" Oct 8 
20:21:21.652149 kubelet[2889]: E1008 20:21:21.651672 2889 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4aad6bd594c4808843f7e6b5ae7b1ea6a1e41b402c15ea308fe89f81bde7df2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-zsvnc" Oct 8 20:21:21.652149 kubelet[2889]: E1008 20:21:21.651798 2889 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c69c1c44bd69df7f16fa4d63e47283fcd23396e4f2de0f8bbc77d98ae66ab83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:21:21.652149 kubelet[2889]: E1008 20:21:21.651826 2889 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c69c1c44bd69df7f16fa4d63e47283fcd23396e4f2de0f8bbc77d98ae66ab83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-wdqtm" Oct 8 20:21:21.652286 kubelet[2889]: E1008 20:21:21.651847 2889 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-zsvnc_kube-system(69d7ca85-a209-4e6f-9a16-eb38c4d84f95)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-zsvnc_kube-system(69d7ca85-a209-4e6f-9a16-eb38c4d84f95)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4aad6bd594c4808843f7e6b5ae7b1ea6a1e41b402c15ea308fe89f81bde7df2e\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-zsvnc" podUID="69d7ca85-a209-4e6f-9a16-eb38c4d84f95" Oct 8 20:21:21.652286 kubelet[2889]: E1008 20:21:21.651851 2889 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c69c1c44bd69df7f16fa4d63e47283fcd23396e4f2de0f8bbc77d98ae66ab83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-wdqtm" Oct 8 20:21:21.652286 kubelet[2889]: E1008 20:21:21.652120 2889 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0b3e3d6b839ae2ba1a229ae13887b655d0a16b532ca8f3d8895320833a6e987\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:21:21.653057 kubelet[2889]: E1008 20:21:21.652494 2889 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0b3e3d6b839ae2ba1a229ae13887b655d0a16b532ca8f3d8895320833a6e987\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cbc7k" Oct 8 20:21:21.653057 kubelet[2889]: E1008 20:21:21.652559 2889 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0b3e3d6b839ae2ba1a229ae13887b655d0a16b532ca8f3d8895320833a6e987\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cbc7k" Oct 8 20:21:21.653057 kubelet[2889]: E1008 20:21:21.652644 2889 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-cbc7k_calico-system(a5649738-e837-4158-bf1c-576a5e896847)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-cbc7k_calico-system(a5649738-e837-4158-bf1c-576a5e896847)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b0b3e3d6b839ae2ba1a229ae13887b655d0a16b532ca8f3d8895320833a6e987\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cbc7k" podUID="a5649738-e837-4158-bf1c-576a5e896847" Oct 8 20:21:21.653356 kubelet[2889]: E1008 20:21:21.651897 2889 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-wdqtm_kube-system(2e3f14d6-abc0-4ffa-afc1-278f613cc677)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-wdqtm_kube-system(2e3f14d6-abc0-4ffa-afc1-278f613cc677)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5c69c1c44bd69df7f16fa4d63e47283fcd23396e4f2de0f8bbc77d98ae66ab83\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-wdqtm" podUID="2e3f14d6-abc0-4ffa-afc1-278f613cc677" Oct 8 20:21:21.655368 kubelet[2889]: I1008 20:21:21.655346 2889 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34fdef20a54d7ab59fcd701ff75013b89f11089c2d3ea96cebe455d87fa7afc9" Oct 8 20:21:21.657053 kubelet[2889]: I1008 
20:21:21.657033 2889 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4aad6bd594c4808843f7e6b5ae7b1ea6a1e41b402c15ea308fe89f81bde7df2e" Oct 8 20:21:21.665770 containerd[1597]: time="2024-10-08T20:21:21.665107477Z" level=info msg="StopPodSandbox for \"34fdef20a54d7ab59fcd701ff75013b89f11089c2d3ea96cebe455d87fa7afc9\"" Oct 8 20:21:21.665893 containerd[1597]: time="2024-10-08T20:21:21.665774427Z" level=info msg="StopPodSandbox for \"4aad6bd594c4808843f7e6b5ae7b1ea6a1e41b402c15ea308fe89f81bde7df2e\"" Oct 8 20:21:21.669503 containerd[1597]: time="2024-10-08T20:21:21.667606637Z" level=info msg="Ensure that sandbox 4aad6bd594c4808843f7e6b5ae7b1ea6a1e41b402c15ea308fe89f81bde7df2e in task-service has been cleanup successfully" Oct 8 20:21:21.669613 kubelet[2889]: I1008 20:21:21.667817 2889 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c69c1c44bd69df7f16fa4d63e47283fcd23396e4f2de0f8bbc77d98ae66ab83" Oct 8 20:21:21.669999 containerd[1597]: time="2024-10-08T20:21:21.667607258Z" level=info msg="Ensure that sandbox 34fdef20a54d7ab59fcd701ff75013b89f11089c2d3ea96cebe455d87fa7afc9 in task-service has been cleanup successfully" Oct 8 20:21:21.670437 containerd[1597]: time="2024-10-08T20:21:21.670396136Z" level=info msg="StopPodSandbox for \"5c69c1c44bd69df7f16fa4d63e47283fcd23396e4f2de0f8bbc77d98ae66ab83\"" Oct 8 20:21:21.672814 containerd[1597]: time="2024-10-08T20:21:21.672770571Z" level=info msg="Ensure that sandbox 5c69c1c44bd69df7f16fa4d63e47283fcd23396e4f2de0f8bbc77d98ae66ab83 in task-service has been cleanup successfully" Oct 8 20:21:21.685155 kubelet[2889]: I1008 20:21:21.685136 2889 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0b3e3d6b839ae2ba1a229ae13887b655d0a16b532ca8f3d8895320833a6e987" Oct 8 20:21:21.687346 containerd[1597]: time="2024-10-08T20:21:21.687195116Z" level=info msg="StopPodSandbox for 
\"b0b3e3d6b839ae2ba1a229ae13887b655d0a16b532ca8f3d8895320833a6e987\"" Oct 8 20:21:21.687705 containerd[1597]: time="2024-10-08T20:21:21.687503819Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\"" Oct 8 20:21:21.688624 containerd[1597]: time="2024-10-08T20:21:21.688603626Z" level=info msg="Ensure that sandbox b0b3e3d6b839ae2ba1a229ae13887b655d0a16b532ca8f3d8895320833a6e987 in task-service has been cleanup successfully" Oct 8 20:21:21.751224 containerd[1597]: time="2024-10-08T20:21:21.751140456Z" level=error msg="StopPodSandbox for \"4aad6bd594c4808843f7e6b5ae7b1ea6a1e41b402c15ea308fe89f81bde7df2e\" failed" error="failed to destroy network for sandbox \"4aad6bd594c4808843f7e6b5ae7b1ea6a1e41b402c15ea308fe89f81bde7df2e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:21:21.751748 kubelet[2889]: E1008 20:21:21.751645 2889 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4aad6bd594c4808843f7e6b5ae7b1ea6a1e41b402c15ea308fe89f81bde7df2e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4aad6bd594c4808843f7e6b5ae7b1ea6a1e41b402c15ea308fe89f81bde7df2e" Oct 8 20:21:21.752309 kubelet[2889]: E1008 20:21:21.752056 2889 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4aad6bd594c4808843f7e6b5ae7b1ea6a1e41b402c15ea308fe89f81bde7df2e"} Oct 8 20:21:21.752309 kubelet[2889]: E1008 20:21:21.752112 2889 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"69d7ca85-a209-4e6f-9a16-eb38c4d84f95\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"4aad6bd594c4808843f7e6b5ae7b1ea6a1e41b402c15ea308fe89f81bde7df2e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 20:21:21.752680 kubelet[2889]: E1008 20:21:21.752427 2889 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"69d7ca85-a209-4e6f-9a16-eb38c4d84f95\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4aad6bd594c4808843f7e6b5ae7b1ea6a1e41b402c15ea308fe89f81bde7df2e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-zsvnc" podUID="69d7ca85-a209-4e6f-9a16-eb38c4d84f95" Oct 8 20:21:21.763017 containerd[1597]: time="2024-10-08T20:21:21.762939913Z" level=error msg="StopPodSandbox for \"34fdef20a54d7ab59fcd701ff75013b89f11089c2d3ea96cebe455d87fa7afc9\" failed" error="failed to destroy network for sandbox \"34fdef20a54d7ab59fcd701ff75013b89f11089c2d3ea96cebe455d87fa7afc9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:21:21.763548 kubelet[2889]: E1008 20:21:21.763381 2889 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"34fdef20a54d7ab59fcd701ff75013b89f11089c2d3ea96cebe455d87fa7afc9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="34fdef20a54d7ab59fcd701ff75013b89f11089c2d3ea96cebe455d87fa7afc9" Oct 8 20:21:21.763548 kubelet[2889]: E1008 20:21:21.763423 2889 
kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"34fdef20a54d7ab59fcd701ff75013b89f11089c2d3ea96cebe455d87fa7afc9"} Oct 8 20:21:21.763548 kubelet[2889]: E1008 20:21:21.763470 2889 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8cc7ecf6-b867-4caf-8b32-1704c973cd44\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"34fdef20a54d7ab59fcd701ff75013b89f11089c2d3ea96cebe455d87fa7afc9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 20:21:21.763548 kubelet[2889]: E1008 20:21:21.763505 2889 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8cc7ecf6-b867-4caf-8b32-1704c973cd44\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"34fdef20a54d7ab59fcd701ff75013b89f11089c2d3ea96cebe455d87fa7afc9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6644949fbd-28g75" podUID="8cc7ecf6-b867-4caf-8b32-1704c973cd44" Oct 8 20:21:21.773706 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4aad6bd594c4808843f7e6b5ae7b1ea6a1e41b402c15ea308fe89f81bde7df2e-shm.mount: Deactivated successfully. 
Oct 8 20:21:21.776860 containerd[1597]: time="2024-10-08T20:21:21.776810502Z" level=error msg="StopPodSandbox for \"5c69c1c44bd69df7f16fa4d63e47283fcd23396e4f2de0f8bbc77d98ae66ab83\" failed" error="failed to destroy network for sandbox \"5c69c1c44bd69df7f16fa4d63e47283fcd23396e4f2de0f8bbc77d98ae66ab83\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:21:21.777414 kubelet[2889]: E1008 20:21:21.777393 2889 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5c69c1c44bd69df7f16fa4d63e47283fcd23396e4f2de0f8bbc77d98ae66ab83\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5c69c1c44bd69df7f16fa4d63e47283fcd23396e4f2de0f8bbc77d98ae66ab83" Oct 8 20:21:21.777745 kubelet[2889]: E1008 20:21:21.777731 2889 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5c69c1c44bd69df7f16fa4d63e47283fcd23396e4f2de0f8bbc77d98ae66ab83"} Oct 8 20:21:21.777849 kubelet[2889]: E1008 20:21:21.777839 2889 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2e3f14d6-abc0-4ffa-afc1-278f613cc677\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5c69c1c44bd69df7f16fa4d63e47283fcd23396e4f2de0f8bbc77d98ae66ab83\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 20:21:21.778006 kubelet[2889]: E1008 20:21:21.777993 2889 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"2e3f14d6-abc0-4ffa-afc1-278f613cc677\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5c69c1c44bd69df7f16fa4d63e47283fcd23396e4f2de0f8bbc77d98ae66ab83\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-wdqtm" podUID="2e3f14d6-abc0-4ffa-afc1-278f613cc677" Oct 8 20:21:21.779870 containerd[1597]: time="2024-10-08T20:21:21.779771985Z" level=error msg="StopPodSandbox for \"b0b3e3d6b839ae2ba1a229ae13887b655d0a16b532ca8f3d8895320833a6e987\" failed" error="failed to destroy network for sandbox \"b0b3e3d6b839ae2ba1a229ae13887b655d0a16b532ca8f3d8895320833a6e987\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:21:21.780179 kubelet[2889]: E1008 20:21:21.780049 2889 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b0b3e3d6b839ae2ba1a229ae13887b655d0a16b532ca8f3d8895320833a6e987\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b0b3e3d6b839ae2ba1a229ae13887b655d0a16b532ca8f3d8895320833a6e987" Oct 8 20:21:21.780179 kubelet[2889]: E1008 20:21:21.780076 2889 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b0b3e3d6b839ae2ba1a229ae13887b655d0a16b532ca8f3d8895320833a6e987"} Oct 8 20:21:21.780179 kubelet[2889]: E1008 20:21:21.780116 2889 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a5649738-e837-4158-bf1c-576a5e896847\" with KillPodSandboxError: \"rpc error: code = Unknown desc 
= failed to destroy network for sandbox \\\"b0b3e3d6b839ae2ba1a229ae13887b655d0a16b532ca8f3d8895320833a6e987\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 20:21:21.780179 kubelet[2889]: E1008 20:21:21.780145 2889 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a5649738-e837-4158-bf1c-576a5e896847\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b0b3e3d6b839ae2ba1a229ae13887b655d0a16b532ca8f3d8895320833a6e987\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cbc7k" podUID="a5649738-e837-4158-bf1c-576a5e896847" Oct 8 20:21:30.301327 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2895222203.mount: Deactivated successfully. Oct 8 20:21:30.313343 systemd-resolved[1471]: Under memory pressure, flushing caches. Oct 8 20:21:30.313378 systemd-resolved[1471]: Flushed all caches. Oct 8 20:21:30.314994 systemd-journald[1119]: Under memory pressure, flushing caches. 
Oct 8 20:21:30.611841 containerd[1597]: time="2024-10-08T20:21:30.591611644Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:21:30.614079 containerd[1597]: time="2024-10-08T20:21:30.515469499Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.1: active requests=0, bytes read=117873564" Oct 8 20:21:30.655875 containerd[1597]: time="2024-10-08T20:21:30.655831021Z" level=info msg="ImageCreate event name:\"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:21:30.658996 containerd[1597]: time="2024-10-08T20:21:30.658941087Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:21:30.663388 containerd[1597]: time="2024-10-08T20:21:30.663338631Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.1\" with image id \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\", size \"117873426\" in 8.972028694s" Oct 8 20:21:30.663450 containerd[1597]: time="2024-10-08T20:21:30.663389538Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\" returns image reference \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\"" Oct 8 20:21:30.786426 containerd[1597]: time="2024-10-08T20:21:30.786327977Z" level=info msg="CreateContainer within sandbox \"6005450acf4c948cc1a916591f25a373433835911d76d4b23315d9fa9f7ea18c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 8 20:21:30.957648 containerd[1597]: time="2024-10-08T20:21:30.957514531Z" level=info msg="CreateContainer 
within sandbox \"6005450acf4c948cc1a916591f25a373433835911d76d4b23315d9fa9f7ea18c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"2b94b93f610021dc0adc10d1f151a20eadabe3c025826f502d451c41f8afd98c\"" Oct 8 20:21:30.960263 containerd[1597]: time="2024-10-08T20:21:30.960209854Z" level=info msg="StartContainer for \"2b94b93f610021dc0adc10d1f151a20eadabe3c025826f502d451c41f8afd98c\"" Oct 8 20:21:31.220939 containerd[1597]: time="2024-10-08T20:21:31.220820089Z" level=info msg="StartContainer for \"2b94b93f610021dc0adc10d1f151a20eadabe3c025826f502d451c41f8afd98c\" returns successfully" Oct 8 20:21:31.337280 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 8 20:21:31.337507 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Oct 8 20:21:31.823136 kubelet[2889]: I1008 20:21:31.823020 2889 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-vswfv" podStartSLOduration=1.746877208 podStartE2EDuration="38.811737618s" podCreationTimestamp="2024-10-08 20:20:53 +0000 UTC" firstStartedPulling="2024-10-08 20:20:53.602049387 +0000 UTC m=+21.545999379" lastFinishedPulling="2024-10-08 20:21:30.666909797 +0000 UTC m=+58.610859789" observedRunningTime="2024-10-08 20:21:31.803014026 +0000 UTC m=+59.746964028" watchObservedRunningTime="2024-10-08 20:21:31.811737618 +0000 UTC m=+59.755687620" Oct 8 20:21:32.363089 systemd-journald[1119]: Under memory pressure, flushing caches. Oct 8 20:21:32.363041 systemd-resolved[1471]: Under memory pressure, flushing caches. Oct 8 20:21:32.363060 systemd-resolved[1471]: Flushed all caches. 
Oct 8 20:21:32.422756 containerd[1597]: time="2024-10-08T20:21:32.421499563Z" level=info msg="StopPodSandbox for \"b0b3e3d6b839ae2ba1a229ae13887b655d0a16b532ca8f3d8895320833a6e987\"" Oct 8 20:21:33.714000 containerd[1597]: 2024-10-08 20:21:32.548 [INFO][3911] k8s.go 608: Cleaning up netns ContainerID="b0b3e3d6b839ae2ba1a229ae13887b655d0a16b532ca8f3d8895320833a6e987" Oct 8 20:21:33.714000 containerd[1597]: 2024-10-08 20:21:32.549 [INFO][3911] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="b0b3e3d6b839ae2ba1a229ae13887b655d0a16b532ca8f3d8895320833a6e987" iface="eth0" netns="/var/run/netns/cni-6de11da4-fc6a-da39-f34a-bbf3c5cde082" Oct 8 20:21:33.714000 containerd[1597]: 2024-10-08 20:21:32.550 [INFO][3911] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="b0b3e3d6b839ae2ba1a229ae13887b655d0a16b532ca8f3d8895320833a6e987" iface="eth0" netns="/var/run/netns/cni-6de11da4-fc6a-da39-f34a-bbf3c5cde082" Oct 8 20:21:33.714000 containerd[1597]: 2024-10-08 20:21:32.552 [INFO][3911] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="b0b3e3d6b839ae2ba1a229ae13887b655d0a16b532ca8f3d8895320833a6e987" iface="eth0" netns="/var/run/netns/cni-6de11da4-fc6a-da39-f34a-bbf3c5cde082" Oct 8 20:21:33.714000 containerd[1597]: 2024-10-08 20:21:32.552 [INFO][3911] k8s.go 615: Releasing IP address(es) ContainerID="b0b3e3d6b839ae2ba1a229ae13887b655d0a16b532ca8f3d8895320833a6e987" Oct 8 20:21:33.714000 containerd[1597]: 2024-10-08 20:21:32.552 [INFO][3911] utils.go 188: Calico CNI releasing IP address ContainerID="b0b3e3d6b839ae2ba1a229ae13887b655d0a16b532ca8f3d8895320833a6e987" Oct 8 20:21:33.714000 containerd[1597]: 2024-10-08 20:21:33.671 [INFO][3917] ipam_plugin.go 417: Releasing address using handleID ContainerID="b0b3e3d6b839ae2ba1a229ae13887b655d0a16b532ca8f3d8895320833a6e987" HandleID="k8s-pod-network.b0b3e3d6b839ae2ba1a229ae13887b655d0a16b532ca8f3d8895320833a6e987" Workload="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-csi--node--driver--cbc7k-eth0" Oct 8 20:21:33.714000 containerd[1597]: 2024-10-08 20:21:33.675 [INFO][3917] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:21:33.714000 containerd[1597]: 2024-10-08 20:21:33.676 [INFO][3917] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:21:33.714000 containerd[1597]: 2024-10-08 20:21:33.699 [WARNING][3917] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b0b3e3d6b839ae2ba1a229ae13887b655d0a16b532ca8f3d8895320833a6e987" HandleID="k8s-pod-network.b0b3e3d6b839ae2ba1a229ae13887b655d0a16b532ca8f3d8895320833a6e987" Workload="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-csi--node--driver--cbc7k-eth0" Oct 8 20:21:33.714000 containerd[1597]: 2024-10-08 20:21:33.699 [INFO][3917] ipam_plugin.go 445: Releasing address using workloadID ContainerID="b0b3e3d6b839ae2ba1a229ae13887b655d0a16b532ca8f3d8895320833a6e987" HandleID="k8s-pod-network.b0b3e3d6b839ae2ba1a229ae13887b655d0a16b532ca8f3d8895320833a6e987" Workload="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-csi--node--driver--cbc7k-eth0" Oct 8 20:21:33.714000 containerd[1597]: 2024-10-08 20:21:33.701 [INFO][3917] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:21:33.714000 containerd[1597]: 2024-10-08 20:21:33.710 [INFO][3911] k8s.go 621: Teardown processing complete. ContainerID="b0b3e3d6b839ae2ba1a229ae13887b655d0a16b532ca8f3d8895320833a6e987" Oct 8 20:21:33.718106 containerd[1597]: time="2024-10-08T20:21:33.714172327Z" level=info msg="TearDown network for sandbox \"b0b3e3d6b839ae2ba1a229ae13887b655d0a16b532ca8f3d8895320833a6e987\" successfully" Oct 8 20:21:33.718106 containerd[1597]: time="2024-10-08T20:21:33.714200269Z" level=info msg="StopPodSandbox for \"b0b3e3d6b839ae2ba1a229ae13887b655d0a16b532ca8f3d8895320833a6e987\" returns successfully" Oct 8 20:21:33.719063 systemd[1]: run-netns-cni\x2d6de11da4\x2dfc6a\x2dda39\x2df34a\x2dbbf3c5cde082.mount: Deactivated successfully. 
Oct 8 20:21:33.737114 containerd[1597]: time="2024-10-08T20:21:33.735031454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cbc7k,Uid:a5649738-e837-4158-bf1c-576a5e896847,Namespace:calico-system,Attempt:1,}" Oct 8 20:21:33.775054 kernel: bpftool[4043]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Oct 8 20:21:33.986217 systemd-networkd[1198]: cali5c98545cb64: Link UP Oct 8 20:21:33.986398 systemd-networkd[1198]: cali5c98545cb64: Gained carrier Oct 8 20:21:34.010469 containerd[1597]: 2024-10-08 20:21:33.857 [INFO][4050] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--1--0--6--0b75032dd1.novalocal-k8s-csi--node--driver--cbc7k-eth0 csi-node-driver- calico-system a5649738-e837-4158-bf1c-576a5e896847 718 0 2024-10-08 20:20:53 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78cd84fb8c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ci-4081-1-0-6-0b75032dd1.novalocal csi-node-driver-cbc7k eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali5c98545cb64 [] []}} ContainerID="be276f276decd07f604d09bebaa98629b68245166544867f68c2050714b3289c" Namespace="calico-system" Pod="csi-node-driver-cbc7k" WorkloadEndpoint="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-csi--node--driver--cbc7k-" Oct 8 20:21:34.010469 containerd[1597]: 2024-10-08 20:21:33.857 [INFO][4050] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="be276f276decd07f604d09bebaa98629b68245166544867f68c2050714b3289c" Namespace="calico-system" Pod="csi-node-driver-cbc7k" WorkloadEndpoint="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-csi--node--driver--cbc7k-eth0" Oct 8 20:21:34.010469 containerd[1597]: 2024-10-08 20:21:33.907 [INFO][4056] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="be276f276decd07f604d09bebaa98629b68245166544867f68c2050714b3289c" HandleID="k8s-pod-network.be276f276decd07f604d09bebaa98629b68245166544867f68c2050714b3289c" Workload="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-csi--node--driver--cbc7k-eth0" Oct 8 20:21:34.010469 containerd[1597]: 2024-10-08 20:21:33.919 [INFO][4056] ipam_plugin.go 270: Auto assigning IP ContainerID="be276f276decd07f604d09bebaa98629b68245166544867f68c2050714b3289c" HandleID="k8s-pod-network.be276f276decd07f604d09bebaa98629b68245166544867f68c2050714b3289c" Workload="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-csi--node--driver--cbc7k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002edaf0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-1-0-6-0b75032dd1.novalocal", "pod":"csi-node-driver-cbc7k", "timestamp":"2024-10-08 20:21:33.907713302 +0000 UTC"}, Hostname:"ci-4081-1-0-6-0b75032dd1.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 20:21:34.010469 containerd[1597]: 2024-10-08 20:21:33.919 [INFO][4056] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:21:34.010469 containerd[1597]: 2024-10-08 20:21:33.919 [INFO][4056] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 20:21:34.010469 containerd[1597]: 2024-10-08 20:21:33.919 [INFO][4056] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-1-0-6-0b75032dd1.novalocal' Oct 8 20:21:34.010469 containerd[1597]: 2024-10-08 20:21:33.922 [INFO][4056] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.be276f276decd07f604d09bebaa98629b68245166544867f68c2050714b3289c" host="ci-4081-1-0-6-0b75032dd1.novalocal" Oct 8 20:21:34.010469 containerd[1597]: 2024-10-08 20:21:33.938 [INFO][4056] ipam.go 372: Looking up existing affinities for host host="ci-4081-1-0-6-0b75032dd1.novalocal" Oct 8 20:21:34.010469 containerd[1597]: 2024-10-08 20:21:33.942 [INFO][4056] ipam.go 489: Trying affinity for 192.168.118.64/26 host="ci-4081-1-0-6-0b75032dd1.novalocal" Oct 8 20:21:34.010469 containerd[1597]: 2024-10-08 20:21:33.945 [INFO][4056] ipam.go 155: Attempting to load block cidr=192.168.118.64/26 host="ci-4081-1-0-6-0b75032dd1.novalocal" Oct 8 20:21:34.010469 containerd[1597]: 2024-10-08 20:21:33.947 [INFO][4056] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.118.64/26 host="ci-4081-1-0-6-0b75032dd1.novalocal" Oct 8 20:21:34.010469 containerd[1597]: 2024-10-08 20:21:33.947 [INFO][4056] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.118.64/26 handle="k8s-pod-network.be276f276decd07f604d09bebaa98629b68245166544867f68c2050714b3289c" host="ci-4081-1-0-6-0b75032dd1.novalocal" Oct 8 20:21:34.010469 containerd[1597]: 2024-10-08 20:21:33.949 [INFO][4056] ipam.go 1685: Creating new handle: k8s-pod-network.be276f276decd07f604d09bebaa98629b68245166544867f68c2050714b3289c Oct 8 20:21:34.010469 containerd[1597]: 2024-10-08 20:21:33.954 [INFO][4056] ipam.go 1203: Writing block in order to claim IPs block=192.168.118.64/26 handle="k8s-pod-network.be276f276decd07f604d09bebaa98629b68245166544867f68c2050714b3289c" host="ci-4081-1-0-6-0b75032dd1.novalocal" Oct 8 20:21:34.010469 containerd[1597]: 2024-10-08 20:21:33.962 [INFO][4056] 
ipam.go 1216: Successfully claimed IPs: [192.168.118.65/26] block=192.168.118.64/26 handle="k8s-pod-network.be276f276decd07f604d09bebaa98629b68245166544867f68c2050714b3289c" host="ci-4081-1-0-6-0b75032dd1.novalocal" Oct 8 20:21:34.010469 containerd[1597]: 2024-10-08 20:21:33.962 [INFO][4056] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.118.65/26] handle="k8s-pod-network.be276f276decd07f604d09bebaa98629b68245166544867f68c2050714b3289c" host="ci-4081-1-0-6-0b75032dd1.novalocal" Oct 8 20:21:34.010469 containerd[1597]: 2024-10-08 20:21:33.962 [INFO][4056] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:21:34.010469 containerd[1597]: 2024-10-08 20:21:33.962 [INFO][4056] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.118.65/26] IPv6=[] ContainerID="be276f276decd07f604d09bebaa98629b68245166544867f68c2050714b3289c" HandleID="k8s-pod-network.be276f276decd07f604d09bebaa98629b68245166544867f68c2050714b3289c" Workload="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-csi--node--driver--cbc7k-eth0" Oct 8 20:21:34.011247 containerd[1597]: 2024-10-08 20:21:33.965 [INFO][4050] k8s.go 386: Populated endpoint ContainerID="be276f276decd07f604d09bebaa98629b68245166544867f68c2050714b3289c" Namespace="calico-system" Pod="csi-node-driver-cbc7k" WorkloadEndpoint="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-csi--node--driver--cbc7k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--6--0b75032dd1.novalocal-k8s-csi--node--driver--cbc7k-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a5649738-e837-4158-bf1c-576a5e896847", ResourceVersion:"718", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 20, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", 
"k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-6-0b75032dd1.novalocal", ContainerID:"", Pod:"csi-node-driver-cbc7k", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.118.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali5c98545cb64", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:21:34.011247 containerd[1597]: 2024-10-08 20:21:33.965 [INFO][4050] k8s.go 387: Calico CNI using IPs: [192.168.118.65/32] ContainerID="be276f276decd07f604d09bebaa98629b68245166544867f68c2050714b3289c" Namespace="calico-system" Pod="csi-node-driver-cbc7k" WorkloadEndpoint="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-csi--node--driver--cbc7k-eth0" Oct 8 20:21:34.011247 containerd[1597]: 2024-10-08 20:21:33.965 [INFO][4050] dataplane_linux.go 68: Setting the host side veth name to cali5c98545cb64 ContainerID="be276f276decd07f604d09bebaa98629b68245166544867f68c2050714b3289c" Namespace="calico-system" Pod="csi-node-driver-cbc7k" WorkloadEndpoint="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-csi--node--driver--cbc7k-eth0" Oct 8 20:21:34.011247 containerd[1597]: 2024-10-08 20:21:33.977 [INFO][4050] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="be276f276decd07f604d09bebaa98629b68245166544867f68c2050714b3289c" Namespace="calico-system" Pod="csi-node-driver-cbc7k" WorkloadEndpoint="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-csi--node--driver--cbc7k-eth0" Oct 8 20:21:34.011247 containerd[1597]: 2024-10-08 
20:21:33.978 [INFO][4050] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="be276f276decd07f604d09bebaa98629b68245166544867f68c2050714b3289c" Namespace="calico-system" Pod="csi-node-driver-cbc7k" WorkloadEndpoint="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-csi--node--driver--cbc7k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--6--0b75032dd1.novalocal-k8s-csi--node--driver--cbc7k-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a5649738-e837-4158-bf1c-576a5e896847", ResourceVersion:"718", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 20, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-6-0b75032dd1.novalocal", ContainerID:"be276f276decd07f604d09bebaa98629b68245166544867f68c2050714b3289c", Pod:"csi-node-driver-cbc7k", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.118.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali5c98545cb64", MAC:"1a:5c:5e:11:97:25", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:21:34.011247 containerd[1597]: 2024-10-08 20:21:33.999 [INFO][4050] k8s.go 500: Wrote updated endpoint 
to datastore ContainerID="be276f276decd07f604d09bebaa98629b68245166544867f68c2050714b3289c" Namespace="calico-system" Pod="csi-node-driver-cbc7k" WorkloadEndpoint="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-csi--node--driver--cbc7k-eth0" Oct 8 20:21:34.081648 containerd[1597]: time="2024-10-08T20:21:34.081432327Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:21:34.081648 containerd[1597]: time="2024-10-08T20:21:34.081569064Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:21:34.084418 containerd[1597]: time="2024-10-08T20:21:34.081586066Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:21:34.103276 containerd[1597]: time="2024-10-08T20:21:34.089811332Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:21:34.163526 systemd-networkd[1198]: vxlan.calico: Link UP Oct 8 20:21:34.163534 systemd-networkd[1198]: vxlan.calico: Gained carrier Oct 8 20:21:34.201867 containerd[1597]: time="2024-10-08T20:21:34.201828235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cbc7k,Uid:a5649738-e837-4158-bf1c-576a5e896847,Namespace:calico-system,Attempt:1,} returns sandbox id \"be276f276decd07f604d09bebaa98629b68245166544867f68c2050714b3289c\"" Oct 8 20:21:34.210613 containerd[1597]: time="2024-10-08T20:21:34.210518247Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\"" Oct 8 20:21:34.413753 systemd-journald[1119]: Under memory pressure, flushing caches. Oct 8 20:21:34.409689 systemd-resolved[1471]: Under memory pressure, flushing caches. Oct 8 20:21:34.409733 systemd-resolved[1471]: Flushed all caches. 
Oct 8 20:21:35.372747 systemd-networkd[1198]: vxlan.calico: Gained IPv6LL Oct 8 20:21:35.818191 systemd-networkd[1198]: cali5c98545cb64: Gained IPv6LL Oct 8 20:21:36.277839 containerd[1597]: time="2024-10-08T20:21:36.277790994Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:21:36.279149 containerd[1597]: time="2024-10-08T20:21:36.279114227Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.1: active requests=0, bytes read=7642081" Oct 8 20:21:36.280448 containerd[1597]: time="2024-10-08T20:21:36.280405491Z" level=info msg="ImageCreate event name:\"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:21:36.283449 containerd[1597]: time="2024-10-08T20:21:36.283389385Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:21:36.284173 containerd[1597]: time="2024-10-08T20:21:36.284055591Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.1\" with image id \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\", size \"9134482\" in 2.073236897s" Oct 8 20:21:36.284173 containerd[1597]: time="2024-10-08T20:21:36.284086529Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\" returns image reference \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\"" Oct 8 20:21:36.287833 containerd[1597]: time="2024-10-08T20:21:36.287717022Z" level=info msg="CreateContainer within sandbox \"be276f276decd07f604d09bebaa98629b68245166544867f68c2050714b3289c\" for container 
&ContainerMetadata{Name:calico-csi,Attempt:0,}" Oct 8 20:21:36.313444 containerd[1597]: time="2024-10-08T20:21:36.313347189Z" level=info msg="CreateContainer within sandbox \"be276f276decd07f604d09bebaa98629b68245166544867f68c2050714b3289c\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"10eff161be610d9ca0830239517d8ad648ed74a93e774662ddffdd62c3aa5d86\"" Oct 8 20:21:36.315409 containerd[1597]: time="2024-10-08T20:21:36.313902946Z" level=info msg="StartContainer for \"10eff161be610d9ca0830239517d8ad648ed74a93e774662ddffdd62c3aa5d86\"" Oct 8 20:21:36.363005 systemd[1]: run-containerd-runc-k8s.io-10eff161be610d9ca0830239517d8ad648ed74a93e774662ddffdd62c3aa5d86-runc.NvN50H.mount: Deactivated successfully. Oct 8 20:21:36.403995 containerd[1597]: time="2024-10-08T20:21:36.403711853Z" level=info msg="StopPodSandbox for \"4aad6bd594c4808843f7e6b5ae7b1ea6a1e41b402c15ea308fe89f81bde7df2e\"" Oct 8 20:21:36.406286 containerd[1597]: time="2024-10-08T20:21:36.405858198Z" level=info msg="StopPodSandbox for \"34fdef20a54d7ab59fcd701ff75013b89f11089c2d3ea96cebe455d87fa7afc9\"" Oct 8 20:21:36.460072 systemd-journald[1119]: Under memory pressure, flushing caches. Oct 8 20:21:36.459020 systemd-resolved[1471]: Under memory pressure, flushing caches. Oct 8 20:21:36.459056 systemd-resolved[1471]: Flushed all caches. 
Oct 8 20:21:36.559413 containerd[1597]: time="2024-10-08T20:21:36.558663339Z" level=info msg="StartContainer for \"10eff161be610d9ca0830239517d8ad648ed74a93e774662ddffdd62c3aa5d86\" returns successfully" Oct 8 20:21:36.564713 containerd[1597]: time="2024-10-08T20:21:36.564574229Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\"" Oct 8 20:21:36.652228 containerd[1597]: 2024-10-08 20:21:36.574 [INFO][4246] k8s.go 608: Cleaning up netns ContainerID="4aad6bd594c4808843f7e6b5ae7b1ea6a1e41b402c15ea308fe89f81bde7df2e" Oct 8 20:21:36.652228 containerd[1597]: 2024-10-08 20:21:36.574 [INFO][4246] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="4aad6bd594c4808843f7e6b5ae7b1ea6a1e41b402c15ea308fe89f81bde7df2e" iface="eth0" netns="/var/run/netns/cni-16ad24d7-2838-e6bd-912d-ecad2cd0e475" Oct 8 20:21:36.652228 containerd[1597]: 2024-10-08 20:21:36.577 [INFO][4246] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="4aad6bd594c4808843f7e6b5ae7b1ea6a1e41b402c15ea308fe89f81bde7df2e" iface="eth0" netns="/var/run/netns/cni-16ad24d7-2838-e6bd-912d-ecad2cd0e475" Oct 8 20:21:36.652228 containerd[1597]: 2024-10-08 20:21:36.577 [INFO][4246] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="4aad6bd594c4808843f7e6b5ae7b1ea6a1e41b402c15ea308fe89f81bde7df2e" iface="eth0" netns="/var/run/netns/cni-16ad24d7-2838-e6bd-912d-ecad2cd0e475" Oct 8 20:21:36.652228 containerd[1597]: 2024-10-08 20:21:36.577 [INFO][4246] k8s.go 615: Releasing IP address(es) ContainerID="4aad6bd594c4808843f7e6b5ae7b1ea6a1e41b402c15ea308fe89f81bde7df2e" Oct 8 20:21:36.652228 containerd[1597]: 2024-10-08 20:21:36.577 [INFO][4246] utils.go 188: Calico CNI releasing IP address ContainerID="4aad6bd594c4808843f7e6b5ae7b1ea6a1e41b402c15ea308fe89f81bde7df2e" Oct 8 20:21:36.652228 containerd[1597]: 2024-10-08 20:21:36.624 [INFO][4262] ipam_plugin.go 417: Releasing address using handleID ContainerID="4aad6bd594c4808843f7e6b5ae7b1ea6a1e41b402c15ea308fe89f81bde7df2e" HandleID="k8s-pod-network.4aad6bd594c4808843f7e6b5ae7b1ea6a1e41b402c15ea308fe89f81bde7df2e" Workload="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-coredns--76f75df574--zsvnc-eth0" Oct 8 20:21:36.652228 containerd[1597]: 2024-10-08 20:21:36.625 [INFO][4262] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:21:36.652228 containerd[1597]: 2024-10-08 20:21:36.626 [INFO][4262] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:21:36.652228 containerd[1597]: 2024-10-08 20:21:36.639 [WARNING][4262] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4aad6bd594c4808843f7e6b5ae7b1ea6a1e41b402c15ea308fe89f81bde7df2e" HandleID="k8s-pod-network.4aad6bd594c4808843f7e6b5ae7b1ea6a1e41b402c15ea308fe89f81bde7df2e" Workload="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-coredns--76f75df574--zsvnc-eth0" Oct 8 20:21:36.652228 containerd[1597]: 2024-10-08 20:21:36.639 [INFO][4262] ipam_plugin.go 445: Releasing address using workloadID ContainerID="4aad6bd594c4808843f7e6b5ae7b1ea6a1e41b402c15ea308fe89f81bde7df2e" HandleID="k8s-pod-network.4aad6bd594c4808843f7e6b5ae7b1ea6a1e41b402c15ea308fe89f81bde7df2e" Workload="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-coredns--76f75df574--zsvnc-eth0" Oct 8 20:21:36.652228 containerd[1597]: 2024-10-08 20:21:36.643 [INFO][4262] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:21:36.652228 containerd[1597]: 2024-10-08 20:21:36.646 [INFO][4246] k8s.go 621: Teardown processing complete. ContainerID="4aad6bd594c4808843f7e6b5ae7b1ea6a1e41b402c15ea308fe89f81bde7df2e" Oct 8 20:21:36.653319 containerd[1597]: time="2024-10-08T20:21:36.652734900Z" level=info msg="TearDown network for sandbox \"4aad6bd594c4808843f7e6b5ae7b1ea6a1e41b402c15ea308fe89f81bde7df2e\" successfully" Oct 8 20:21:36.653319 containerd[1597]: time="2024-10-08T20:21:36.652769475Z" level=info msg="StopPodSandbox for \"4aad6bd594c4808843f7e6b5ae7b1ea6a1e41b402c15ea308fe89f81bde7df2e\" returns successfully" Oct 8 20:21:36.654479 containerd[1597]: time="2024-10-08T20:21:36.654187608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-zsvnc,Uid:69d7ca85-a209-4e6f-9a16-eb38c4d84f95,Namespace:kube-system,Attempt:1,}" Oct 8 20:21:36.686296 containerd[1597]: 2024-10-08 20:21:36.604 [INFO][4247] k8s.go 608: Cleaning up netns ContainerID="34fdef20a54d7ab59fcd701ff75013b89f11089c2d3ea96cebe455d87fa7afc9" Oct 8 20:21:36.686296 containerd[1597]: 2024-10-08 20:21:36.605 [INFO][4247] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="34fdef20a54d7ab59fcd701ff75013b89f11089c2d3ea96cebe455d87fa7afc9" iface="eth0" netns="/var/run/netns/cni-709cebbb-216e-b67d-3c66-a175e6461c2d" Oct 8 20:21:36.686296 containerd[1597]: 2024-10-08 20:21:36.606 [INFO][4247] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="34fdef20a54d7ab59fcd701ff75013b89f11089c2d3ea96cebe455d87fa7afc9" iface="eth0" netns="/var/run/netns/cni-709cebbb-216e-b67d-3c66-a175e6461c2d" Oct 8 20:21:36.686296 containerd[1597]: 2024-10-08 20:21:36.608 [INFO][4247] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="34fdef20a54d7ab59fcd701ff75013b89f11089c2d3ea96cebe455d87fa7afc9" iface="eth0" netns="/var/run/netns/cni-709cebbb-216e-b67d-3c66-a175e6461c2d" Oct 8 20:21:36.686296 containerd[1597]: 2024-10-08 20:21:36.608 [INFO][4247] k8s.go 615: Releasing IP address(es) ContainerID="34fdef20a54d7ab59fcd701ff75013b89f11089c2d3ea96cebe455d87fa7afc9" Oct 8 20:21:36.686296 containerd[1597]: 2024-10-08 20:21:36.608 [INFO][4247] utils.go 188: Calico CNI releasing IP address ContainerID="34fdef20a54d7ab59fcd701ff75013b89f11089c2d3ea96cebe455d87fa7afc9" Oct 8 20:21:36.686296 containerd[1597]: 2024-10-08 20:21:36.669 [INFO][4267] ipam_plugin.go 417: Releasing address using handleID ContainerID="34fdef20a54d7ab59fcd701ff75013b89f11089c2d3ea96cebe455d87fa7afc9" HandleID="k8s-pod-network.34fdef20a54d7ab59fcd701ff75013b89f11089c2d3ea96cebe455d87fa7afc9" Workload="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-calico--kube--controllers--6644949fbd--28g75-eth0" Oct 8 20:21:36.686296 containerd[1597]: 2024-10-08 20:21:36.669 [INFO][4267] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:21:36.686296 containerd[1597]: 2024-10-08 20:21:36.669 [INFO][4267] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:21:36.686296 containerd[1597]: 2024-10-08 20:21:36.677 [WARNING][4267] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="34fdef20a54d7ab59fcd701ff75013b89f11089c2d3ea96cebe455d87fa7afc9" HandleID="k8s-pod-network.34fdef20a54d7ab59fcd701ff75013b89f11089c2d3ea96cebe455d87fa7afc9" Workload="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-calico--kube--controllers--6644949fbd--28g75-eth0" Oct 8 20:21:36.686296 containerd[1597]: 2024-10-08 20:21:36.677 [INFO][4267] ipam_plugin.go 445: Releasing address using workloadID ContainerID="34fdef20a54d7ab59fcd701ff75013b89f11089c2d3ea96cebe455d87fa7afc9" HandleID="k8s-pod-network.34fdef20a54d7ab59fcd701ff75013b89f11089c2d3ea96cebe455d87fa7afc9" Workload="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-calico--kube--controllers--6644949fbd--28g75-eth0" Oct 8 20:21:36.686296 containerd[1597]: 2024-10-08 20:21:36.679 [INFO][4267] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:21:36.686296 containerd[1597]: 2024-10-08 20:21:36.684 [INFO][4247] k8s.go 621: Teardown processing complete. ContainerID="34fdef20a54d7ab59fcd701ff75013b89f11089c2d3ea96cebe455d87fa7afc9" Oct 8 20:21:36.687188 containerd[1597]: time="2024-10-08T20:21:36.687034517Z" level=info msg="TearDown network for sandbox \"34fdef20a54d7ab59fcd701ff75013b89f11089c2d3ea96cebe455d87fa7afc9\" successfully" Oct 8 20:21:36.687188 containerd[1597]: time="2024-10-08T20:21:36.687066477Z" level=info msg="StopPodSandbox for \"34fdef20a54d7ab59fcd701ff75013b89f11089c2d3ea96cebe455d87fa7afc9\" returns successfully" Oct 8 20:21:36.689360 containerd[1597]: time="2024-10-08T20:21:36.689336826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6644949fbd-28g75,Uid:8cc7ecf6-b867-4caf-8b32-1704c973cd44,Namespace:calico-system,Attempt:1,}" Oct 8 20:21:36.848989 systemd-networkd[1198]: cali0a0e6af8ddf: Link UP Oct 8 20:21:36.851478 systemd-networkd[1198]: cali0a0e6af8ddf: Gained carrier Oct 8 20:21:36.871359 containerd[1597]: 2024-10-08 20:21:36.743 [INFO][4281] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint 
projectcalico.org/v3} {ci--4081--1--0--6--0b75032dd1.novalocal-k8s-coredns--76f75df574--zsvnc-eth0 coredns-76f75df574- kube-system 69d7ca85-a209-4e6f-9a16-eb38c4d84f95 736 0 2024-10-08 20:20:46 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-1-0-6-0b75032dd1.novalocal coredns-76f75df574-zsvnc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0a0e6af8ddf [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="a6ef0fd7d0b0814ccd19e32a5f1fe340553d59a63122ab41ea197feaddf73dc1" Namespace="kube-system" Pod="coredns-76f75df574-zsvnc" WorkloadEndpoint="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-coredns--76f75df574--zsvnc-" Oct 8 20:21:36.871359 containerd[1597]: 2024-10-08 20:21:36.743 [INFO][4281] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a6ef0fd7d0b0814ccd19e32a5f1fe340553d59a63122ab41ea197feaddf73dc1" Namespace="kube-system" Pod="coredns-76f75df574-zsvnc" WorkloadEndpoint="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-coredns--76f75df574--zsvnc-eth0" Oct 8 20:21:36.871359 containerd[1597]: 2024-10-08 20:21:36.780 [INFO][4298] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a6ef0fd7d0b0814ccd19e32a5f1fe340553d59a63122ab41ea197feaddf73dc1" HandleID="k8s-pod-network.a6ef0fd7d0b0814ccd19e32a5f1fe340553d59a63122ab41ea197feaddf73dc1" Workload="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-coredns--76f75df574--zsvnc-eth0" Oct 8 20:21:36.871359 containerd[1597]: 2024-10-08 20:21:36.795 [INFO][4298] ipam_plugin.go 270: Auto assigning IP ContainerID="a6ef0fd7d0b0814ccd19e32a5f1fe340553d59a63122ab41ea197feaddf73dc1" HandleID="k8s-pod-network.a6ef0fd7d0b0814ccd19e32a5f1fe340553d59a63122ab41ea197feaddf73dc1" Workload="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-coredns--76f75df574--zsvnc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc0002d5b70), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-1-0-6-0b75032dd1.novalocal", "pod":"coredns-76f75df574-zsvnc", "timestamp":"2024-10-08 20:21:36.780945555 +0000 UTC"}, Hostname:"ci-4081-1-0-6-0b75032dd1.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 20:21:36.871359 containerd[1597]: 2024-10-08 20:21:36.795 [INFO][4298] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:21:36.871359 containerd[1597]: 2024-10-08 20:21:36.795 [INFO][4298] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:21:36.871359 containerd[1597]: 2024-10-08 20:21:36.795 [INFO][4298] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-1-0-6-0b75032dd1.novalocal' Oct 8 20:21:36.871359 containerd[1597]: 2024-10-08 20:21:36.797 [INFO][4298] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a6ef0fd7d0b0814ccd19e32a5f1fe340553d59a63122ab41ea197feaddf73dc1" host="ci-4081-1-0-6-0b75032dd1.novalocal" Oct 8 20:21:36.871359 containerd[1597]: 2024-10-08 20:21:36.802 [INFO][4298] ipam.go 372: Looking up existing affinities for host host="ci-4081-1-0-6-0b75032dd1.novalocal" Oct 8 20:21:36.871359 containerd[1597]: 2024-10-08 20:21:36.808 [INFO][4298] ipam.go 489: Trying affinity for 192.168.118.64/26 host="ci-4081-1-0-6-0b75032dd1.novalocal" Oct 8 20:21:36.871359 containerd[1597]: 2024-10-08 20:21:36.811 [INFO][4298] ipam.go 155: Attempting to load block cidr=192.168.118.64/26 host="ci-4081-1-0-6-0b75032dd1.novalocal" Oct 8 20:21:36.871359 containerd[1597]: 2024-10-08 20:21:36.813 [INFO][4298] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.118.64/26 host="ci-4081-1-0-6-0b75032dd1.novalocal" Oct 8 20:21:36.871359 containerd[1597]: 2024-10-08 20:21:36.813 [INFO][4298] ipam.go 1180: Attempting to assign 1 
addresses from block block=192.168.118.64/26 handle="k8s-pod-network.a6ef0fd7d0b0814ccd19e32a5f1fe340553d59a63122ab41ea197feaddf73dc1" host="ci-4081-1-0-6-0b75032dd1.novalocal" Oct 8 20:21:36.871359 containerd[1597]: 2024-10-08 20:21:36.815 [INFO][4298] ipam.go 1685: Creating new handle: k8s-pod-network.a6ef0fd7d0b0814ccd19e32a5f1fe340553d59a63122ab41ea197feaddf73dc1 Oct 8 20:21:36.871359 containerd[1597]: 2024-10-08 20:21:36.821 [INFO][4298] ipam.go 1203: Writing block in order to claim IPs block=192.168.118.64/26 handle="k8s-pod-network.a6ef0fd7d0b0814ccd19e32a5f1fe340553d59a63122ab41ea197feaddf73dc1" host="ci-4081-1-0-6-0b75032dd1.novalocal" Oct 8 20:21:36.871359 containerd[1597]: 2024-10-08 20:21:36.835 [INFO][4298] ipam.go 1216: Successfully claimed IPs: [192.168.118.66/26] block=192.168.118.64/26 handle="k8s-pod-network.a6ef0fd7d0b0814ccd19e32a5f1fe340553d59a63122ab41ea197feaddf73dc1" host="ci-4081-1-0-6-0b75032dd1.novalocal" Oct 8 20:21:36.871359 containerd[1597]: 2024-10-08 20:21:36.836 [INFO][4298] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.118.66/26] handle="k8s-pod-network.a6ef0fd7d0b0814ccd19e32a5f1fe340553d59a63122ab41ea197feaddf73dc1" host="ci-4081-1-0-6-0b75032dd1.novalocal" Oct 8 20:21:36.871359 containerd[1597]: 2024-10-08 20:21:36.836 [INFO][4298] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 8 20:21:36.871359 containerd[1597]: 2024-10-08 20:21:36.837 [INFO][4298] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.118.66/26] IPv6=[] ContainerID="a6ef0fd7d0b0814ccd19e32a5f1fe340553d59a63122ab41ea197feaddf73dc1" HandleID="k8s-pod-network.a6ef0fd7d0b0814ccd19e32a5f1fe340553d59a63122ab41ea197feaddf73dc1" Workload="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-coredns--76f75df574--zsvnc-eth0" Oct 8 20:21:36.875715 containerd[1597]: 2024-10-08 20:21:36.841 [INFO][4281] k8s.go 386: Populated endpoint ContainerID="a6ef0fd7d0b0814ccd19e32a5f1fe340553d59a63122ab41ea197feaddf73dc1" Namespace="kube-system" Pod="coredns-76f75df574-zsvnc" WorkloadEndpoint="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-coredns--76f75df574--zsvnc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--6--0b75032dd1.novalocal-k8s-coredns--76f75df574--zsvnc-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"69d7ca85-a209-4e6f-9a16-eb38c4d84f95", ResourceVersion:"736", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 20, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-6-0b75032dd1.novalocal", ContainerID:"", Pod:"coredns-76f75df574-zsvnc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.118.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", 
"ksa.kube-system.coredns"}, InterfaceName:"cali0a0e6af8ddf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:21:36.875715 containerd[1597]: 2024-10-08 20:21:36.841 [INFO][4281] k8s.go 387: Calico CNI using IPs: [192.168.118.66/32] ContainerID="a6ef0fd7d0b0814ccd19e32a5f1fe340553d59a63122ab41ea197feaddf73dc1" Namespace="kube-system" Pod="coredns-76f75df574-zsvnc" WorkloadEndpoint="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-coredns--76f75df574--zsvnc-eth0" Oct 8 20:21:36.875715 containerd[1597]: 2024-10-08 20:21:36.842 [INFO][4281] dataplane_linux.go 68: Setting the host side veth name to cali0a0e6af8ddf ContainerID="a6ef0fd7d0b0814ccd19e32a5f1fe340553d59a63122ab41ea197feaddf73dc1" Namespace="kube-system" Pod="coredns-76f75df574-zsvnc" WorkloadEndpoint="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-coredns--76f75df574--zsvnc-eth0" Oct 8 20:21:36.875715 containerd[1597]: 2024-10-08 20:21:36.850 [INFO][4281] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="a6ef0fd7d0b0814ccd19e32a5f1fe340553d59a63122ab41ea197feaddf73dc1" Namespace="kube-system" Pod="coredns-76f75df574-zsvnc" WorkloadEndpoint="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-coredns--76f75df574--zsvnc-eth0" Oct 8 20:21:36.875715 containerd[1597]: 2024-10-08 20:21:36.852 [INFO][4281] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a6ef0fd7d0b0814ccd19e32a5f1fe340553d59a63122ab41ea197feaddf73dc1" Namespace="kube-system" Pod="coredns-76f75df574-zsvnc" 
WorkloadEndpoint="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-coredns--76f75df574--zsvnc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--6--0b75032dd1.novalocal-k8s-coredns--76f75df574--zsvnc-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"69d7ca85-a209-4e6f-9a16-eb38c4d84f95", ResourceVersion:"736", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 20, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-6-0b75032dd1.novalocal", ContainerID:"a6ef0fd7d0b0814ccd19e32a5f1fe340553d59a63122ab41ea197feaddf73dc1", Pod:"coredns-76f75df574-zsvnc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.118.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0a0e6af8ddf", MAC:"6a:60:a3:60:2d:d2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:21:36.875715 containerd[1597]: 
2024-10-08 20:21:36.868 [INFO][4281] k8s.go 500: Wrote updated endpoint to datastore ContainerID="a6ef0fd7d0b0814ccd19e32a5f1fe340553d59a63122ab41ea197feaddf73dc1" Namespace="kube-system" Pod="coredns-76f75df574-zsvnc" WorkloadEndpoint="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-coredns--76f75df574--zsvnc-eth0" Oct 8 20:21:36.920456 containerd[1597]: time="2024-10-08T20:21:36.920151965Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:21:36.920731 containerd[1597]: time="2024-10-08T20:21:36.920642690Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:21:36.920984 containerd[1597]: time="2024-10-08T20:21:36.920887291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:21:36.922531 systemd-networkd[1198]: cali42de0926d83: Link UP Oct 8 20:21:36.923530 systemd-networkd[1198]: cali42de0926d83: Gained carrier Oct 8 20:21:36.926477 containerd[1597]: time="2024-10-08T20:21:36.924793904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:21:36.947242 containerd[1597]: 2024-10-08 20:21:36.773 [INFO][4285] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--1--0--6--0b75032dd1.novalocal-k8s-calico--kube--controllers--6644949fbd--28g75-eth0 calico-kube-controllers-6644949fbd- calico-system 8cc7ecf6-b867-4caf-8b32-1704c973cd44 738 0 2024-10-08 20:20:53 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6644949fbd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-1-0-6-0b75032dd1.novalocal calico-kube-controllers-6644949fbd-28g75 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali42de0926d83 [] []}} ContainerID="a61b89a9d8ebff92e5d6273aae5ee90dad4d8ccb0d5632976811802590fbe7ef" Namespace="calico-system" Pod="calico-kube-controllers-6644949fbd-28g75" WorkloadEndpoint="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-calico--kube--controllers--6644949fbd--28g75-" Oct 8 20:21:36.947242 containerd[1597]: 2024-10-08 20:21:36.773 [INFO][4285] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a61b89a9d8ebff92e5d6273aae5ee90dad4d8ccb0d5632976811802590fbe7ef" Namespace="calico-system" Pod="calico-kube-controllers-6644949fbd-28g75" WorkloadEndpoint="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-calico--kube--controllers--6644949fbd--28g75-eth0" Oct 8 20:21:36.947242 containerd[1597]: 2024-10-08 20:21:36.831 [INFO][4306] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a61b89a9d8ebff92e5d6273aae5ee90dad4d8ccb0d5632976811802590fbe7ef" HandleID="k8s-pod-network.a61b89a9d8ebff92e5d6273aae5ee90dad4d8ccb0d5632976811802590fbe7ef" Workload="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-calico--kube--controllers--6644949fbd--28g75-eth0" Oct 8 20:21:36.947242 
containerd[1597]: 2024-10-08 20:21:36.857 [INFO][4306] ipam_plugin.go 270: Auto assigning IP ContainerID="a61b89a9d8ebff92e5d6273aae5ee90dad4d8ccb0d5632976811802590fbe7ef" HandleID="k8s-pod-network.a61b89a9d8ebff92e5d6273aae5ee90dad4d8ccb0d5632976811802590fbe7ef" Workload="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-calico--kube--controllers--6644949fbd--28g75-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000121660), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-1-0-6-0b75032dd1.novalocal", "pod":"calico-kube-controllers-6644949fbd-28g75", "timestamp":"2024-10-08 20:21:36.831466435 +0000 UTC"}, Hostname:"ci-4081-1-0-6-0b75032dd1.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 20:21:36.947242 containerd[1597]: 2024-10-08 20:21:36.857 [INFO][4306] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:21:36.947242 containerd[1597]: 2024-10-08 20:21:36.857 [INFO][4306] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 20:21:36.947242 containerd[1597]: 2024-10-08 20:21:36.857 [INFO][4306] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-1-0-6-0b75032dd1.novalocal' Oct 8 20:21:36.947242 containerd[1597]: 2024-10-08 20:21:36.860 [INFO][4306] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a61b89a9d8ebff92e5d6273aae5ee90dad4d8ccb0d5632976811802590fbe7ef" host="ci-4081-1-0-6-0b75032dd1.novalocal" Oct 8 20:21:36.947242 containerd[1597]: 2024-10-08 20:21:36.876 [INFO][4306] ipam.go 372: Looking up existing affinities for host host="ci-4081-1-0-6-0b75032dd1.novalocal" Oct 8 20:21:36.947242 containerd[1597]: 2024-10-08 20:21:36.885 [INFO][4306] ipam.go 489: Trying affinity for 192.168.118.64/26 host="ci-4081-1-0-6-0b75032dd1.novalocal" Oct 8 20:21:36.947242 containerd[1597]: 2024-10-08 20:21:36.887 [INFO][4306] ipam.go 155: Attempting to load block cidr=192.168.118.64/26 host="ci-4081-1-0-6-0b75032dd1.novalocal" Oct 8 20:21:36.947242 containerd[1597]: 2024-10-08 20:21:36.890 [INFO][4306] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.118.64/26 host="ci-4081-1-0-6-0b75032dd1.novalocal" Oct 8 20:21:36.947242 containerd[1597]: 2024-10-08 20:21:36.890 [INFO][4306] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.118.64/26 handle="k8s-pod-network.a61b89a9d8ebff92e5d6273aae5ee90dad4d8ccb0d5632976811802590fbe7ef" host="ci-4081-1-0-6-0b75032dd1.novalocal" Oct 8 20:21:36.947242 containerd[1597]: 2024-10-08 20:21:36.893 [INFO][4306] ipam.go 1685: Creating new handle: k8s-pod-network.a61b89a9d8ebff92e5d6273aae5ee90dad4d8ccb0d5632976811802590fbe7ef Oct 8 20:21:36.947242 containerd[1597]: 2024-10-08 20:21:36.904 [INFO][4306] ipam.go 1203: Writing block in order to claim IPs block=192.168.118.64/26 handle="k8s-pod-network.a61b89a9d8ebff92e5d6273aae5ee90dad4d8ccb0d5632976811802590fbe7ef" host="ci-4081-1-0-6-0b75032dd1.novalocal" Oct 8 20:21:36.947242 containerd[1597]: 2024-10-08 20:21:36.914 [INFO][4306] 
ipam.go 1216: Successfully claimed IPs: [192.168.118.67/26] block=192.168.118.64/26 handle="k8s-pod-network.a61b89a9d8ebff92e5d6273aae5ee90dad4d8ccb0d5632976811802590fbe7ef" host="ci-4081-1-0-6-0b75032dd1.novalocal" Oct 8 20:21:36.947242 containerd[1597]: 2024-10-08 20:21:36.914 [INFO][4306] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.118.67/26] handle="k8s-pod-network.a61b89a9d8ebff92e5d6273aae5ee90dad4d8ccb0d5632976811802590fbe7ef" host="ci-4081-1-0-6-0b75032dd1.novalocal" Oct 8 20:21:36.947242 containerd[1597]: 2024-10-08 20:21:36.915 [INFO][4306] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:21:36.947242 containerd[1597]: 2024-10-08 20:21:36.915 [INFO][4306] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.118.67/26] IPv6=[] ContainerID="a61b89a9d8ebff92e5d6273aae5ee90dad4d8ccb0d5632976811802590fbe7ef" HandleID="k8s-pod-network.a61b89a9d8ebff92e5d6273aae5ee90dad4d8ccb0d5632976811802590fbe7ef" Workload="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-calico--kube--controllers--6644949fbd--28g75-eth0" Oct 8 20:21:36.947913 containerd[1597]: 2024-10-08 20:21:36.918 [INFO][4285] k8s.go 386: Populated endpoint ContainerID="a61b89a9d8ebff92e5d6273aae5ee90dad4d8ccb0d5632976811802590fbe7ef" Namespace="calico-system" Pod="calico-kube-controllers-6644949fbd-28g75" WorkloadEndpoint="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-calico--kube--controllers--6644949fbd--28g75-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--6--0b75032dd1.novalocal-k8s-calico--kube--controllers--6644949fbd--28g75-eth0", GenerateName:"calico-kube-controllers-6644949fbd-", Namespace:"calico-system", SelfLink:"", UID:"8cc7ecf6-b867-4caf-8b32-1704c973cd44", ResourceVersion:"738", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 20, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6644949fbd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-6-0b75032dd1.novalocal", ContainerID:"", Pod:"calico-kube-controllers-6644949fbd-28g75", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.118.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali42de0926d83", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:21:36.947913 containerd[1597]: 2024-10-08 20:21:36.918 [INFO][4285] k8s.go 387: Calico CNI using IPs: [192.168.118.67/32] ContainerID="a61b89a9d8ebff92e5d6273aae5ee90dad4d8ccb0d5632976811802590fbe7ef" Namespace="calico-system" Pod="calico-kube-controllers-6644949fbd-28g75" WorkloadEndpoint="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-calico--kube--controllers--6644949fbd--28g75-eth0" Oct 8 20:21:36.947913 containerd[1597]: 2024-10-08 20:21:36.919 [INFO][4285] dataplane_linux.go 68: Setting the host side veth name to cali42de0926d83 ContainerID="a61b89a9d8ebff92e5d6273aae5ee90dad4d8ccb0d5632976811802590fbe7ef" Namespace="calico-system" Pod="calico-kube-controllers-6644949fbd-28g75" WorkloadEndpoint="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-calico--kube--controllers--6644949fbd--28g75-eth0" Oct 8 20:21:36.947913 containerd[1597]: 2024-10-08 20:21:36.924 [INFO][4285] dataplane_linux.go 479: Disabling IPv4 forwarding 
ContainerID="a61b89a9d8ebff92e5d6273aae5ee90dad4d8ccb0d5632976811802590fbe7ef" Namespace="calico-system" Pod="calico-kube-controllers-6644949fbd-28g75" WorkloadEndpoint="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-calico--kube--controllers--6644949fbd--28g75-eth0" Oct 8 20:21:36.947913 containerd[1597]: 2024-10-08 20:21:36.926 [INFO][4285] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a61b89a9d8ebff92e5d6273aae5ee90dad4d8ccb0d5632976811802590fbe7ef" Namespace="calico-system" Pod="calico-kube-controllers-6644949fbd-28g75" WorkloadEndpoint="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-calico--kube--controllers--6644949fbd--28g75-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--6--0b75032dd1.novalocal-k8s-calico--kube--controllers--6644949fbd--28g75-eth0", GenerateName:"calico-kube-controllers-6644949fbd-", Namespace:"calico-system", SelfLink:"", UID:"8cc7ecf6-b867-4caf-8b32-1704c973cd44", ResourceVersion:"738", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 20, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6644949fbd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-6-0b75032dd1.novalocal", ContainerID:"a61b89a9d8ebff92e5d6273aae5ee90dad4d8ccb0d5632976811802590fbe7ef", Pod:"calico-kube-controllers-6644949fbd-28g75", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", 
IPNetworks:[]string{"192.168.118.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali42de0926d83", MAC:"76:d6:8c:8a:8d:b0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:21:36.947913 containerd[1597]: 2024-10-08 20:21:36.943 [INFO][4285] k8s.go 500: Wrote updated endpoint to datastore ContainerID="a61b89a9d8ebff92e5d6273aae5ee90dad4d8ccb0d5632976811802590fbe7ef" Namespace="calico-system" Pod="calico-kube-controllers-6644949fbd-28g75" WorkloadEndpoint="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-calico--kube--controllers--6644949fbd--28g75-eth0" Oct 8 20:21:37.025775 containerd[1597]: time="2024-10-08T20:21:37.025432538Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:21:37.025775 containerd[1597]: time="2024-10-08T20:21:37.025518320Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:21:37.025775 containerd[1597]: time="2024-10-08T20:21:37.025538949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:21:37.025775 containerd[1597]: time="2024-10-08T20:21:37.025652994Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:21:37.041495 containerd[1597]: time="2024-10-08T20:21:37.041180365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-zsvnc,Uid:69d7ca85-a209-4e6f-9a16-eb38c4d84f95,Namespace:kube-system,Attempt:1,} returns sandbox id \"a6ef0fd7d0b0814ccd19e32a5f1fe340553d59a63122ab41ea197feaddf73dc1\"" Oct 8 20:21:37.049771 containerd[1597]: time="2024-10-08T20:21:37.049399124Z" level=info msg="CreateContainer within sandbox \"a6ef0fd7d0b0814ccd19e32a5f1fe340553d59a63122ab41ea197feaddf73dc1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 8 20:21:37.079373 containerd[1597]: time="2024-10-08T20:21:37.079226865Z" level=info msg="CreateContainer within sandbox \"a6ef0fd7d0b0814ccd19e32a5f1fe340553d59a63122ab41ea197feaddf73dc1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dcf743b3ad062305d3ea92bd44e40f374c3f34ac7f6fc2517475c40ab12e4d14\"" Oct 8 20:21:37.080503 containerd[1597]: time="2024-10-08T20:21:37.080259572Z" level=info msg="StartContainer for \"dcf743b3ad062305d3ea92bd44e40f374c3f34ac7f6fc2517475c40ab12e4d14\"" Oct 8 20:21:37.103371 containerd[1597]: time="2024-10-08T20:21:37.103219812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6644949fbd-28g75,Uid:8cc7ecf6-b867-4caf-8b32-1704c973cd44,Namespace:calico-system,Attempt:1,} returns sandbox id \"a61b89a9d8ebff92e5d6273aae5ee90dad4d8ccb0d5632976811802590fbe7ef\"" Oct 8 20:21:37.172137 containerd[1597]: time="2024-10-08T20:21:37.171841732Z" level=info msg="StartContainer for \"dcf743b3ad062305d3ea92bd44e40f374c3f34ac7f6fc2517475c40ab12e4d14\" returns successfully" Oct 8 20:21:37.311259 systemd[1]: run-netns-cni\x2d709cebbb\x2d216e\x2db67d\x2d3c66\x2da175e6461c2d.mount: Deactivated successfully. Oct 8 20:21:37.311434 systemd[1]: run-netns-cni\x2d16ad24d7\x2d2838\x2de6bd\x2d912d\x2decad2cd0e475.mount: Deactivated successfully. 
Oct 8 20:21:37.399649 containerd[1597]: time="2024-10-08T20:21:37.399554006Z" level=info msg="StopPodSandbox for \"5c69c1c44bd69df7f16fa4d63e47283fcd23396e4f2de0f8bbc77d98ae66ab83\"" Oct 8 20:21:37.511672 containerd[1597]: 2024-10-08 20:21:37.468 [INFO][4469] k8s.go 608: Cleaning up netns ContainerID="5c69c1c44bd69df7f16fa4d63e47283fcd23396e4f2de0f8bbc77d98ae66ab83" Oct 8 20:21:37.511672 containerd[1597]: 2024-10-08 20:21:37.468 [INFO][4469] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="5c69c1c44bd69df7f16fa4d63e47283fcd23396e4f2de0f8bbc77d98ae66ab83" iface="eth0" netns="/var/run/netns/cni-eb24aff3-0425-d8bf-e621-ea3b747c6e6b" Oct 8 20:21:37.511672 containerd[1597]: 2024-10-08 20:21:37.468 [INFO][4469] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="5c69c1c44bd69df7f16fa4d63e47283fcd23396e4f2de0f8bbc77d98ae66ab83" iface="eth0" netns="/var/run/netns/cni-eb24aff3-0425-d8bf-e621-ea3b747c6e6b" Oct 8 20:21:37.511672 containerd[1597]: 2024-10-08 20:21:37.469 [INFO][4469] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="5c69c1c44bd69df7f16fa4d63e47283fcd23396e4f2de0f8bbc77d98ae66ab83" iface="eth0" netns="/var/run/netns/cni-eb24aff3-0425-d8bf-e621-ea3b747c6e6b" Oct 8 20:21:37.511672 containerd[1597]: 2024-10-08 20:21:37.469 [INFO][4469] k8s.go 615: Releasing IP address(es) ContainerID="5c69c1c44bd69df7f16fa4d63e47283fcd23396e4f2de0f8bbc77d98ae66ab83" Oct 8 20:21:37.511672 containerd[1597]: 2024-10-08 20:21:37.469 [INFO][4469] utils.go 188: Calico CNI releasing IP address ContainerID="5c69c1c44bd69df7f16fa4d63e47283fcd23396e4f2de0f8bbc77d98ae66ab83" Oct 8 20:21:37.511672 containerd[1597]: 2024-10-08 20:21:37.494 [INFO][4475] ipam_plugin.go 417: Releasing address using handleID ContainerID="5c69c1c44bd69df7f16fa4d63e47283fcd23396e4f2de0f8bbc77d98ae66ab83" HandleID="k8s-pod-network.5c69c1c44bd69df7f16fa4d63e47283fcd23396e4f2de0f8bbc77d98ae66ab83" Workload="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-coredns--76f75df574--wdqtm-eth0" Oct 8 20:21:37.511672 containerd[1597]: 2024-10-08 20:21:37.494 [INFO][4475] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:21:37.511672 containerd[1597]: 2024-10-08 20:21:37.494 [INFO][4475] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:21:37.511672 containerd[1597]: 2024-10-08 20:21:37.501 [WARNING][4475] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5c69c1c44bd69df7f16fa4d63e47283fcd23396e4f2de0f8bbc77d98ae66ab83" HandleID="k8s-pod-network.5c69c1c44bd69df7f16fa4d63e47283fcd23396e4f2de0f8bbc77d98ae66ab83" Workload="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-coredns--76f75df574--wdqtm-eth0" Oct 8 20:21:37.511672 containerd[1597]: 2024-10-08 20:21:37.501 [INFO][4475] ipam_plugin.go 445: Releasing address using workloadID ContainerID="5c69c1c44bd69df7f16fa4d63e47283fcd23396e4f2de0f8bbc77d98ae66ab83" HandleID="k8s-pod-network.5c69c1c44bd69df7f16fa4d63e47283fcd23396e4f2de0f8bbc77d98ae66ab83" Workload="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-coredns--76f75df574--wdqtm-eth0" Oct 8 20:21:37.511672 containerd[1597]: 2024-10-08 20:21:37.503 [INFO][4475] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:21:37.511672 containerd[1597]: 2024-10-08 20:21:37.506 [INFO][4469] k8s.go 621: Teardown processing complete. ContainerID="5c69c1c44bd69df7f16fa4d63e47283fcd23396e4f2de0f8bbc77d98ae66ab83" Oct 8 20:21:37.515998 containerd[1597]: time="2024-10-08T20:21:37.512059057Z" level=info msg="TearDown network for sandbox \"5c69c1c44bd69df7f16fa4d63e47283fcd23396e4f2de0f8bbc77d98ae66ab83\" successfully" Oct 8 20:21:37.515998 containerd[1597]: time="2024-10-08T20:21:37.515011812Z" level=info msg="StopPodSandbox for \"5c69c1c44bd69df7f16fa4d63e47283fcd23396e4f2de0f8bbc77d98ae66ab83\" returns successfully" Oct 8 20:21:37.517197 systemd[1]: run-netns-cni\x2deb24aff3\x2d0425\x2dd8bf\x2de621\x2dea3b747c6e6b.mount: Deactivated successfully. 
Oct 8 20:21:37.520284 containerd[1597]: time="2024-10-08T20:21:37.517495683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-wdqtm,Uid:2e3f14d6-abc0-4ffa-afc1-278f613cc677,Namespace:kube-system,Attempt:1,}" Oct 8 20:21:37.747237 systemd-networkd[1198]: cali56842a4e00e: Link UP Oct 8 20:21:37.749633 systemd-networkd[1198]: cali56842a4e00e: Gained carrier Oct 8 20:21:37.780723 containerd[1597]: 2024-10-08 20:21:37.638 [INFO][4483] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--1--0--6--0b75032dd1.novalocal-k8s-coredns--76f75df574--wdqtm-eth0 coredns-76f75df574- kube-system 2e3f14d6-abc0-4ffa-afc1-278f613cc677 752 0 2024-10-08 20:20:46 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-1-0-6-0b75032dd1.novalocal coredns-76f75df574-wdqtm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali56842a4e00e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="ac86c015d5d42487893bdf23f89541d6eb32494ca0e91befccc9b59f27e0f7aa" Namespace="kube-system" Pod="coredns-76f75df574-wdqtm" WorkloadEndpoint="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-coredns--76f75df574--wdqtm-" Oct 8 20:21:37.780723 containerd[1597]: 2024-10-08 20:21:37.638 [INFO][4483] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ac86c015d5d42487893bdf23f89541d6eb32494ca0e91befccc9b59f27e0f7aa" Namespace="kube-system" Pod="coredns-76f75df574-wdqtm" WorkloadEndpoint="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-coredns--76f75df574--wdqtm-eth0" Oct 8 20:21:37.780723 containerd[1597]: 2024-10-08 20:21:37.694 [INFO][4495] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ac86c015d5d42487893bdf23f89541d6eb32494ca0e91befccc9b59f27e0f7aa" 
HandleID="k8s-pod-network.ac86c015d5d42487893bdf23f89541d6eb32494ca0e91befccc9b59f27e0f7aa" Workload="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-coredns--76f75df574--wdqtm-eth0" Oct 8 20:21:37.780723 containerd[1597]: 2024-10-08 20:21:37.706 [INFO][4495] ipam_plugin.go 270: Auto assigning IP ContainerID="ac86c015d5d42487893bdf23f89541d6eb32494ca0e91befccc9b59f27e0f7aa" HandleID="k8s-pod-network.ac86c015d5d42487893bdf23f89541d6eb32494ca0e91befccc9b59f27e0f7aa" Workload="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-coredns--76f75df574--wdqtm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290f60), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-1-0-6-0b75032dd1.novalocal", "pod":"coredns-76f75df574-wdqtm", "timestamp":"2024-10-08 20:21:37.693991747 +0000 UTC"}, Hostname:"ci-4081-1-0-6-0b75032dd1.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 20:21:37.780723 containerd[1597]: 2024-10-08 20:21:37.706 [INFO][4495] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:21:37.780723 containerd[1597]: 2024-10-08 20:21:37.706 [INFO][4495] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 20:21:37.780723 containerd[1597]: 2024-10-08 20:21:37.706 [INFO][4495] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-1-0-6-0b75032dd1.novalocal' Oct 8 20:21:37.780723 containerd[1597]: 2024-10-08 20:21:37.708 [INFO][4495] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ac86c015d5d42487893bdf23f89541d6eb32494ca0e91befccc9b59f27e0f7aa" host="ci-4081-1-0-6-0b75032dd1.novalocal" Oct 8 20:21:37.780723 containerd[1597]: 2024-10-08 20:21:37.712 [INFO][4495] ipam.go 372: Looking up existing affinities for host host="ci-4081-1-0-6-0b75032dd1.novalocal" Oct 8 20:21:37.780723 containerd[1597]: 2024-10-08 20:21:37.717 [INFO][4495] ipam.go 489: Trying affinity for 192.168.118.64/26 host="ci-4081-1-0-6-0b75032dd1.novalocal" Oct 8 20:21:37.780723 containerd[1597]: 2024-10-08 20:21:37.719 [INFO][4495] ipam.go 155: Attempting to load block cidr=192.168.118.64/26 host="ci-4081-1-0-6-0b75032dd1.novalocal" Oct 8 20:21:37.780723 containerd[1597]: 2024-10-08 20:21:37.723 [INFO][4495] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.118.64/26 host="ci-4081-1-0-6-0b75032dd1.novalocal" Oct 8 20:21:37.780723 containerd[1597]: 2024-10-08 20:21:37.723 [INFO][4495] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.118.64/26 handle="k8s-pod-network.ac86c015d5d42487893bdf23f89541d6eb32494ca0e91befccc9b59f27e0f7aa" host="ci-4081-1-0-6-0b75032dd1.novalocal" Oct 8 20:21:37.780723 containerd[1597]: 2024-10-08 20:21:37.725 [INFO][4495] ipam.go 1685: Creating new handle: k8s-pod-network.ac86c015d5d42487893bdf23f89541d6eb32494ca0e91befccc9b59f27e0f7aa Oct 8 20:21:37.780723 containerd[1597]: 2024-10-08 20:21:37.732 [INFO][4495] ipam.go 1203: Writing block in order to claim IPs block=192.168.118.64/26 handle="k8s-pod-network.ac86c015d5d42487893bdf23f89541d6eb32494ca0e91befccc9b59f27e0f7aa" host="ci-4081-1-0-6-0b75032dd1.novalocal" Oct 8 20:21:37.780723 containerd[1597]: 2024-10-08 20:21:37.739 [INFO][4495] 
ipam.go 1216: Successfully claimed IPs: [192.168.118.68/26] block=192.168.118.64/26 handle="k8s-pod-network.ac86c015d5d42487893bdf23f89541d6eb32494ca0e91befccc9b59f27e0f7aa" host="ci-4081-1-0-6-0b75032dd1.novalocal" Oct 8 20:21:37.780723 containerd[1597]: 2024-10-08 20:21:37.740 [INFO][4495] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.118.68/26] handle="k8s-pod-network.ac86c015d5d42487893bdf23f89541d6eb32494ca0e91befccc9b59f27e0f7aa" host="ci-4081-1-0-6-0b75032dd1.novalocal" Oct 8 20:21:37.780723 containerd[1597]: 2024-10-08 20:21:37.740 [INFO][4495] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:21:37.780723 containerd[1597]: 2024-10-08 20:21:37.740 [INFO][4495] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.118.68/26] IPv6=[] ContainerID="ac86c015d5d42487893bdf23f89541d6eb32494ca0e91befccc9b59f27e0f7aa" HandleID="k8s-pod-network.ac86c015d5d42487893bdf23f89541d6eb32494ca0e91befccc9b59f27e0f7aa" Workload="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-coredns--76f75df574--wdqtm-eth0" Oct 8 20:21:37.784914 containerd[1597]: 2024-10-08 20:21:37.742 [INFO][4483] k8s.go 386: Populated endpoint ContainerID="ac86c015d5d42487893bdf23f89541d6eb32494ca0e91befccc9b59f27e0f7aa" Namespace="kube-system" Pod="coredns-76f75df574-wdqtm" WorkloadEndpoint="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-coredns--76f75df574--wdqtm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--6--0b75032dd1.novalocal-k8s-coredns--76f75df574--wdqtm-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"2e3f14d6-abc0-4ffa-afc1-278f613cc677", ResourceVersion:"752", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 20, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", 
"projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-6-0b75032dd1.novalocal", ContainerID:"", Pod:"coredns-76f75df574-wdqtm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.118.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali56842a4e00e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:21:37.784914 containerd[1597]: 2024-10-08 20:21:37.742 [INFO][4483] k8s.go 387: Calico CNI using IPs: [192.168.118.68/32] ContainerID="ac86c015d5d42487893bdf23f89541d6eb32494ca0e91befccc9b59f27e0f7aa" Namespace="kube-system" Pod="coredns-76f75df574-wdqtm" WorkloadEndpoint="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-coredns--76f75df574--wdqtm-eth0" Oct 8 20:21:37.784914 containerd[1597]: 2024-10-08 20:21:37.743 [INFO][4483] dataplane_linux.go 68: Setting the host side veth name to cali56842a4e00e ContainerID="ac86c015d5d42487893bdf23f89541d6eb32494ca0e91befccc9b59f27e0f7aa" Namespace="kube-system" Pod="coredns-76f75df574-wdqtm" WorkloadEndpoint="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-coredns--76f75df574--wdqtm-eth0" Oct 8 20:21:37.784914 containerd[1597]: 2024-10-08 20:21:37.749 
[INFO][4483] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="ac86c015d5d42487893bdf23f89541d6eb32494ca0e91befccc9b59f27e0f7aa" Namespace="kube-system" Pod="coredns-76f75df574-wdqtm" WorkloadEndpoint="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-coredns--76f75df574--wdqtm-eth0" Oct 8 20:21:37.784914 containerd[1597]: 2024-10-08 20:21:37.750 [INFO][4483] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ac86c015d5d42487893bdf23f89541d6eb32494ca0e91befccc9b59f27e0f7aa" Namespace="kube-system" Pod="coredns-76f75df574-wdqtm" WorkloadEndpoint="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-coredns--76f75df574--wdqtm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--6--0b75032dd1.novalocal-k8s-coredns--76f75df574--wdqtm-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"2e3f14d6-abc0-4ffa-afc1-278f613cc677", ResourceVersion:"752", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 20, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-6-0b75032dd1.novalocal", ContainerID:"ac86c015d5d42487893bdf23f89541d6eb32494ca0e91befccc9b59f27e0f7aa", Pod:"coredns-76f75df574-wdqtm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.118.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"cali56842a4e00e", MAC:"d2:63:7e:c9:e9:6a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:21:37.784914 containerd[1597]: 2024-10-08 20:21:37.773 [INFO][4483] k8s.go 500: Wrote updated endpoint to datastore ContainerID="ac86c015d5d42487893bdf23f89541d6eb32494ca0e91befccc9b59f27e0f7aa" Namespace="kube-system" Pod="coredns-76f75df574-wdqtm" WorkloadEndpoint="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-coredns--76f75df574--wdqtm-eth0" Oct 8 20:21:37.813052 containerd[1597]: time="2024-10-08T20:21:37.812716159Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:21:37.813052 containerd[1597]: time="2024-10-08T20:21:37.812800909Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:21:37.813052 containerd[1597]: time="2024-10-08T20:21:37.812820656Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:21:37.813390 containerd[1597]: time="2024-10-08T20:21:37.812926996Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:21:37.825305 kubelet[2889]: I1008 20:21:37.822722 2889 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-zsvnc" podStartSLOduration=51.822670057 podStartE2EDuration="51.822670057s" podCreationTimestamp="2024-10-08 20:20:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 20:21:37.821787975 +0000 UTC m=+65.765737967" watchObservedRunningTime="2024-10-08 20:21:37.822670057 +0000 UTC m=+65.766620039" Oct 8 20:21:37.918566 containerd[1597]: time="2024-10-08T20:21:37.918485986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-wdqtm,Uid:2e3f14d6-abc0-4ffa-afc1-278f613cc677,Namespace:kube-system,Attempt:1,} returns sandbox id \"ac86c015d5d42487893bdf23f89541d6eb32494ca0e91befccc9b59f27e0f7aa\"" Oct 8 20:21:37.931185 containerd[1597]: time="2024-10-08T20:21:37.931147458Z" level=info msg="CreateContainer within sandbox \"ac86c015d5d42487893bdf23f89541d6eb32494ca0e91befccc9b59f27e0f7aa\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 8 20:21:37.965166 containerd[1597]: time="2024-10-08T20:21:37.965063724Z" level=info msg="CreateContainer within sandbox \"ac86c015d5d42487893bdf23f89541d6eb32494ca0e91befccc9b59f27e0f7aa\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f4b09462d0aee3de2370a0867032cce397664d0589b43e483e003c83ac63d2ad\"" Oct 8 20:21:37.967413 containerd[1597]: time="2024-10-08T20:21:37.966300055Z" level=info msg="StartContainer for \"f4b09462d0aee3de2370a0867032cce397664d0589b43e483e003c83ac63d2ad\"" Oct 8 20:21:38.036885 containerd[1597]: time="2024-10-08T20:21:38.036681294Z" level=info msg="StartContainer for \"f4b09462d0aee3de2370a0867032cce397664d0589b43e483e003c83ac63d2ad\" returns successfully" Oct 8 20:21:38.825139 systemd-networkd[1198]: cali42de0926d83: Gained IPv6LL Oct 
8 20:21:38.875868 kubelet[2889]: I1008 20:21:38.875826 2889 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-wdqtm" podStartSLOduration=52.875443129 podStartE2EDuration="52.875443129s" podCreationTimestamp="2024-10-08 20:20:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 20:21:38.854570067 +0000 UTC m=+66.798520049" watchObservedRunningTime="2024-10-08 20:21:38.875443129 +0000 UTC m=+66.819393111" Oct 8 20:21:38.890150 systemd-networkd[1198]: cali56842a4e00e: Gained IPv6LL Oct 8 20:21:38.890475 systemd-networkd[1198]: cali0a0e6af8ddf: Gained IPv6LL Oct 8 20:21:38.950599 containerd[1597]: time="2024-10-08T20:21:38.950560255Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:21:38.952540 containerd[1597]: time="2024-10-08T20:21:38.952502927Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1: active requests=0, bytes read=12907822" Oct 8 20:21:38.954209 containerd[1597]: time="2024-10-08T20:21:38.954156061Z" level=info msg="ImageCreate event name:\"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:21:38.960538 containerd[1597]: time="2024-10-08T20:21:38.960511467Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:21:38.961625 containerd[1597]: time="2024-10-08T20:21:38.961489530Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" with image id \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\", repo tag 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\", size \"14400175\" in 2.396843265s" Oct 8 20:21:38.961625 containerd[1597]: time="2024-10-08T20:21:38.961537561Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" returns image reference \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\"" Oct 8 20:21:38.964330 containerd[1597]: time="2024-10-08T20:21:38.964184889Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\"" Oct 8 20:21:38.966592 containerd[1597]: time="2024-10-08T20:21:38.966348326Z" level=info msg="CreateContainer within sandbox \"be276f276decd07f604d09bebaa98629b68245166544867f68c2050714b3289c\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Oct 8 20:21:38.992848 containerd[1597]: time="2024-10-08T20:21:38.992182656Z" level=info msg="CreateContainer within sandbox \"be276f276decd07f604d09bebaa98629b68245166544867f68c2050714b3289c\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"1f9e379a0d4e0da60874b4b064ca8ff63b2a515a59923f600da00fae0cfcf3c6\"" Oct 8 20:21:38.993070 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount715052588.mount: Deactivated successfully. 
Oct 8 20:21:38.997630 containerd[1597]: time="2024-10-08T20:21:38.994280400Z" level=info msg="StartContainer for \"1f9e379a0d4e0da60874b4b064ca8ff63b2a515a59923f600da00fae0cfcf3c6\"" Oct 8 20:21:39.064533 containerd[1597]: time="2024-10-08T20:21:39.064490977Z" level=info msg="StartContainer for \"1f9e379a0d4e0da60874b4b064ca8ff63b2a515a59923f600da00fae0cfcf3c6\" returns successfully" Oct 8 20:21:39.766990 kubelet[2889]: I1008 20:21:39.766502 2889 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Oct 8 20:21:39.785088 kubelet[2889]: I1008 20:21:39.784932 2889 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Oct 8 20:21:39.890152 kubelet[2889]: I1008 20:21:39.890075 2889 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-cbc7k" podStartSLOduration=42.13734299 podStartE2EDuration="46.890029024s" podCreationTimestamp="2024-10-08 20:20:53 +0000 UTC" firstStartedPulling="2024-10-08 20:21:34.209259043 +0000 UTC m=+62.153209026" lastFinishedPulling="2024-10-08 20:21:38.961945068 +0000 UTC m=+66.905895060" observedRunningTime="2024-10-08 20:21:39.888035939 +0000 UTC m=+67.831985932" watchObservedRunningTime="2024-10-08 20:21:39.890029024 +0000 UTC m=+67.833979017" Oct 8 20:21:40.299330 systemd-journald[1119]: Under memory pressure, flushing caches. Oct 8 20:21:40.298104 systemd-resolved[1471]: Under memory pressure, flushing caches. Oct 8 20:21:40.298163 systemd-resolved[1471]: Flushed all caches. Oct 8 20:21:42.346600 systemd-journald[1119]: Under memory pressure, flushing caches. Oct 8 20:21:42.346493 systemd-resolved[1471]: Under memory pressure, flushing caches. Oct 8 20:21:42.346502 systemd-resolved[1471]: Flushed all caches. 
Oct 8 20:21:43.199422 containerd[1597]: time="2024-10-08T20:21:43.199317111Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:21:43.202986 containerd[1597]: time="2024-10-08T20:21:43.201890138Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.1: active requests=0, bytes read=33507125" Oct 8 20:21:43.206891 containerd[1597]: time="2024-10-08T20:21:43.206464393Z" level=info msg="ImageCreate event name:\"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:21:43.212708 containerd[1597]: time="2024-10-08T20:21:43.212573229Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:21:43.244356 containerd[1597]: time="2024-10-08T20:21:43.240046011Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" with image id \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\", size \"34999494\" in 4.275726719s" Oct 8 20:21:43.244356 containerd[1597]: time="2024-10-08T20:21:43.240156159Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" returns image reference \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\"" Oct 8 20:21:43.287033 containerd[1597]: time="2024-10-08T20:21:43.286999996Z" level=info msg="CreateContainer within sandbox \"a61b89a9d8ebff92e5d6273aae5ee90dad4d8ccb0d5632976811802590fbe7ef\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Oct 8 
20:21:43.309525 containerd[1597]: time="2024-10-08T20:21:43.309462953Z" level=info msg="CreateContainer within sandbox \"a61b89a9d8ebff92e5d6273aae5ee90dad4d8ccb0d5632976811802590fbe7ef\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"b204bbd54d4ba5e203f717ba9c83bb2ce635f1edf19b2ffd55719e69c7b93836\"" Oct 8 20:21:43.311364 containerd[1597]: time="2024-10-08T20:21:43.310178431Z" level=info msg="StartContainer for \"b204bbd54d4ba5e203f717ba9c83bb2ce635f1edf19b2ffd55719e69c7b93836\"" Oct 8 20:21:43.397988 containerd[1597]: time="2024-10-08T20:21:43.397917751Z" level=info msg="StartContainer for \"b204bbd54d4ba5e203f717ba9c83bb2ce635f1edf19b2ffd55719e69c7b93836\" returns successfully" Oct 8 20:21:43.995655 kubelet[2889]: I1008 20:21:43.995610 2889 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6644949fbd-28g75" podStartSLOduration=44.860123951 podStartE2EDuration="50.995553706s" podCreationTimestamp="2024-10-08 20:20:53 +0000 UTC" firstStartedPulling="2024-10-08 20:21:37.107785906 +0000 UTC m=+65.051735888" lastFinishedPulling="2024-10-08 20:21:43.243215611 +0000 UTC m=+71.187165643" observedRunningTime="2024-10-08 20:21:43.905591889 +0000 UTC m=+71.849541881" watchObservedRunningTime="2024-10-08 20:21:43.995553706 +0000 UTC m=+71.939503688" Oct 8 20:21:45.069500 systemd[1]: Started sshd@9-172.24.4.55:22-172.24.4.1:53566.service - OpenSSH per-connection server daemon (172.24.4.1:53566). Oct 8 20:21:46.644617 sshd[4764]: Accepted publickey for core from 172.24.4.1 port 53566 ssh2: RSA SHA256:N4tAxOYyt600zP8LzVHN9krjQqk3csZTCmZq/eMm2uA Oct 8 20:21:46.648113 sshd[4764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:21:46.660910 systemd-logind[1572]: New session 12 of user core. Oct 8 20:21:46.669604 systemd[1]: Started session-12.scope - Session 12 of User core. 
Oct 8 20:21:47.950351 sshd[4764]: pam_unix(sshd:session): session closed for user core Oct 8 20:21:47.960610 systemd[1]: sshd@9-172.24.4.55:22-172.24.4.1:53566.service: Deactivated successfully. Oct 8 20:21:47.970385 systemd-logind[1572]: Session 12 logged out. Waiting for processes to exit. Oct 8 20:21:47.971462 systemd[1]: session-12.scope: Deactivated successfully. Oct 8 20:21:47.977495 systemd-logind[1572]: Removed session 12. Oct 8 20:21:48.297410 systemd-resolved[1471]: Under memory pressure, flushing caches. Oct 8 20:21:48.300499 systemd-journald[1119]: Under memory pressure, flushing caches. Oct 8 20:21:48.297452 systemd-resolved[1471]: Flushed all caches. Oct 8 20:21:52.963526 systemd[1]: Started sshd@10-172.24.4.55:22-172.24.4.1:53578.service - OpenSSH per-connection server daemon (172.24.4.1:53578). Oct 8 20:21:54.304855 sshd[4802]: Accepted publickey for core from 172.24.4.1 port 53578 ssh2: RSA SHA256:N4tAxOYyt600zP8LzVHN9krjQqk3csZTCmZq/eMm2uA Oct 8 20:21:54.306774 sshd[4802]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:21:54.320383 systemd-logind[1572]: New session 13 of user core. Oct 8 20:21:54.325281 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 8 20:21:55.597522 systemd[1]: run-containerd-runc-k8s.io-b204bbd54d4ba5e203f717ba9c83bb2ce635f1edf19b2ffd55719e69c7b93836-runc.IHqvL6.mount: Deactivated successfully. Oct 8 20:21:55.624785 sshd[4802]: pam_unix(sshd:session): session closed for user core Oct 8 20:21:55.646253 systemd[1]: sshd@10-172.24.4.55:22-172.24.4.1:53578.service: Deactivated successfully. Oct 8 20:21:55.656063 systemd[1]: session-13.scope: Deactivated successfully. Oct 8 20:21:55.658812 systemd-logind[1572]: Session 13 logged out. Waiting for processes to exit. Oct 8 20:21:55.661360 systemd-logind[1572]: Removed session 13. Oct 8 20:21:58.281044 systemd-resolved[1471]: Under memory pressure, flushing caches. 
Oct 8 20:21:58.281064 systemd-resolved[1471]: Flushed all caches. Oct 8 20:21:58.282981 systemd-journald[1119]: Under memory pressure, flushing caches. Oct 8 20:22:00.638538 systemd[1]: Started sshd@11-172.24.4.55:22-172.24.4.1:36818.service - OpenSSH per-connection server daemon (172.24.4.1:36818). Oct 8 20:22:01.849662 sshd[4850]: Accepted publickey for core from 172.24.4.1 port 36818 ssh2: RSA SHA256:N4tAxOYyt600zP8LzVHN9krjQqk3csZTCmZq/eMm2uA Oct 8 20:22:01.852733 sshd[4850]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:22:01.867095 systemd-logind[1572]: New session 14 of user core. Oct 8 20:22:01.873623 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 8 20:22:03.083453 systemd[1]: Started sshd@12-172.24.4.55:22-172.24.4.1:36834.service - OpenSSH per-connection server daemon (172.24.4.1:36834). Oct 8 20:22:03.125073 sshd[4850]: pam_unix(sshd:session): session closed for user core Oct 8 20:22:03.131585 systemd[1]: sshd@11-172.24.4.55:22-172.24.4.1:36818.service: Deactivated successfully. Oct 8 20:22:03.140216 systemd-logind[1572]: Session 14 logged out. Waiting for processes to exit. Oct 8 20:22:03.141490 systemd[1]: session-14.scope: Deactivated successfully. Oct 8 20:22:03.147773 systemd-logind[1572]: Removed session 14. Oct 8 20:22:04.376251 sshd[4862]: Accepted publickey for core from 172.24.4.1 port 36834 ssh2: RSA SHA256:N4tAxOYyt600zP8LzVHN9krjQqk3csZTCmZq/eMm2uA Oct 8 20:22:04.379189 sshd[4862]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:22:04.391412 systemd-logind[1572]: New session 15 of user core. Oct 8 20:22:04.397787 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 8 20:22:06.029871 sshd[4862]: pam_unix(sshd:session): session closed for user core Oct 8 20:22:06.040374 systemd[1]: Started sshd@13-172.24.4.55:22-172.24.4.1:52182.service - OpenSSH per-connection server daemon (172.24.4.1:52182). 
Oct 8 20:22:06.061053 systemd[1]: sshd@12-172.24.4.55:22-172.24.4.1:36834.service: Deactivated successfully. Oct 8 20:22:06.076745 systemd[1]: session-15.scope: Deactivated successfully. Oct 8 20:22:06.080924 systemd-logind[1572]: Session 15 logged out. Waiting for processes to exit. Oct 8 20:22:06.085528 systemd-logind[1572]: Removed session 15. Oct 8 20:22:07.551785 sshd[4879]: Accepted publickey for core from 172.24.4.1 port 52182 ssh2: RSA SHA256:N4tAxOYyt600zP8LzVHN9krjQqk3csZTCmZq/eMm2uA Oct 8 20:22:07.555040 sshd[4879]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:22:07.567729 systemd-logind[1572]: New session 16 of user core. Oct 8 20:22:07.574067 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 8 20:22:08.332914 systemd-journald[1119]: Under memory pressure, flushing caches. Oct 8 20:22:08.330652 systemd-resolved[1471]: Under memory pressure, flushing caches. Oct 8 20:22:08.330685 systemd-resolved[1471]: Flushed all caches. Oct 8 20:22:08.360167 sshd[4879]: pam_unix(sshd:session): session closed for user core Oct 8 20:22:08.375383 systemd[1]: sshd@13-172.24.4.55:22-172.24.4.1:52182.service: Deactivated successfully. Oct 8 20:22:08.386296 systemd[1]: session-16.scope: Deactivated successfully. Oct 8 20:22:08.389673 systemd-logind[1572]: Session 16 logged out. Waiting for processes to exit. Oct 8 20:22:08.393778 systemd-logind[1572]: Removed session 16. Oct 8 20:22:13.372536 systemd[1]: Started sshd@14-172.24.4.55:22-172.24.4.1:52192.service - OpenSSH per-connection server daemon (172.24.4.1:52192). Oct 8 20:22:14.523034 sshd[4924]: Accepted publickey for core from 172.24.4.1 port 52192 ssh2: RSA SHA256:N4tAxOYyt600zP8LzVHN9krjQqk3csZTCmZq/eMm2uA Oct 8 20:22:14.525591 sshd[4924]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:22:14.532110 systemd-logind[1572]: New session 17 of user core. 
Oct 8 20:22:14.540538 systemd[1]: Started session-17.scope - Session 17 of User core.
Oct 8 20:22:15.437471 sshd[4924]: pam_unix(sshd:session): session closed for user core
Oct 8 20:22:15.443623 systemd[1]: sshd@14-172.24.4.55:22-172.24.4.1:52192.service: Deactivated successfully.
Oct 8 20:22:15.452776 systemd[1]: session-17.scope: Deactivated successfully.
Oct 8 20:22:15.457667 systemd-logind[1572]: Session 17 logged out. Waiting for processes to exit.
Oct 8 20:22:15.460407 systemd-logind[1572]: Removed session 17.
Oct 8 20:22:17.861006 kubelet[2889]: I1008 20:22:17.860923 2889 topology_manager.go:215] "Topology Admit Handler" podUID="e635e37c-a43a-465f-82fd-1d4c492d4544" podNamespace="calico-apiserver" podName="calico-apiserver-546495bb68-x8nmf"
Oct 8 20:22:18.022882 kubelet[2889]: I1008 20:22:18.022226 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cc2cd\" (UniqueName: \"kubernetes.io/projected/e635e37c-a43a-465f-82fd-1d4c492d4544-kube-api-access-cc2cd\") pod \"calico-apiserver-546495bb68-x8nmf\" (UID: \"e635e37c-a43a-465f-82fd-1d4c492d4544\") " pod="calico-apiserver/calico-apiserver-546495bb68-x8nmf"
Oct 8 20:22:18.024807 kubelet[2889]: I1008 20:22:18.024515 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e635e37c-a43a-465f-82fd-1d4c492d4544-calico-apiserver-certs\") pod \"calico-apiserver-546495bb68-x8nmf\" (UID: \"e635e37c-a43a-465f-82fd-1d4c492d4544\") " pod="calico-apiserver/calico-apiserver-546495bb68-x8nmf"
Oct 8 20:22:18.210262 containerd[1597]: time="2024-10-08T20:22:18.210192632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-546495bb68-x8nmf,Uid:e635e37c-a43a-465f-82fd-1d4c492d4544,Namespace:calico-apiserver,Attempt:0,}"
Oct 8 20:22:18.463081 systemd-networkd[1198]: cali2ef7780fc89: Link UP
Oct 8 20:22:18.463275 systemd-networkd[1198]: cali2ef7780fc89: Gained carrier
Oct 8 20:22:18.485148 containerd[1597]: 2024-10-08 20:22:18.338 [INFO][4960] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--1--0--6--0b75032dd1.novalocal-k8s-calico--apiserver--546495bb68--x8nmf-eth0 calico-apiserver-546495bb68- calico-apiserver e635e37c-a43a-465f-82fd-1d4c492d4544 1042 0 2024-10-08 20:22:17 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:546495bb68 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-1-0-6-0b75032dd1.novalocal calico-apiserver-546495bb68-x8nmf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2ef7780fc89 [] []}} ContainerID="09eb64996182e1dcda4f4e84d9c56f88db6c9b6567a2190e79eb9945f68f0f32" Namespace="calico-apiserver" Pod="calico-apiserver-546495bb68-x8nmf" WorkloadEndpoint="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-calico--apiserver--546495bb68--x8nmf-"
Oct 8 20:22:18.485148 containerd[1597]: 2024-10-08 20:22:18.338 [INFO][4960] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="09eb64996182e1dcda4f4e84d9c56f88db6c9b6567a2190e79eb9945f68f0f32" Namespace="calico-apiserver" Pod="calico-apiserver-546495bb68-x8nmf" WorkloadEndpoint="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-calico--apiserver--546495bb68--x8nmf-eth0"
Oct 8 20:22:18.485148 containerd[1597]: 2024-10-08 20:22:18.395 [INFO][4970] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="09eb64996182e1dcda4f4e84d9c56f88db6c9b6567a2190e79eb9945f68f0f32" HandleID="k8s-pod-network.09eb64996182e1dcda4f4e84d9c56f88db6c9b6567a2190e79eb9945f68f0f32" Workload="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-calico--apiserver--546495bb68--x8nmf-eth0"
Oct 8 20:22:18.485148 containerd[1597]: 2024-10-08 20:22:18.409 [INFO][4970] ipam_plugin.go 270: Auto assigning IP ContainerID="09eb64996182e1dcda4f4e84d9c56f88db6c9b6567a2190e79eb9945f68f0f32" HandleID="k8s-pod-network.09eb64996182e1dcda4f4e84d9c56f88db6c9b6567a2190e79eb9945f68f0f32" Workload="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-calico--apiserver--546495bb68--x8nmf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318830), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-1-0-6-0b75032dd1.novalocal", "pod":"calico-apiserver-546495bb68-x8nmf", "timestamp":"2024-10-08 20:22:18.395265884 +0000 UTC"}, Hostname:"ci-4081-1-0-6-0b75032dd1.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Oct 8 20:22:18.485148 containerd[1597]: 2024-10-08 20:22:18.409 [INFO][4970] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Oct 8 20:22:18.485148 containerd[1597]: 2024-10-08 20:22:18.409 [INFO][4970] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Oct 8 20:22:18.485148 containerd[1597]: 2024-10-08 20:22:18.409 [INFO][4970] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-1-0-6-0b75032dd1.novalocal'
Oct 8 20:22:18.485148 containerd[1597]: 2024-10-08 20:22:18.412 [INFO][4970] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.09eb64996182e1dcda4f4e84d9c56f88db6c9b6567a2190e79eb9945f68f0f32" host="ci-4081-1-0-6-0b75032dd1.novalocal"
Oct 8 20:22:18.485148 containerd[1597]: 2024-10-08 20:22:18.417 [INFO][4970] ipam.go 372: Looking up existing affinities for host host="ci-4081-1-0-6-0b75032dd1.novalocal"
Oct 8 20:22:18.485148 containerd[1597]: 2024-10-08 20:22:18.424 [INFO][4970] ipam.go 489: Trying affinity for 192.168.118.64/26 host="ci-4081-1-0-6-0b75032dd1.novalocal"
Oct 8 20:22:18.485148 containerd[1597]: 2024-10-08 20:22:18.427 [INFO][4970] ipam.go 155: Attempting to load block cidr=192.168.118.64/26 host="ci-4081-1-0-6-0b75032dd1.novalocal"
Oct 8 20:22:18.485148 containerd[1597]: 2024-10-08 20:22:18.436 [INFO][4970] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.118.64/26 host="ci-4081-1-0-6-0b75032dd1.novalocal"
Oct 8 20:22:18.485148 containerd[1597]: 2024-10-08 20:22:18.436 [INFO][4970] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.118.64/26 handle="k8s-pod-network.09eb64996182e1dcda4f4e84d9c56f88db6c9b6567a2190e79eb9945f68f0f32" host="ci-4081-1-0-6-0b75032dd1.novalocal"
Oct 8 20:22:18.485148 containerd[1597]: 2024-10-08 20:22:18.439 [INFO][4970] ipam.go 1685: Creating new handle: k8s-pod-network.09eb64996182e1dcda4f4e84d9c56f88db6c9b6567a2190e79eb9945f68f0f32
Oct 8 20:22:18.485148 containerd[1597]: 2024-10-08 20:22:18.446 [INFO][4970] ipam.go 1203: Writing block in order to claim IPs block=192.168.118.64/26 handle="k8s-pod-network.09eb64996182e1dcda4f4e84d9c56f88db6c9b6567a2190e79eb9945f68f0f32" host="ci-4081-1-0-6-0b75032dd1.novalocal"
Oct 8 20:22:18.485148 containerd[1597]: 2024-10-08 20:22:18.453 [INFO][4970] ipam.go 1216: Successfully claimed IPs: [192.168.118.69/26] block=192.168.118.64/26 handle="k8s-pod-network.09eb64996182e1dcda4f4e84d9c56f88db6c9b6567a2190e79eb9945f68f0f32" host="ci-4081-1-0-6-0b75032dd1.novalocal"
Oct 8 20:22:18.485148 containerd[1597]: 2024-10-08 20:22:18.453 [INFO][4970] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.118.69/26] handle="k8s-pod-network.09eb64996182e1dcda4f4e84d9c56f88db6c9b6567a2190e79eb9945f68f0f32" host="ci-4081-1-0-6-0b75032dd1.novalocal"
Oct 8 20:22:18.485148 containerd[1597]: 2024-10-08 20:22:18.453 [INFO][4970] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 8 20:22:18.485148 containerd[1597]: 2024-10-08 20:22:18.453 [INFO][4970] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.118.69/26] IPv6=[] ContainerID="09eb64996182e1dcda4f4e84d9c56f88db6c9b6567a2190e79eb9945f68f0f32" HandleID="k8s-pod-network.09eb64996182e1dcda4f4e84d9c56f88db6c9b6567a2190e79eb9945f68f0f32" Workload="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-calico--apiserver--546495bb68--x8nmf-eth0"
Oct 8 20:22:18.490493 containerd[1597]: 2024-10-08 20:22:18.457 [INFO][4960] k8s.go 386: Populated endpoint ContainerID="09eb64996182e1dcda4f4e84d9c56f88db6c9b6567a2190e79eb9945f68f0f32" Namespace="calico-apiserver" Pod="calico-apiserver-546495bb68-x8nmf" WorkloadEndpoint="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-calico--apiserver--546495bb68--x8nmf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--6--0b75032dd1.novalocal-k8s-calico--apiserver--546495bb68--x8nmf-eth0", GenerateName:"calico-apiserver-546495bb68-", Namespace:"calico-apiserver", SelfLink:"", UID:"e635e37c-a43a-465f-82fd-1d4c492d4544", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 22, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"546495bb68", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-6-0b75032dd1.novalocal", ContainerID:"", Pod:"calico-apiserver-546495bb68-x8nmf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.118.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2ef7780fc89", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 8 20:22:18.490493 containerd[1597]: 2024-10-08 20:22:18.458 [INFO][4960] k8s.go 387: Calico CNI using IPs: [192.168.118.69/32] ContainerID="09eb64996182e1dcda4f4e84d9c56f88db6c9b6567a2190e79eb9945f68f0f32" Namespace="calico-apiserver" Pod="calico-apiserver-546495bb68-x8nmf" WorkloadEndpoint="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-calico--apiserver--546495bb68--x8nmf-eth0"
Oct 8 20:22:18.490493 containerd[1597]: 2024-10-08 20:22:18.458 [INFO][4960] dataplane_linux.go 68: Setting the host side veth name to cali2ef7780fc89 ContainerID="09eb64996182e1dcda4f4e84d9c56f88db6c9b6567a2190e79eb9945f68f0f32" Namespace="calico-apiserver" Pod="calico-apiserver-546495bb68-x8nmf" WorkloadEndpoint="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-calico--apiserver--546495bb68--x8nmf-eth0"
Oct 8 20:22:18.490493 containerd[1597]: 2024-10-08 20:22:18.463 [INFO][4960] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="09eb64996182e1dcda4f4e84d9c56f88db6c9b6567a2190e79eb9945f68f0f32" Namespace="calico-apiserver" Pod="calico-apiserver-546495bb68-x8nmf" WorkloadEndpoint="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-calico--apiserver--546495bb68--x8nmf-eth0"
Oct 8 20:22:18.490493 containerd[1597]: 2024-10-08 20:22:18.466 [INFO][4960] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="09eb64996182e1dcda4f4e84d9c56f88db6c9b6567a2190e79eb9945f68f0f32" Namespace="calico-apiserver" Pod="calico-apiserver-546495bb68-x8nmf" WorkloadEndpoint="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-calico--apiserver--546495bb68--x8nmf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--6--0b75032dd1.novalocal-k8s-calico--apiserver--546495bb68--x8nmf-eth0", GenerateName:"calico-apiserver-546495bb68-", Namespace:"calico-apiserver", SelfLink:"", UID:"e635e37c-a43a-465f-82fd-1d4c492d4544", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 22, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"546495bb68", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-6-0b75032dd1.novalocal", ContainerID:"09eb64996182e1dcda4f4e84d9c56f88db6c9b6567a2190e79eb9945f68f0f32", Pod:"calico-apiserver-546495bb68-x8nmf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.118.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2ef7780fc89", MAC:"8e:de:b3:2c:66:4e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 8 20:22:18.490493 containerd[1597]: 2024-10-08 20:22:18.476 [INFO][4960] k8s.go 500: Wrote updated endpoint to datastore ContainerID="09eb64996182e1dcda4f4e84d9c56f88db6c9b6567a2190e79eb9945f68f0f32" Namespace="calico-apiserver" Pod="calico-apiserver-546495bb68-x8nmf" WorkloadEndpoint="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-calico--apiserver--546495bb68--x8nmf-eth0"
Oct 8 20:22:18.549267 containerd[1597]: time="2024-10-08T20:22:18.548240564Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 8 20:22:18.549267 containerd[1597]: time="2024-10-08T20:22:18.548322728Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 8 20:22:18.549267 containerd[1597]: time="2024-10-08T20:22:18.548357403Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 20:22:18.549267 containerd[1597]: time="2024-10-08T20:22:18.548466689Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 20:22:18.628335 containerd[1597]: time="2024-10-08T20:22:18.628299664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-546495bb68-x8nmf,Uid:e635e37c-a43a-465f-82fd-1d4c492d4544,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"09eb64996182e1dcda4f4e84d9c56f88db6c9b6567a2190e79eb9945f68f0f32\""
Oct 8 20:22:18.632010 containerd[1597]: time="2024-10-08T20:22:18.631677143Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\""
Oct 8 20:22:19.978510 systemd-networkd[1198]: cali2ef7780fc89: Gained IPv6LL
Oct 8 20:22:20.450074 systemd[1]: Started sshd@15-172.24.4.55:22-172.24.4.1:51020.service - OpenSSH per-connection server daemon (172.24.4.1:51020).
Oct 8 20:22:21.711504 sshd[5034]: Accepted publickey for core from 172.24.4.1 port 51020 ssh2: RSA SHA256:N4tAxOYyt600zP8LzVHN9krjQqk3csZTCmZq/eMm2uA
Oct 8 20:22:21.718209 sshd[5034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 20:22:21.723377 systemd-logind[1572]: New session 18 of user core.
Oct 8 20:22:21.731221 systemd[1]: Started session-18.scope - Session 18 of User core.
Oct 8 20:22:22.284006 systemd-journald[1119]: Under memory pressure, flushing caches.
Oct 8 20:22:22.281395 systemd-resolved[1471]: Under memory pressure, flushing caches.
Oct 8 20:22:22.281436 systemd-resolved[1471]: Flushed all caches.
Oct 8 20:22:22.678790 containerd[1597]: time="2024-10-08T20:22:22.678742775Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:22:22.684753 containerd[1597]: time="2024-10-08T20:22:22.684707497Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=40419849"
Oct 8 20:22:22.688521 containerd[1597]: time="2024-10-08T20:22:22.688468487Z" level=info msg="ImageCreate event name:\"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:22:22.699781 containerd[1597]: time="2024-10-08T20:22:22.699697653Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:22:22.701922 containerd[1597]: time="2024-10-08T20:22:22.700923638Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"41912266\" in 4.06920719s"
Oct 8 20:22:22.701922 containerd[1597]: time="2024-10-08T20:22:22.700990844Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\""
Oct 8 20:22:22.706146 containerd[1597]: time="2024-10-08T20:22:22.706090269Z" level=info msg="CreateContainer within sandbox \"09eb64996182e1dcda4f4e84d9c56f88db6c9b6567a2190e79eb9945f68f0f32\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Oct 8 20:22:23.146875 containerd[1597]: time="2024-10-08T20:22:23.146468278Z" level=info msg="CreateContainer within sandbox \"09eb64996182e1dcda4f4e84d9c56f88db6c9b6567a2190e79eb9945f68f0f32\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9528c019a8320ef1a32a6c35b0d82e4582b32e7e91ea43a9cbbf83e12e1c1e69\""
Oct 8 20:22:23.153763 containerd[1597]: time="2024-10-08T20:22:23.152603429Z" level=info msg="StartContainer for \"9528c019a8320ef1a32a6c35b0d82e4582b32e7e91ea43a9cbbf83e12e1c1e69\""
Oct 8 20:22:23.297878 containerd[1597]: time="2024-10-08T20:22:23.297828304Z" level=info msg="StartContainer for \"9528c019a8320ef1a32a6c35b0d82e4582b32e7e91ea43a9cbbf83e12e1c1e69\" returns successfully"
Oct 8 20:22:23.605830 sshd[5034]: pam_unix(sshd:session): session closed for user core
Oct 8 20:22:23.615342 systemd[1]: sshd@15-172.24.4.55:22-172.24.4.1:51020.service: Deactivated successfully.
Oct 8 20:22:23.623620 systemd[1]: session-18.scope: Deactivated successfully.
Oct 8 20:22:23.626413 systemd-logind[1572]: Session 18 logged out. Waiting for processes to exit.
Oct 8 20:22:23.631356 systemd-logind[1572]: Removed session 18.
Oct 8 20:22:24.330540 systemd-resolved[1471]: Under memory pressure, flushing caches.
Oct 8 20:22:24.331226 systemd-journald[1119]: Under memory pressure, flushing caches.
Oct 8 20:22:24.330548 systemd-resolved[1471]: Flushed all caches.
Oct 8 20:22:25.054156 kubelet[2889]: I1008 20:22:25.052228 2889 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-546495bb68-x8nmf" podStartSLOduration=3.979689309 podStartE2EDuration="8.052095173s" podCreationTimestamp="2024-10-08 20:22:17 +0000 UTC" firstStartedPulling="2024-10-08 20:22:18.629986435 +0000 UTC m=+106.573936417" lastFinishedPulling="2024-10-08 20:22:22.702392299 +0000 UTC m=+110.646342281" observedRunningTime="2024-10-08 20:22:24.031659692 +0000 UTC m=+111.975609684" watchObservedRunningTime="2024-10-08 20:22:25.052095173 +0000 UTC m=+112.996045165"
Oct 8 20:22:28.617888 systemd[1]: Started sshd@16-172.24.4.55:22-172.24.4.1:60818.service - OpenSSH per-connection server daemon (172.24.4.1:60818).
Oct 8 20:22:29.686276 sshd[5125]: Accepted publickey for core from 172.24.4.1 port 60818 ssh2: RSA SHA256:N4tAxOYyt600zP8LzVHN9krjQqk3csZTCmZq/eMm2uA
Oct 8 20:22:29.692881 sshd[5125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 20:22:29.713438 systemd-logind[1572]: New session 19 of user core.
Oct 8 20:22:29.718277 systemd[1]: Started session-19.scope - Session 19 of User core.
Oct 8 20:22:30.758133 sshd[5125]: pam_unix(sshd:session): session closed for user core
Oct 8 20:22:30.764751 systemd[1]: sshd@16-172.24.4.55:22-172.24.4.1:60818.service: Deactivated successfully.
Oct 8 20:22:30.772758 systemd[1]: session-19.scope: Deactivated successfully.
Oct 8 20:22:30.775610 systemd-logind[1572]: Session 19 logged out. Waiting for processes to exit.
Oct 8 20:22:30.778206 systemd-logind[1572]: Removed session 19.
Oct 8 20:22:32.466424 containerd[1597]: time="2024-10-08T20:22:32.466289557Z" level=info msg="StopPodSandbox for \"4aad6bd594c4808843f7e6b5ae7b1ea6a1e41b402c15ea308fe89f81bde7df2e\""
Oct 8 20:22:32.695543 containerd[1597]: 2024-10-08 20:22:32.641 [WARNING][5157] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4aad6bd594c4808843f7e6b5ae7b1ea6a1e41b402c15ea308fe89f81bde7df2e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--6--0b75032dd1.novalocal-k8s-coredns--76f75df574--zsvnc-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"69d7ca85-a209-4e6f-9a16-eb38c4d84f95", ResourceVersion:"757", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 20, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-6-0b75032dd1.novalocal", ContainerID:"a6ef0fd7d0b0814ccd19e32a5f1fe340553d59a63122ab41ea197feaddf73dc1", Pod:"coredns-76f75df574-zsvnc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.118.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0a0e6af8ddf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 8 20:22:32.695543 containerd[1597]: 2024-10-08 20:22:32.641 [INFO][5157] k8s.go 608: Cleaning up netns ContainerID="4aad6bd594c4808843f7e6b5ae7b1ea6a1e41b402c15ea308fe89f81bde7df2e"
Oct 8 20:22:32.695543 containerd[1597]: 2024-10-08 20:22:32.641 [INFO][5157] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="4aad6bd594c4808843f7e6b5ae7b1ea6a1e41b402c15ea308fe89f81bde7df2e" iface="eth0" netns=""
Oct 8 20:22:32.695543 containerd[1597]: 2024-10-08 20:22:32.642 [INFO][5157] k8s.go 615: Releasing IP address(es) ContainerID="4aad6bd594c4808843f7e6b5ae7b1ea6a1e41b402c15ea308fe89f81bde7df2e"
Oct 8 20:22:32.695543 containerd[1597]: 2024-10-08 20:22:32.642 [INFO][5157] utils.go 188: Calico CNI releasing IP address ContainerID="4aad6bd594c4808843f7e6b5ae7b1ea6a1e41b402c15ea308fe89f81bde7df2e"
Oct 8 20:22:32.695543 containerd[1597]: 2024-10-08 20:22:32.669 [INFO][5163] ipam_plugin.go 417: Releasing address using handleID ContainerID="4aad6bd594c4808843f7e6b5ae7b1ea6a1e41b402c15ea308fe89f81bde7df2e" HandleID="k8s-pod-network.4aad6bd594c4808843f7e6b5ae7b1ea6a1e41b402c15ea308fe89f81bde7df2e" Workload="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-coredns--76f75df574--zsvnc-eth0"
Oct 8 20:22:32.695543 containerd[1597]: 2024-10-08 20:22:32.670 [INFO][5163] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Oct 8 20:22:32.695543 containerd[1597]: 2024-10-08 20:22:32.670 [INFO][5163] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Oct 8 20:22:32.695543 containerd[1597]: 2024-10-08 20:22:32.687 [WARNING][5163] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="4aad6bd594c4808843f7e6b5ae7b1ea6a1e41b402c15ea308fe89f81bde7df2e" HandleID="k8s-pod-network.4aad6bd594c4808843f7e6b5ae7b1ea6a1e41b402c15ea308fe89f81bde7df2e" Workload="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-coredns--76f75df574--zsvnc-eth0"
Oct 8 20:22:32.695543 containerd[1597]: 2024-10-08 20:22:32.688 [INFO][5163] ipam_plugin.go 445: Releasing address using workloadID ContainerID="4aad6bd594c4808843f7e6b5ae7b1ea6a1e41b402c15ea308fe89f81bde7df2e" HandleID="k8s-pod-network.4aad6bd594c4808843f7e6b5ae7b1ea6a1e41b402c15ea308fe89f81bde7df2e" Workload="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-coredns--76f75df574--zsvnc-eth0"
Oct 8 20:22:32.695543 containerd[1597]: 2024-10-08 20:22:32.691 [INFO][5163] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 8 20:22:32.695543 containerd[1597]: 2024-10-08 20:22:32.694 [INFO][5157] k8s.go 621: Teardown processing complete. ContainerID="4aad6bd594c4808843f7e6b5ae7b1ea6a1e41b402c15ea308fe89f81bde7df2e"
Oct 8 20:22:32.696211 containerd[1597]: time="2024-10-08T20:22:32.695630501Z" level=info msg="TearDown network for sandbox \"4aad6bd594c4808843f7e6b5ae7b1ea6a1e41b402c15ea308fe89f81bde7df2e\" successfully"
Oct 8 20:22:32.696211 containerd[1597]: time="2024-10-08T20:22:32.695675956Z" level=info msg="StopPodSandbox for \"4aad6bd594c4808843f7e6b5ae7b1ea6a1e41b402c15ea308fe89f81bde7df2e\" returns successfully"
Oct 8 20:22:32.731213 containerd[1597]: time="2024-10-08T20:22:32.730585619Z" level=info msg="RemovePodSandbox for \"4aad6bd594c4808843f7e6b5ae7b1ea6a1e41b402c15ea308fe89f81bde7df2e\""
Oct 8 20:22:32.735817 containerd[1597]: time="2024-10-08T20:22:32.735547634Z" level=info msg="Forcibly stopping sandbox \"4aad6bd594c4808843f7e6b5ae7b1ea6a1e41b402c15ea308fe89f81bde7df2e\""
Oct 8 20:22:32.827274 containerd[1597]: 2024-10-08 20:22:32.790 [WARNING][5181] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4aad6bd594c4808843f7e6b5ae7b1ea6a1e41b402c15ea308fe89f81bde7df2e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--6--0b75032dd1.novalocal-k8s-coredns--76f75df574--zsvnc-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"69d7ca85-a209-4e6f-9a16-eb38c4d84f95", ResourceVersion:"757", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 20, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-6-0b75032dd1.novalocal", ContainerID:"a6ef0fd7d0b0814ccd19e32a5f1fe340553d59a63122ab41ea197feaddf73dc1", Pod:"coredns-76f75df574-zsvnc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.118.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0a0e6af8ddf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 8 20:22:32.827274 containerd[1597]: 2024-10-08 20:22:32.791 [INFO][5181] k8s.go 608: Cleaning up netns ContainerID="4aad6bd594c4808843f7e6b5ae7b1ea6a1e41b402c15ea308fe89f81bde7df2e"
Oct 8 20:22:32.827274 containerd[1597]: 2024-10-08 20:22:32.791 [INFO][5181] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="4aad6bd594c4808843f7e6b5ae7b1ea6a1e41b402c15ea308fe89f81bde7df2e" iface="eth0" netns=""
Oct 8 20:22:32.827274 containerd[1597]: 2024-10-08 20:22:32.791 [INFO][5181] k8s.go 615: Releasing IP address(es) ContainerID="4aad6bd594c4808843f7e6b5ae7b1ea6a1e41b402c15ea308fe89f81bde7df2e"
Oct 8 20:22:32.827274 containerd[1597]: 2024-10-08 20:22:32.791 [INFO][5181] utils.go 188: Calico CNI releasing IP address ContainerID="4aad6bd594c4808843f7e6b5ae7b1ea6a1e41b402c15ea308fe89f81bde7df2e"
Oct 8 20:22:32.827274 containerd[1597]: 2024-10-08 20:22:32.814 [INFO][5187] ipam_plugin.go 417: Releasing address using handleID ContainerID="4aad6bd594c4808843f7e6b5ae7b1ea6a1e41b402c15ea308fe89f81bde7df2e" HandleID="k8s-pod-network.4aad6bd594c4808843f7e6b5ae7b1ea6a1e41b402c15ea308fe89f81bde7df2e" Workload="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-coredns--76f75df574--zsvnc-eth0"
Oct 8 20:22:32.827274 containerd[1597]: 2024-10-08 20:22:32.815 [INFO][5187] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Oct 8 20:22:32.827274 containerd[1597]: 2024-10-08 20:22:32.815 [INFO][5187] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Oct 8 20:22:32.827274 containerd[1597]: 2024-10-08 20:22:32.822 [WARNING][5187] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="4aad6bd594c4808843f7e6b5ae7b1ea6a1e41b402c15ea308fe89f81bde7df2e" HandleID="k8s-pod-network.4aad6bd594c4808843f7e6b5ae7b1ea6a1e41b402c15ea308fe89f81bde7df2e" Workload="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-coredns--76f75df574--zsvnc-eth0"
Oct 8 20:22:32.827274 containerd[1597]: 2024-10-08 20:22:32.822 [INFO][5187] ipam_plugin.go 445: Releasing address using workloadID ContainerID="4aad6bd594c4808843f7e6b5ae7b1ea6a1e41b402c15ea308fe89f81bde7df2e" HandleID="k8s-pod-network.4aad6bd594c4808843f7e6b5ae7b1ea6a1e41b402c15ea308fe89f81bde7df2e" Workload="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-coredns--76f75df574--zsvnc-eth0"
Oct 8 20:22:32.827274 containerd[1597]: 2024-10-08 20:22:32.824 [INFO][5187] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 8 20:22:32.827274 containerd[1597]: 2024-10-08 20:22:32.825 [INFO][5181] k8s.go 621: Teardown processing complete. ContainerID="4aad6bd594c4808843f7e6b5ae7b1ea6a1e41b402c15ea308fe89f81bde7df2e"
Oct 8 20:22:32.828539 containerd[1597]: time="2024-10-08T20:22:32.827623155Z" level=info msg="TearDown network for sandbox \"4aad6bd594c4808843f7e6b5ae7b1ea6a1e41b402c15ea308fe89f81bde7df2e\" successfully"
Oct 8 20:22:32.840612 containerd[1597]: time="2024-10-08T20:22:32.840372304Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4aad6bd594c4808843f7e6b5ae7b1ea6a1e41b402c15ea308fe89f81bde7df2e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Oct 8 20:22:32.840612 containerd[1597]: time="2024-10-08T20:22:32.840444439Z" level=info msg="RemovePodSandbox \"4aad6bd594c4808843f7e6b5ae7b1ea6a1e41b402c15ea308fe89f81bde7df2e\" returns successfully"
Oct 8 20:22:32.841934 containerd[1597]: time="2024-10-08T20:22:32.841536902Z" level=info msg="StopPodSandbox for \"b0b3e3d6b839ae2ba1a229ae13887b655d0a16b532ca8f3d8895320833a6e987\""
Oct 8 20:22:32.948466 containerd[1597]: 2024-10-08 20:22:32.892 [WARNING][5205] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b0b3e3d6b839ae2ba1a229ae13887b655d0a16b532ca8f3d8895320833a6e987" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--6--0b75032dd1.novalocal-k8s-csi--node--driver--cbc7k-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a5649738-e837-4158-bf1c-576a5e896847", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 20, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-6-0b75032dd1.novalocal", ContainerID:"be276f276decd07f604d09bebaa98629b68245166544867f68c2050714b3289c", Pod:"csi-node-driver-cbc7k", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.118.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali5c98545cb64", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 8 20:22:32.948466 containerd[1597]: 2024-10-08 20:22:32.892 [INFO][5205] k8s.go 608: Cleaning up netns ContainerID="b0b3e3d6b839ae2ba1a229ae13887b655d0a16b532ca8f3d8895320833a6e987"
Oct 8 20:22:32.948466 containerd[1597]: 2024-10-08 20:22:32.892 [INFO][5205] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="b0b3e3d6b839ae2ba1a229ae13887b655d0a16b532ca8f3d8895320833a6e987" iface="eth0" netns=""
Oct 8 20:22:32.948466 containerd[1597]: 2024-10-08 20:22:32.892 [INFO][5205] k8s.go 615: Releasing IP address(es) ContainerID="b0b3e3d6b839ae2ba1a229ae13887b655d0a16b532ca8f3d8895320833a6e987"
Oct 8 20:22:32.948466 containerd[1597]: 2024-10-08 20:22:32.892 [INFO][5205] utils.go 188: Calico CNI releasing IP address ContainerID="b0b3e3d6b839ae2ba1a229ae13887b655d0a16b532ca8f3d8895320833a6e987"
Oct 8 20:22:32.948466 containerd[1597]: 2024-10-08 20:22:32.935 [INFO][5211] ipam_plugin.go 417: Releasing address using handleID ContainerID="b0b3e3d6b839ae2ba1a229ae13887b655d0a16b532ca8f3d8895320833a6e987" HandleID="k8s-pod-network.b0b3e3d6b839ae2ba1a229ae13887b655d0a16b532ca8f3d8895320833a6e987" Workload="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-csi--node--driver--cbc7k-eth0"
Oct 8 20:22:32.948466 containerd[1597]: 2024-10-08 20:22:32.935 [INFO][5211] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Oct 8 20:22:32.948466 containerd[1597]: 2024-10-08 20:22:32.935 [INFO][5211] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Oct 8 20:22:32.948466 containerd[1597]: 2024-10-08 20:22:32.943 [WARNING][5211] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="b0b3e3d6b839ae2ba1a229ae13887b655d0a16b532ca8f3d8895320833a6e987" HandleID="k8s-pod-network.b0b3e3d6b839ae2ba1a229ae13887b655d0a16b532ca8f3d8895320833a6e987" Workload="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-csi--node--driver--cbc7k-eth0"
Oct 8 20:22:32.948466 containerd[1597]: 2024-10-08 20:22:32.944 [INFO][5211] ipam_plugin.go 445: Releasing address using workloadID ContainerID="b0b3e3d6b839ae2ba1a229ae13887b655d0a16b532ca8f3d8895320833a6e987" HandleID="k8s-pod-network.b0b3e3d6b839ae2ba1a229ae13887b655d0a16b532ca8f3d8895320833a6e987" Workload="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-csi--node--driver--cbc7k-eth0"
Oct 8 20:22:32.948466 containerd[1597]: 2024-10-08 20:22:32.945 [INFO][5211] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 8 20:22:32.948466 containerd[1597]: 2024-10-08 20:22:32.947 [INFO][5205] k8s.go 621: Teardown processing complete. ContainerID="b0b3e3d6b839ae2ba1a229ae13887b655d0a16b532ca8f3d8895320833a6e987"
Oct 8 20:22:32.948466 containerd[1597]: time="2024-10-08T20:22:32.948457199Z" level=info msg="TearDown network for sandbox \"b0b3e3d6b839ae2ba1a229ae13887b655d0a16b532ca8f3d8895320833a6e987\" successfully"
Oct 8 20:22:32.950086 containerd[1597]: time="2024-10-08T20:22:32.948483459Z" level=info msg="StopPodSandbox for \"b0b3e3d6b839ae2ba1a229ae13887b655d0a16b532ca8f3d8895320833a6e987\" returns successfully"
Oct 8 20:22:32.950086 containerd[1597]: time="2024-10-08T20:22:32.949199074Z" level=info msg="RemovePodSandbox for \"b0b3e3d6b839ae2ba1a229ae13887b655d0a16b532ca8f3d8895320833a6e987\""
Oct 8 20:22:32.950086 containerd[1597]: time="2024-10-08T20:22:32.949235943Z" level=info msg="Forcibly stopping sandbox \"b0b3e3d6b839ae2ba1a229ae13887b655d0a16b532ca8f3d8895320833a6e987\""
Oct 8 20:22:33.025013 containerd[1597]: 2024-10-08 20:22:32.988 [WARNING][5229] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP.
ContainerID="b0b3e3d6b839ae2ba1a229ae13887b655d0a16b532ca8f3d8895320833a6e987" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--6--0b75032dd1.novalocal-k8s-csi--node--driver--cbc7k-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a5649738-e837-4158-bf1c-576a5e896847", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 20, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-6-0b75032dd1.novalocal", ContainerID:"be276f276decd07f604d09bebaa98629b68245166544867f68c2050714b3289c", Pod:"csi-node-driver-cbc7k", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.118.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali5c98545cb64", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:22:33.025013 containerd[1597]: 2024-10-08 20:22:32.988 [INFO][5229] k8s.go 608: Cleaning up netns ContainerID="b0b3e3d6b839ae2ba1a229ae13887b655d0a16b532ca8f3d8895320833a6e987" Oct 8 20:22:33.025013 containerd[1597]: 2024-10-08 20:22:32.989 [INFO][5229] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b0b3e3d6b839ae2ba1a229ae13887b655d0a16b532ca8f3d8895320833a6e987" iface="eth0" netns="" Oct 8 20:22:33.025013 containerd[1597]: 2024-10-08 20:22:32.989 [INFO][5229] k8s.go 615: Releasing IP address(es) ContainerID="b0b3e3d6b839ae2ba1a229ae13887b655d0a16b532ca8f3d8895320833a6e987" Oct 8 20:22:33.025013 containerd[1597]: 2024-10-08 20:22:32.989 [INFO][5229] utils.go 188: Calico CNI releasing IP address ContainerID="b0b3e3d6b839ae2ba1a229ae13887b655d0a16b532ca8f3d8895320833a6e987" Oct 8 20:22:33.025013 containerd[1597]: 2024-10-08 20:22:33.012 [INFO][5235] ipam_plugin.go 417: Releasing address using handleID ContainerID="b0b3e3d6b839ae2ba1a229ae13887b655d0a16b532ca8f3d8895320833a6e987" HandleID="k8s-pod-network.b0b3e3d6b839ae2ba1a229ae13887b655d0a16b532ca8f3d8895320833a6e987" Workload="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-csi--node--driver--cbc7k-eth0" Oct 8 20:22:33.025013 containerd[1597]: 2024-10-08 20:22:33.012 [INFO][5235] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:22:33.025013 containerd[1597]: 2024-10-08 20:22:33.013 [INFO][5235] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:22:33.025013 containerd[1597]: 2024-10-08 20:22:33.020 [WARNING][5235] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b0b3e3d6b839ae2ba1a229ae13887b655d0a16b532ca8f3d8895320833a6e987" HandleID="k8s-pod-network.b0b3e3d6b839ae2ba1a229ae13887b655d0a16b532ca8f3d8895320833a6e987" Workload="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-csi--node--driver--cbc7k-eth0" Oct 8 20:22:33.025013 containerd[1597]: 2024-10-08 20:22:33.020 [INFO][5235] ipam_plugin.go 445: Releasing address using workloadID ContainerID="b0b3e3d6b839ae2ba1a229ae13887b655d0a16b532ca8f3d8895320833a6e987" HandleID="k8s-pod-network.b0b3e3d6b839ae2ba1a229ae13887b655d0a16b532ca8f3d8895320833a6e987" Workload="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-csi--node--driver--cbc7k-eth0" Oct 8 20:22:33.025013 containerd[1597]: 2024-10-08 20:22:33.021 [INFO][5235] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:22:33.025013 containerd[1597]: 2024-10-08 20:22:33.022 [INFO][5229] k8s.go 621: Teardown processing complete. ContainerID="b0b3e3d6b839ae2ba1a229ae13887b655d0a16b532ca8f3d8895320833a6e987" Oct 8 20:22:33.026451 containerd[1597]: time="2024-10-08T20:22:33.025847624Z" level=info msg="TearDown network for sandbox \"b0b3e3d6b839ae2ba1a229ae13887b655d0a16b532ca8f3d8895320833a6e987\" successfully" Oct 8 20:22:33.030499 containerd[1597]: time="2024-10-08T20:22:33.030467044Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b0b3e3d6b839ae2ba1a229ae13887b655d0a16b532ca8f3d8895320833a6e987\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 8 20:22:33.030568 containerd[1597]: time="2024-10-08T20:22:33.030533128Z" level=info msg="RemovePodSandbox \"b0b3e3d6b839ae2ba1a229ae13887b655d0a16b532ca8f3d8895320833a6e987\" returns successfully" Oct 8 20:22:33.030977 containerd[1597]: time="2024-10-08T20:22:33.030932629Z" level=info msg="StopPodSandbox for \"5c69c1c44bd69df7f16fa4d63e47283fcd23396e4f2de0f8bbc77d98ae66ab83\"" Oct 8 20:22:33.104761 containerd[1597]: 2024-10-08 20:22:33.069 [WARNING][5253] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5c69c1c44bd69df7f16fa4d63e47283fcd23396e4f2de0f8bbc77d98ae66ab83" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--6--0b75032dd1.novalocal-k8s-coredns--76f75df574--wdqtm-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"2e3f14d6-abc0-4ffa-afc1-278f613cc677", ResourceVersion:"774", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 20, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-6-0b75032dd1.novalocal", ContainerID:"ac86c015d5d42487893bdf23f89541d6eb32494ca0e91befccc9b59f27e0f7aa", Pod:"coredns-76f75df574-wdqtm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.118.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali56842a4e00e", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:22:33.104761 containerd[1597]: 2024-10-08 20:22:33.069 [INFO][5253] k8s.go 608: Cleaning up netns ContainerID="5c69c1c44bd69df7f16fa4d63e47283fcd23396e4f2de0f8bbc77d98ae66ab83" Oct 8 20:22:33.104761 containerd[1597]: 2024-10-08 20:22:33.070 [INFO][5253] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="5c69c1c44bd69df7f16fa4d63e47283fcd23396e4f2de0f8bbc77d98ae66ab83" iface="eth0" netns="" Oct 8 20:22:33.104761 containerd[1597]: 2024-10-08 20:22:33.070 [INFO][5253] k8s.go 615: Releasing IP address(es) ContainerID="5c69c1c44bd69df7f16fa4d63e47283fcd23396e4f2de0f8bbc77d98ae66ab83" Oct 8 20:22:33.104761 containerd[1597]: 2024-10-08 20:22:33.070 [INFO][5253] utils.go 188: Calico CNI releasing IP address ContainerID="5c69c1c44bd69df7f16fa4d63e47283fcd23396e4f2de0f8bbc77d98ae66ab83" Oct 8 20:22:33.104761 containerd[1597]: 2024-10-08 20:22:33.092 [INFO][5259] ipam_plugin.go 417: Releasing address using handleID ContainerID="5c69c1c44bd69df7f16fa4d63e47283fcd23396e4f2de0f8bbc77d98ae66ab83" HandleID="k8s-pod-network.5c69c1c44bd69df7f16fa4d63e47283fcd23396e4f2de0f8bbc77d98ae66ab83" Workload="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-coredns--76f75df574--wdqtm-eth0" Oct 8 20:22:33.104761 containerd[1597]: 2024-10-08 20:22:33.092 [INFO][5259] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:22:33.104761 containerd[1597]: 2024-10-08 20:22:33.092 [INFO][5259] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 20:22:33.104761 containerd[1597]: 2024-10-08 20:22:33.099 [WARNING][5259] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="5c69c1c44bd69df7f16fa4d63e47283fcd23396e4f2de0f8bbc77d98ae66ab83" HandleID="k8s-pod-network.5c69c1c44bd69df7f16fa4d63e47283fcd23396e4f2de0f8bbc77d98ae66ab83" Workload="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-coredns--76f75df574--wdqtm-eth0" Oct 8 20:22:33.104761 containerd[1597]: 2024-10-08 20:22:33.099 [INFO][5259] ipam_plugin.go 445: Releasing address using workloadID ContainerID="5c69c1c44bd69df7f16fa4d63e47283fcd23396e4f2de0f8bbc77d98ae66ab83" HandleID="k8s-pod-network.5c69c1c44bd69df7f16fa4d63e47283fcd23396e4f2de0f8bbc77d98ae66ab83" Workload="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-coredns--76f75df574--wdqtm-eth0" Oct 8 20:22:33.104761 containerd[1597]: 2024-10-08 20:22:33.101 [INFO][5259] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:22:33.104761 containerd[1597]: 2024-10-08 20:22:33.103 [INFO][5253] k8s.go 621: Teardown processing complete. 
ContainerID="5c69c1c44bd69df7f16fa4d63e47283fcd23396e4f2de0f8bbc77d98ae66ab83" Oct 8 20:22:33.107124 containerd[1597]: time="2024-10-08T20:22:33.104984127Z" level=info msg="TearDown network for sandbox \"5c69c1c44bd69df7f16fa4d63e47283fcd23396e4f2de0f8bbc77d98ae66ab83\" successfully" Oct 8 20:22:33.107124 containerd[1597]: time="2024-10-08T20:22:33.105013502Z" level=info msg="StopPodSandbox for \"5c69c1c44bd69df7f16fa4d63e47283fcd23396e4f2de0f8bbc77d98ae66ab83\" returns successfully" Oct 8 20:22:33.107124 containerd[1597]: time="2024-10-08T20:22:33.106270764Z" level=info msg="RemovePodSandbox for \"5c69c1c44bd69df7f16fa4d63e47283fcd23396e4f2de0f8bbc77d98ae66ab83\"" Oct 8 20:22:33.107124 containerd[1597]: time="2024-10-08T20:22:33.106296662Z" level=info msg="Forcibly stopping sandbox \"5c69c1c44bd69df7f16fa4d63e47283fcd23396e4f2de0f8bbc77d98ae66ab83\"" Oct 8 20:22:33.180214 containerd[1597]: 2024-10-08 20:22:33.143 [WARNING][5277] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5c69c1c44bd69df7f16fa4d63e47283fcd23396e4f2de0f8bbc77d98ae66ab83" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--6--0b75032dd1.novalocal-k8s-coredns--76f75df574--wdqtm-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"2e3f14d6-abc0-4ffa-afc1-278f613cc677", ResourceVersion:"774", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 20, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-6-0b75032dd1.novalocal", ContainerID:"ac86c015d5d42487893bdf23f89541d6eb32494ca0e91befccc9b59f27e0f7aa", Pod:"coredns-76f75df574-wdqtm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.118.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali56842a4e00e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:22:33.180214 containerd[1597]: 2024-10-08 20:22:33.143 
[INFO][5277] k8s.go 608: Cleaning up netns ContainerID="5c69c1c44bd69df7f16fa4d63e47283fcd23396e4f2de0f8bbc77d98ae66ab83" Oct 8 20:22:33.180214 containerd[1597]: 2024-10-08 20:22:33.143 [INFO][5277] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="5c69c1c44bd69df7f16fa4d63e47283fcd23396e4f2de0f8bbc77d98ae66ab83" iface="eth0" netns="" Oct 8 20:22:33.180214 containerd[1597]: 2024-10-08 20:22:33.143 [INFO][5277] k8s.go 615: Releasing IP address(es) ContainerID="5c69c1c44bd69df7f16fa4d63e47283fcd23396e4f2de0f8bbc77d98ae66ab83" Oct 8 20:22:33.180214 containerd[1597]: 2024-10-08 20:22:33.143 [INFO][5277] utils.go 188: Calico CNI releasing IP address ContainerID="5c69c1c44bd69df7f16fa4d63e47283fcd23396e4f2de0f8bbc77d98ae66ab83" Oct 8 20:22:33.180214 containerd[1597]: 2024-10-08 20:22:33.167 [INFO][5283] ipam_plugin.go 417: Releasing address using handleID ContainerID="5c69c1c44bd69df7f16fa4d63e47283fcd23396e4f2de0f8bbc77d98ae66ab83" HandleID="k8s-pod-network.5c69c1c44bd69df7f16fa4d63e47283fcd23396e4f2de0f8bbc77d98ae66ab83" Workload="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-coredns--76f75df574--wdqtm-eth0" Oct 8 20:22:33.180214 containerd[1597]: 2024-10-08 20:22:33.168 [INFO][5283] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:22:33.180214 containerd[1597]: 2024-10-08 20:22:33.168 [INFO][5283] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:22:33.180214 containerd[1597]: 2024-10-08 20:22:33.175 [WARNING][5283] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5c69c1c44bd69df7f16fa4d63e47283fcd23396e4f2de0f8bbc77d98ae66ab83" HandleID="k8s-pod-network.5c69c1c44bd69df7f16fa4d63e47283fcd23396e4f2de0f8bbc77d98ae66ab83" Workload="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-coredns--76f75df574--wdqtm-eth0" Oct 8 20:22:33.180214 containerd[1597]: 2024-10-08 20:22:33.175 [INFO][5283] ipam_plugin.go 445: Releasing address using workloadID ContainerID="5c69c1c44bd69df7f16fa4d63e47283fcd23396e4f2de0f8bbc77d98ae66ab83" HandleID="k8s-pod-network.5c69c1c44bd69df7f16fa4d63e47283fcd23396e4f2de0f8bbc77d98ae66ab83" Workload="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-coredns--76f75df574--wdqtm-eth0" Oct 8 20:22:33.180214 containerd[1597]: 2024-10-08 20:22:33.177 [INFO][5283] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:22:33.180214 containerd[1597]: 2024-10-08 20:22:33.178 [INFO][5277] k8s.go 621: Teardown processing complete. ContainerID="5c69c1c44bd69df7f16fa4d63e47283fcd23396e4f2de0f8bbc77d98ae66ab83" Oct 8 20:22:33.180214 containerd[1597]: time="2024-10-08T20:22:33.180009313Z" level=info msg="TearDown network for sandbox \"5c69c1c44bd69df7f16fa4d63e47283fcd23396e4f2de0f8bbc77d98ae66ab83\" successfully" Oct 8 20:22:33.185034 containerd[1597]: time="2024-10-08T20:22:33.184848637Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5c69c1c44bd69df7f16fa4d63e47283fcd23396e4f2de0f8bbc77d98ae66ab83\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 8 20:22:33.185034 containerd[1597]: time="2024-10-08T20:22:33.184914721Z" level=info msg="RemovePodSandbox \"5c69c1c44bd69df7f16fa4d63e47283fcd23396e4f2de0f8bbc77d98ae66ab83\" returns successfully" Oct 8 20:22:33.185895 containerd[1597]: time="2024-10-08T20:22:33.185561517Z" level=info msg="StopPodSandbox for \"34fdef20a54d7ab59fcd701ff75013b89f11089c2d3ea96cebe455d87fa7afc9\"" Oct 8 20:22:33.258899 containerd[1597]: 2024-10-08 20:22:33.224 [WARNING][5301] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="34fdef20a54d7ab59fcd701ff75013b89f11089c2d3ea96cebe455d87fa7afc9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--6--0b75032dd1.novalocal-k8s-calico--kube--controllers--6644949fbd--28g75-eth0", GenerateName:"calico-kube-controllers-6644949fbd-", Namespace:"calico-system", SelfLink:"", UID:"8cc7ecf6-b867-4caf-8b32-1704c973cd44", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 20, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6644949fbd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-6-0b75032dd1.novalocal", ContainerID:"a61b89a9d8ebff92e5d6273aae5ee90dad4d8ccb0d5632976811802590fbe7ef", Pod:"calico-kube-controllers-6644949fbd-28g75", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.118.67/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali42de0926d83", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:22:33.258899 containerd[1597]: 2024-10-08 20:22:33.225 [INFO][5301] k8s.go 608: Cleaning up netns ContainerID="34fdef20a54d7ab59fcd701ff75013b89f11089c2d3ea96cebe455d87fa7afc9" Oct 8 20:22:33.258899 containerd[1597]: 2024-10-08 20:22:33.225 [INFO][5301] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="34fdef20a54d7ab59fcd701ff75013b89f11089c2d3ea96cebe455d87fa7afc9" iface="eth0" netns="" Oct 8 20:22:33.258899 containerd[1597]: 2024-10-08 20:22:33.225 [INFO][5301] k8s.go 615: Releasing IP address(es) ContainerID="34fdef20a54d7ab59fcd701ff75013b89f11089c2d3ea96cebe455d87fa7afc9" Oct 8 20:22:33.258899 containerd[1597]: 2024-10-08 20:22:33.225 [INFO][5301] utils.go 188: Calico CNI releasing IP address ContainerID="34fdef20a54d7ab59fcd701ff75013b89f11089c2d3ea96cebe455d87fa7afc9" Oct 8 20:22:33.258899 containerd[1597]: 2024-10-08 20:22:33.246 [INFO][5307] ipam_plugin.go 417: Releasing address using handleID ContainerID="34fdef20a54d7ab59fcd701ff75013b89f11089c2d3ea96cebe455d87fa7afc9" HandleID="k8s-pod-network.34fdef20a54d7ab59fcd701ff75013b89f11089c2d3ea96cebe455d87fa7afc9" Workload="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-calico--kube--controllers--6644949fbd--28g75-eth0" Oct 8 20:22:33.258899 containerd[1597]: 2024-10-08 20:22:33.246 [INFO][5307] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:22:33.258899 containerd[1597]: 2024-10-08 20:22:33.246 [INFO][5307] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:22:33.258899 containerd[1597]: 2024-10-08 20:22:33.254 [WARNING][5307] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="34fdef20a54d7ab59fcd701ff75013b89f11089c2d3ea96cebe455d87fa7afc9" HandleID="k8s-pod-network.34fdef20a54d7ab59fcd701ff75013b89f11089c2d3ea96cebe455d87fa7afc9" Workload="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-calico--kube--controllers--6644949fbd--28g75-eth0" Oct 8 20:22:33.258899 containerd[1597]: 2024-10-08 20:22:33.254 [INFO][5307] ipam_plugin.go 445: Releasing address using workloadID ContainerID="34fdef20a54d7ab59fcd701ff75013b89f11089c2d3ea96cebe455d87fa7afc9" HandleID="k8s-pod-network.34fdef20a54d7ab59fcd701ff75013b89f11089c2d3ea96cebe455d87fa7afc9" Workload="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-calico--kube--controllers--6644949fbd--28g75-eth0" Oct 8 20:22:33.258899 containerd[1597]: 2024-10-08 20:22:33.256 [INFO][5307] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:22:33.258899 containerd[1597]: 2024-10-08 20:22:33.257 [INFO][5301] k8s.go 621: Teardown processing complete. ContainerID="34fdef20a54d7ab59fcd701ff75013b89f11089c2d3ea96cebe455d87fa7afc9" Oct 8 20:22:33.260757 containerd[1597]: time="2024-10-08T20:22:33.259377802Z" level=info msg="TearDown network for sandbox \"34fdef20a54d7ab59fcd701ff75013b89f11089c2d3ea96cebe455d87fa7afc9\" successfully" Oct 8 20:22:33.260757 containerd[1597]: time="2024-10-08T20:22:33.259421824Z" level=info msg="StopPodSandbox for \"34fdef20a54d7ab59fcd701ff75013b89f11089c2d3ea96cebe455d87fa7afc9\" returns successfully" Oct 8 20:22:33.260757 containerd[1597]: time="2024-10-08T20:22:33.260026591Z" level=info msg="RemovePodSandbox for \"34fdef20a54d7ab59fcd701ff75013b89f11089c2d3ea96cebe455d87fa7afc9\"" Oct 8 20:22:33.260757 containerd[1597]: time="2024-10-08T20:22:33.260070533Z" level=info msg="Forcibly stopping sandbox \"34fdef20a54d7ab59fcd701ff75013b89f11089c2d3ea96cebe455d87fa7afc9\"" Oct 8 20:22:33.362104 containerd[1597]: 2024-10-08 20:22:33.325 [WARNING][5326] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="34fdef20a54d7ab59fcd701ff75013b89f11089c2d3ea96cebe455d87fa7afc9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--6--0b75032dd1.novalocal-k8s-calico--kube--controllers--6644949fbd--28g75-eth0", GenerateName:"calico-kube-controllers-6644949fbd-", Namespace:"calico-system", SelfLink:"", UID:"8cc7ecf6-b867-4caf-8b32-1704c973cd44", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 20, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6644949fbd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-6-0b75032dd1.novalocal", ContainerID:"a61b89a9d8ebff92e5d6273aae5ee90dad4d8ccb0d5632976811802590fbe7ef", Pod:"calico-kube-controllers-6644949fbd-28g75", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.118.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali42de0926d83", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:22:33.362104 containerd[1597]: 2024-10-08 20:22:33.326 [INFO][5326] k8s.go 608: Cleaning up netns ContainerID="34fdef20a54d7ab59fcd701ff75013b89f11089c2d3ea96cebe455d87fa7afc9" Oct 8 20:22:33.362104 containerd[1597]: 2024-10-08 20:22:33.326 [INFO][5326] dataplane_linux.go 526: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="34fdef20a54d7ab59fcd701ff75013b89f11089c2d3ea96cebe455d87fa7afc9" iface="eth0" netns="" Oct 8 20:22:33.362104 containerd[1597]: 2024-10-08 20:22:33.326 [INFO][5326] k8s.go 615: Releasing IP address(es) ContainerID="34fdef20a54d7ab59fcd701ff75013b89f11089c2d3ea96cebe455d87fa7afc9" Oct 8 20:22:33.362104 containerd[1597]: 2024-10-08 20:22:33.326 [INFO][5326] utils.go 188: Calico CNI releasing IP address ContainerID="34fdef20a54d7ab59fcd701ff75013b89f11089c2d3ea96cebe455d87fa7afc9" Oct 8 20:22:33.362104 containerd[1597]: 2024-10-08 20:22:33.348 [INFO][5332] ipam_plugin.go 417: Releasing address using handleID ContainerID="34fdef20a54d7ab59fcd701ff75013b89f11089c2d3ea96cebe455d87fa7afc9" HandleID="k8s-pod-network.34fdef20a54d7ab59fcd701ff75013b89f11089c2d3ea96cebe455d87fa7afc9" Workload="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-calico--kube--controllers--6644949fbd--28g75-eth0" Oct 8 20:22:33.362104 containerd[1597]: 2024-10-08 20:22:33.348 [INFO][5332] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:22:33.362104 containerd[1597]: 2024-10-08 20:22:33.348 [INFO][5332] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:22:33.362104 containerd[1597]: 2024-10-08 20:22:33.355 [WARNING][5332] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="34fdef20a54d7ab59fcd701ff75013b89f11089c2d3ea96cebe455d87fa7afc9" HandleID="k8s-pod-network.34fdef20a54d7ab59fcd701ff75013b89f11089c2d3ea96cebe455d87fa7afc9" Workload="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-calico--kube--controllers--6644949fbd--28g75-eth0" Oct 8 20:22:33.362104 containerd[1597]: 2024-10-08 20:22:33.356 [INFO][5332] ipam_plugin.go 445: Releasing address using workloadID ContainerID="34fdef20a54d7ab59fcd701ff75013b89f11089c2d3ea96cebe455d87fa7afc9" HandleID="k8s-pod-network.34fdef20a54d7ab59fcd701ff75013b89f11089c2d3ea96cebe455d87fa7afc9" Workload="ci--4081--1--0--6--0b75032dd1.novalocal-k8s-calico--kube--controllers--6644949fbd--28g75-eth0" Oct 8 20:22:33.362104 containerd[1597]: 2024-10-08 20:22:33.358 [INFO][5332] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:22:33.362104 containerd[1597]: 2024-10-08 20:22:33.359 [INFO][5326] k8s.go 621: Teardown processing complete. ContainerID="34fdef20a54d7ab59fcd701ff75013b89f11089c2d3ea96cebe455d87fa7afc9" Oct 8 20:22:33.362104 containerd[1597]: time="2024-10-08T20:22:33.361663726Z" level=info msg="TearDown network for sandbox \"34fdef20a54d7ab59fcd701ff75013b89f11089c2d3ea96cebe455d87fa7afc9\" successfully" Oct 8 20:22:33.369354 containerd[1597]: time="2024-10-08T20:22:33.369091074Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"34fdef20a54d7ab59fcd701ff75013b89f11089c2d3ea96cebe455d87fa7afc9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 8 20:22:33.369354 containerd[1597]: time="2024-10-08T20:22:33.369191923Z" level=info msg="RemovePodSandbox \"34fdef20a54d7ab59fcd701ff75013b89f11089c2d3ea96cebe455d87fa7afc9\" returns successfully" Oct 8 20:22:35.773473 systemd[1]: Started sshd@17-172.24.4.55:22-172.24.4.1:38622.service - OpenSSH per-connection server daemon (172.24.4.1:38622). 
Oct 8 20:22:36.891402 sshd[5340]: Accepted publickey for core from 172.24.4.1 port 38622 ssh2: RSA SHA256:N4tAxOYyt600zP8LzVHN9krjQqk3csZTCmZq/eMm2uA Oct 8 20:22:36.894646 sshd[5340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:22:36.905003 systemd-logind[1572]: New session 20 of user core. Oct 8 20:22:36.915565 systemd[1]: Started session-20.scope - Session 20 of User core. Oct 8 20:22:37.806434 sshd[5340]: pam_unix(sshd:session): session closed for user core Oct 8 20:22:37.815568 systemd[1]: sshd@17-172.24.4.55:22-172.24.4.1:38622.service: Deactivated successfully. Oct 8 20:22:37.825473 systemd[1]: session-20.scope: Deactivated successfully. Oct 8 20:22:37.827181 systemd-logind[1572]: Session 20 logged out. Waiting for processes to exit. Oct 8 20:22:37.829764 systemd-logind[1572]: Removed session 20. Oct 8 20:22:42.818457 systemd[1]: Started sshd@18-172.24.4.55:22-172.24.4.1:38632.service - OpenSSH per-connection server daemon (172.24.4.1:38632). Oct 8 20:22:43.947552 sshd[5384]: Accepted publickey for core from 172.24.4.1 port 38632 ssh2: RSA SHA256:N4tAxOYyt600zP8LzVHN9krjQqk3csZTCmZq/eMm2uA Oct 8 20:22:43.968497 sshd[5384]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:22:43.991426 systemd-logind[1572]: New session 21 of user core. Oct 8 20:22:44.000652 systemd[1]: Started session-21.scope - Session 21 of User core. Oct 8 20:22:44.962940 sshd[5384]: pam_unix(sshd:session): session closed for user core Oct 8 20:22:44.969505 systemd[1]: sshd@18-172.24.4.55:22-172.24.4.1:38632.service: Deactivated successfully. Oct 8 20:22:44.972903 systemd-logind[1572]: Session 21 logged out. Waiting for processes to exit. Oct 8 20:22:44.973894 systemd[1]: session-21.scope: Deactivated successfully. Oct 8 20:22:44.975201 systemd-logind[1572]: Removed session 21. 
Oct 8 20:22:49.972211 systemd[1]: Started sshd@19-172.24.4.55:22-172.24.4.1:44730.service - OpenSSH per-connection server daemon (172.24.4.1:44730).
Oct 8 20:22:51.303176 sshd[5408]: Accepted publickey for core from 172.24.4.1 port 44730 ssh2: RSA SHA256:N4tAxOYyt600zP8LzVHN9krjQqk3csZTCmZq/eMm2uA
Oct 8 20:22:51.317311 sshd[5408]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 20:22:51.326219 systemd-logind[1572]: New session 22 of user core.
Oct 8 20:22:51.334912 systemd[1]: Started session-22.scope - Session 22 of User core.
Oct 8 20:22:52.223133 sshd[5408]: pam_unix(sshd:session): session closed for user core
Oct 8 20:22:52.233360 systemd[1]: Started sshd@20-172.24.4.55:22-172.24.4.1:44742.service - OpenSSH per-connection server daemon (172.24.4.1:44742).
Oct 8 20:22:52.238306 systemd[1]: sshd@19-172.24.4.55:22-172.24.4.1:44730.service: Deactivated successfully.
Oct 8 20:22:52.251350 systemd[1]: session-22.scope: Deactivated successfully.
Oct 8 20:22:52.253941 systemd-logind[1572]: Session 22 logged out. Waiting for processes to exit.
Oct 8 20:22:52.258380 systemd-logind[1572]: Removed session 22.
Oct 8 20:22:53.621867 sshd[5439]: Accepted publickey for core from 172.24.4.1 port 44742 ssh2: RSA SHA256:N4tAxOYyt600zP8LzVHN9krjQqk3csZTCmZq/eMm2uA
Oct 8 20:22:53.625191 sshd[5439]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 20:22:53.637245 systemd-logind[1572]: New session 23 of user core.
Oct 8 20:22:53.655871 systemd[1]: Started session-23.scope - Session 23 of User core.
Oct 8 20:22:54.801429 sshd[5439]: pam_unix(sshd:session): session closed for user core
Oct 8 20:22:54.811473 systemd[1]: Started sshd@21-172.24.4.55:22-172.24.4.1:47014.service - OpenSSH per-connection server daemon (172.24.4.1:47014).
Oct 8 20:22:54.814254 systemd[1]: sshd@20-172.24.4.55:22-172.24.4.1:44742.service: Deactivated successfully.
Oct 8 20:22:54.819881 systemd-logind[1572]: Session 23 logged out. Waiting for processes to exit.
Oct 8 20:22:54.821855 systemd[1]: session-23.scope: Deactivated successfully.
Oct 8 20:22:54.825702 systemd-logind[1572]: Removed session 23.
Oct 8 20:22:55.975836 sshd[5457]: Accepted publickey for core from 172.24.4.1 port 47014 ssh2: RSA SHA256:N4tAxOYyt600zP8LzVHN9krjQqk3csZTCmZq/eMm2uA
Oct 8 20:22:55.985848 sshd[5457]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 20:22:56.005220 systemd-logind[1572]: New session 24 of user core.
Oct 8 20:22:56.011167 systemd[1]: Started session-24.scope - Session 24 of User core.
Oct 8 20:22:59.072593 sshd[5457]: pam_unix(sshd:session): session closed for user core
Oct 8 20:22:59.080014 systemd[1]: Started sshd@22-172.24.4.55:22-172.24.4.1:47024.service - OpenSSH per-connection server daemon (172.24.4.1:47024).
Oct 8 20:22:59.089791 systemd[1]: sshd@21-172.24.4.55:22-172.24.4.1:47014.service: Deactivated successfully.
Oct 8 20:22:59.096616 systemd-logind[1572]: Session 24 logged out. Waiting for processes to exit.
Oct 8 20:22:59.097019 systemd[1]: session-24.scope: Deactivated successfully.
Oct 8 20:22:59.099590 systemd-logind[1572]: Removed session 24.
Oct 8 20:23:00.300412 systemd-journald[1119]: Under memory pressure, flushing caches.
Oct 8 20:23:00.297725 systemd-resolved[1471]: Under memory pressure, flushing caches.
Oct 8 20:23:00.297759 systemd-resolved[1471]: Flushed all caches.
Oct 8 20:23:00.376180 sshd[5503]: Accepted publickey for core from 172.24.4.1 port 47024 ssh2: RSA SHA256:N4tAxOYyt600zP8LzVHN9krjQqk3csZTCmZq/eMm2uA
Oct 8 20:23:00.379421 sshd[5503]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 20:23:00.393022 systemd-logind[1572]: New session 25 of user core.
Oct 8 20:23:00.398856 systemd[1]: Started session-25.scope - Session 25 of User core.
Oct 8 20:23:02.347739 systemd-journald[1119]: Under memory pressure, flushing caches.
Oct 8 20:23:02.345609 systemd-resolved[1471]: Under memory pressure, flushing caches.
Oct 8 20:23:02.345622 systemd-resolved[1471]: Flushed all caches.
Oct 8 20:23:02.920990 sshd[5503]: pam_unix(sshd:session): session closed for user core
Oct 8 20:23:02.929291 systemd[1]: Started sshd@23-172.24.4.55:22-172.24.4.1:47034.service - OpenSSH per-connection server daemon (172.24.4.1:47034).
Oct 8 20:23:02.932709 systemd[1]: sshd@22-172.24.4.55:22-172.24.4.1:47024.service: Deactivated successfully.
Oct 8 20:23:02.944458 systemd[1]: session-25.scope: Deactivated successfully.
Oct 8 20:23:02.946540 systemd-logind[1572]: Session 25 logged out. Waiting for processes to exit.
Oct 8 20:23:02.948697 systemd-logind[1572]: Removed session 25.
Oct 8 20:23:04.305039 sshd[5517]: Accepted publickey for core from 172.24.4.1 port 47034 ssh2: RSA SHA256:N4tAxOYyt600zP8LzVHN9krjQqk3csZTCmZq/eMm2uA
Oct 8 20:23:04.319019 sshd[5517]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 20:23:04.329034 systemd-logind[1572]: New session 26 of user core.
Oct 8 20:23:04.336327 systemd[1]: Started session-26.scope - Session 26 of User core.
Oct 8 20:23:04.395226 systemd-journald[1119]: Under memory pressure, flushing caches.
Oct 8 20:23:04.393299 systemd-resolved[1471]: Under memory pressure, flushing caches.
Oct 8 20:23:04.393311 systemd-resolved[1471]: Flushed all caches.
Oct 8 20:23:05.323325 sshd[5517]: pam_unix(sshd:session): session closed for user core
Oct 8 20:23:05.327898 systemd-logind[1572]: Session 26 logged out. Waiting for processes to exit.
Oct 8 20:23:05.328208 systemd[1]: sshd@23-172.24.4.55:22-172.24.4.1:47034.service: Deactivated successfully.
Oct 8 20:23:05.335640 systemd[1]: session-26.scope: Deactivated successfully.
Oct 8 20:23:05.338122 systemd-logind[1572]: Removed session 26.
Oct 8 20:23:06.444580 systemd-journald[1119]: Under memory pressure, flushing caches.
Oct 8 20:23:06.441439 systemd-resolved[1471]: Under memory pressure, flushing caches.
Oct 8 20:23:06.442113 systemd-resolved[1471]: Flushed all caches.
Oct 8 20:23:10.336020 systemd[1]: Started sshd@24-172.24.4.55:22-172.24.4.1:37898.service - OpenSSH per-connection server daemon (172.24.4.1:37898).
Oct 8 20:23:11.683315 sshd[5575]: Accepted publickey for core from 172.24.4.1 port 37898 ssh2: RSA SHA256:N4tAxOYyt600zP8LzVHN9krjQqk3csZTCmZq/eMm2uA
Oct 8 20:23:11.686499 sshd[5575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 20:23:11.696905 systemd-logind[1572]: New session 27 of user core.
Oct 8 20:23:11.705067 systemd[1]: Started session-27.scope - Session 27 of User core.
Oct 8 20:23:12.450286 sshd[5575]: pam_unix(sshd:session): session closed for user core
Oct 8 20:23:12.454702 systemd[1]: sshd@24-172.24.4.55:22-172.24.4.1:37898.service: Deactivated successfully.
Oct 8 20:23:12.457787 systemd[1]: session-27.scope: Deactivated successfully.
Oct 8 20:23:12.458246 systemd-logind[1572]: Session 27 logged out. Waiting for processes to exit.
Oct 8 20:23:12.459848 systemd-logind[1572]: Removed session 27.
Oct 8 20:23:17.466430 systemd[1]: Started sshd@25-172.24.4.55:22-172.24.4.1:35354.service - OpenSSH per-connection server daemon (172.24.4.1:35354).
Oct 8 20:23:18.818307 sshd[5591]: Accepted publickey for core from 172.24.4.1 port 35354 ssh2: RSA SHA256:N4tAxOYyt600zP8LzVHN9krjQqk3csZTCmZq/eMm2uA
Oct 8 20:23:18.821632 sshd[5591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 20:23:18.833886 systemd-logind[1572]: New session 28 of user core.
Oct 8 20:23:18.839325 systemd[1]: Started session-28.scope - Session 28 of User core.
Oct 8 20:23:19.805520 sshd[5591]: pam_unix(sshd:session): session closed for user core
Oct 8 20:23:19.809840 systemd[1]: sshd@25-172.24.4.55:22-172.24.4.1:35354.service: Deactivated successfully.
Oct 8 20:23:19.816138 systemd-logind[1572]: Session 28 logged out. Waiting for processes to exit.
Oct 8 20:23:19.816463 systemd[1]: session-28.scope: Deactivated successfully.
Oct 8 20:23:19.818918 systemd-logind[1572]: Removed session 28.
Oct 8 20:23:24.815368 systemd[1]: Started sshd@26-172.24.4.55:22-172.24.4.1:40656.service - OpenSSH per-connection server daemon (172.24.4.1:40656).
Oct 8 20:23:25.984601 sshd[5632]: Accepted publickey for core from 172.24.4.1 port 40656 ssh2: RSA SHA256:N4tAxOYyt600zP8LzVHN9krjQqk3csZTCmZq/eMm2uA
Oct 8 20:23:25.986981 sshd[5632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 20:23:25.996631 systemd-logind[1572]: New session 29 of user core.
Oct 8 20:23:26.006515 systemd[1]: Started session-29.scope - Session 29 of User core.
Oct 8 20:23:26.793402 sshd[5632]: pam_unix(sshd:session): session closed for user core
Oct 8 20:23:26.799700 systemd[1]: sshd@26-172.24.4.55:22-172.24.4.1:40656.service: Deactivated successfully.
Oct 8 20:23:26.808612 systemd-logind[1572]: Session 29 logged out. Waiting for processes to exit.
Oct 8 20:23:26.809858 systemd[1]: session-29.scope: Deactivated successfully.
Oct 8 20:23:26.812507 systemd-logind[1572]: Removed session 29.
Oct 8 20:23:31.805830 systemd[1]: Started sshd@27-172.24.4.55:22-172.24.4.1:40672.service - OpenSSH per-connection server daemon (172.24.4.1:40672).
Oct 8 20:23:33.223197 sshd[5652]: Accepted publickey for core from 172.24.4.1 port 40672 ssh2: RSA SHA256:N4tAxOYyt600zP8LzVHN9krjQqk3csZTCmZq/eMm2uA
Oct 8 20:23:33.224698 sshd[5652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 20:23:33.235173 systemd-logind[1572]: New session 30 of user core.
Oct 8 20:23:33.244007 systemd[1]: Started session-30.scope - Session 30 of User core.
Oct 8 20:23:34.073179 sshd[5652]: pam_unix(sshd:session): session closed for user core
Oct 8 20:23:34.081573 systemd[1]: sshd@27-172.24.4.55:22-172.24.4.1:40672.service: Deactivated successfully.
Oct 8 20:23:34.086735 systemd-logind[1572]: Session 30 logged out. Waiting for processes to exit.
Oct 8 20:23:34.087186 systemd[1]: session-30.scope: Deactivated successfully.
Oct 8 20:23:34.091674 systemd-logind[1572]: Removed session 30.