Sep 4 17:56:08.982502 kernel: Linux version 6.6.48-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Sep 4 15:54:07 -00 2024
Sep 4 17:56:08.982556 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=ceda2dd706627da8006bcd6ae77ea155b2a7de6732e2c1c7ab4bed271400663d
Sep 4 17:56:08.982570 kernel: BIOS-provided physical RAM map:
Sep 4 17:56:08.982578 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 4 17:56:08.982586 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 4 17:56:08.982593 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 4 17:56:08.982603 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Sep 4 17:56:08.982611 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Sep 4 17:56:08.982618 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 4 17:56:08.982629 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 4 17:56:08.982637 kernel: NX (Execute Disable) protection: active
Sep 4 17:56:08.982646 kernel: APIC: Static calls initialized
Sep 4 17:56:08.982654 kernel: SMBIOS 2.8 present.
Sep 4 17:56:08.982662 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Sep 4 17:56:08.982671 kernel: Hypervisor detected: KVM
Sep 4 17:56:08.982681 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 4 17:56:08.982689 kernel: kvm-clock: using sched offset of 4049985425 cycles
Sep 4 17:56:08.982697 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 4 17:56:08.982705 kernel: tsc: Detected 1996.249 MHz processor
Sep 4 17:56:08.982713 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 4 17:56:08.982722 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 4 17:56:08.982730 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Sep 4 17:56:08.982738 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Sep 4 17:56:08.982746 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 4 17:56:08.982756 kernel: ACPI: Early table checksum verification disabled
Sep 4 17:56:08.982764 kernel: ACPI: RSDP 0x00000000000F5930 000014 (v00 BOCHS )
Sep 4 17:56:08.982772 kernel: ACPI: RSDT 0x000000007FFE1848 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 17:56:08.982780 kernel: ACPI: FACP 0x000000007FFE172C 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 17:56:08.982788 kernel: ACPI: DSDT 0x000000007FFE0040 0016EC (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 17:56:08.982796 kernel: ACPI: FACS 0x000000007FFE0000 000040
Sep 4 17:56:08.982804 kernel: ACPI: APIC 0x000000007FFE17A0 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 17:56:08.982812 kernel: ACPI: WAET 0x000000007FFE1820 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 17:56:08.982819 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe172c-0x7ffe179f]
Sep 4 17:56:08.982830 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe172b]
Sep 4 17:56:08.982838 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Sep 4 17:56:08.982845 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17a0-0x7ffe181f]
Sep 4 17:56:08.982853 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe1820-0x7ffe1847]
Sep 4 17:56:08.982861 kernel: No NUMA configuration found
Sep 4 17:56:08.982869 kernel: Faking a node at [mem 0x0000000000000000-0x000000007ffdcfff]
Sep 4 17:56:08.982877 kernel: NODE_DATA(0) allocated [mem 0x7ffd7000-0x7ffdcfff]
Sep 4 17:56:08.982888 kernel: Zone ranges:
Sep 4 17:56:08.982897 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 4 17:56:08.982906 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdcfff]
Sep 4 17:56:08.982914 kernel: Normal empty
Sep 4 17:56:08.982922 kernel: Movable zone start for each node
Sep 4 17:56:08.982930 kernel: Early memory node ranges
Sep 4 17:56:08.982938 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 4 17:56:08.982947 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Sep 4 17:56:08.982957 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdcfff]
Sep 4 17:56:08.982965 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 4 17:56:08.982973 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 4 17:56:08.982981 kernel: On node 0, zone DMA32: 35 pages in unavailable ranges
Sep 4 17:56:08.982989 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 4 17:56:08.982998 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 4 17:56:08.983006 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 4 17:56:08.983014 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 4 17:56:08.983022 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 4 17:56:08.983032 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 4 17:56:08.983040 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 4 17:56:08.983049 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 4 17:56:08.983057 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 4 17:56:08.983065 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Sep 4 17:56:08.983073 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 4 17:56:08.983082 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Sep 4 17:56:08.983090 kernel: Booting paravirtualized kernel on KVM
Sep 4 17:56:08.983098 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 4 17:56:08.983109 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Sep 4 17:56:08.983117 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576
Sep 4 17:56:08.983125 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152
Sep 4 17:56:08.983133 kernel: pcpu-alloc: [0] 0 1
Sep 4 17:56:08.983141 kernel: kvm-guest: PV spinlocks disabled, no host support
Sep 4 17:56:08.983151 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=ceda2dd706627da8006bcd6ae77ea155b2a7de6732e2c1c7ab4bed271400663d
Sep 4 17:56:08.983160 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 4 17:56:08.983168 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 4 17:56:08.983178 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 4 17:56:08.983187 kernel: Fallback order for Node 0: 0
Sep 4 17:56:08.983195 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515805
Sep 4 17:56:08.983203 kernel: Policy zone: DMA32
Sep 4 17:56:08.983211 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 4 17:56:08.983220 kernel: Memory: 1971208K/2096620K available (12288K kernel code, 2304K rwdata, 22708K rodata, 42704K init, 2488K bss, 125152K reserved, 0K cma-reserved)
Sep 4 17:56:08.983228 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 4 17:56:08.987520 kernel: ftrace: allocating 37748 entries in 148 pages
Sep 4 17:56:08.987551 kernel: ftrace: allocated 148 pages with 3 groups
Sep 4 17:56:08.987560 kernel: Dynamic Preempt: voluntary
Sep 4 17:56:08.987569 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 4 17:56:08.987578 kernel: rcu: RCU event tracing is enabled.
Sep 4 17:56:08.987587 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 4 17:56:08.987595 kernel: Trampoline variant of Tasks RCU enabled.
Sep 4 17:56:08.987604 kernel: Rude variant of Tasks RCU enabled.
Sep 4 17:56:08.987612 kernel: Tracing variant of Tasks RCU enabled.
Sep 4 17:56:08.987620 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 4 17:56:08.987629 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 4 17:56:08.987640 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Sep 4 17:56:08.987648 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 4 17:56:08.987656 kernel: Console: colour VGA+ 80x25
Sep 4 17:56:08.987665 kernel: printk: console [tty0] enabled
Sep 4 17:56:08.987673 kernel: printk: console [ttyS0] enabled
Sep 4 17:56:08.987681 kernel: ACPI: Core revision 20230628
Sep 4 17:56:08.987690 kernel: APIC: Switch to symmetric I/O mode setup
Sep 4 17:56:08.987698 kernel: x2apic enabled
Sep 4 17:56:08.987706 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 4 17:56:08.987717 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 4 17:56:08.987725 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Sep 4 17:56:08.987734 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249)
Sep 4 17:56:08.987742 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Sep 4 17:56:08.987750 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Sep 4 17:56:08.987759 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 4 17:56:08.987767 kernel: Spectre V2 : Mitigation: Retpolines
Sep 4 17:56:08.987775 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Sep 4 17:56:08.987784 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Sep 4 17:56:08.987795 kernel: Speculative Store Bypass: Vulnerable
Sep 4 17:56:08.987803 kernel: x86/fpu: x87 FPU will use FXSAVE
Sep 4 17:56:08.987811 kernel: Freeing SMP alternatives memory: 32K
Sep 4 17:56:08.987820 kernel: pid_max: default: 32768 minimum: 301
Sep 4 17:56:08.987828 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 4 17:56:08.987836 kernel: landlock: Up and running.
Sep 4 17:56:08.987844 kernel: SELinux: Initializing.
Sep 4 17:56:08.987853 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 4 17:56:08.987869 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 4 17:56:08.987878 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3)
Sep 4 17:56:08.987887 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Sep 4 17:56:08.987895 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Sep 4 17:56:08.987906 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Sep 4 17:56:08.987914 kernel: Performance Events: AMD PMU driver.
Sep 4 17:56:08.987923 kernel: ... version: 0
Sep 4 17:56:08.987932 kernel: ... bit width: 48
Sep 4 17:56:08.987941 kernel: ... generic registers: 4
Sep 4 17:56:08.987951 kernel: ... value mask: 0000ffffffffffff
Sep 4 17:56:08.987960 kernel: ... max period: 00007fffffffffff
Sep 4 17:56:08.987968 kernel: ... fixed-purpose events: 0
Sep 4 17:56:08.987977 kernel: ... event mask: 000000000000000f
Sep 4 17:56:08.987986 kernel: signal: max sigframe size: 1440
Sep 4 17:56:08.987994 kernel: rcu: Hierarchical SRCU implementation.
Sep 4 17:56:08.988003 kernel: rcu: Max phase no-delay instances is 400.
Sep 4 17:56:08.988012 kernel: smp: Bringing up secondary CPUs ...
Sep 4 17:56:08.988021 kernel: smpboot: x86: Booting SMP configuration:
Sep 4 17:56:08.988032 kernel: .... node #0, CPUs: #1
Sep 4 17:56:08.988040 kernel: smp: Brought up 1 node, 2 CPUs
Sep 4 17:56:08.988049 kernel: smpboot: Max logical packages: 2
Sep 4 17:56:08.988058 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS)
Sep 4 17:56:08.988067 kernel: devtmpfs: initialized
Sep 4 17:56:08.988075 kernel: x86/mm: Memory block size: 128MB
Sep 4 17:56:08.988084 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 4 17:56:08.988093 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 4 17:56:08.988102 kernel: pinctrl core: initialized pinctrl subsystem
Sep 4 17:56:08.988112 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 4 17:56:08.988121 kernel: audit: initializing netlink subsys (disabled)
Sep 4 17:56:08.988130 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 4 17:56:08.988138 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 4 17:56:08.988147 kernel: audit: type=2000 audit(1725472567.907:1): state=initialized audit_enabled=0 res=1
Sep 4 17:56:08.988156 kernel: cpuidle: using governor menu
Sep 4 17:56:08.988165 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 4 17:56:08.988173 kernel: dca service started, version 1.12.1
Sep 4 17:56:08.988182 kernel: PCI: Using configuration type 1 for base access
Sep 4 17:56:08.988193 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 4 17:56:08.988201 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 4 17:56:08.988210 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 4 17:56:08.988219 kernel: ACPI: Added _OSI(Module Device)
Sep 4 17:56:08.988228 kernel: ACPI: Added _OSI(Processor Device)
Sep 4 17:56:08.989253 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Sep 4 17:56:08.989267 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 4 17:56:08.989275 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 4 17:56:08.989284 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 4 17:56:08.989297 kernel: ACPI: Interpreter enabled
Sep 4 17:56:08.989318 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 4 17:56:08.989327 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 4 17:56:08.989336 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 4 17:56:08.989345 kernel: PCI: Using E820 reservations for host bridge windows
Sep 4 17:56:08.989354 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Sep 4 17:56:08.989363 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 4 17:56:08.989541 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Sep 4 17:56:08.989651 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Sep 4 17:56:08.989747 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Sep 4 17:56:08.989762 kernel: acpiphp: Slot [3] registered
Sep 4 17:56:08.989771 kernel: acpiphp: Slot [4] registered
Sep 4 17:56:08.989781 kernel: acpiphp: Slot [5] registered
Sep 4 17:56:08.989790 kernel: acpiphp: Slot [6] registered
Sep 4 17:56:08.989799 kernel: acpiphp: Slot [7] registered
Sep 4 17:56:08.989808 kernel: acpiphp: Slot [8] registered
Sep 4 17:56:08.989821 kernel: acpiphp: Slot [9] registered
Sep 4 17:56:08.989830 kernel: acpiphp: Slot [10] registered
Sep 4 17:56:08.989840 kernel: acpiphp: Slot [11] registered
Sep 4 17:56:08.989849 kernel: acpiphp: Slot [12] registered
Sep 4 17:56:08.989858 kernel: acpiphp: Slot [13] registered
Sep 4 17:56:08.989867 kernel: acpiphp: Slot [14] registered
Sep 4 17:56:08.989876 kernel: acpiphp: Slot [15] registered
Sep 4 17:56:08.989885 kernel: acpiphp: Slot [16] registered
Sep 4 17:56:08.989895 kernel: acpiphp: Slot [17] registered
Sep 4 17:56:08.989904 kernel: acpiphp: Slot [18] registered
Sep 4 17:56:08.989915 kernel: acpiphp: Slot [19] registered
Sep 4 17:56:08.989924 kernel: acpiphp: Slot [20] registered
Sep 4 17:56:08.989934 kernel: acpiphp: Slot [21] registered
Sep 4 17:56:08.989943 kernel: acpiphp: Slot [22] registered
Sep 4 17:56:08.989952 kernel: acpiphp: Slot [23] registered
Sep 4 17:56:08.989971 kernel: acpiphp: Slot [24] registered
Sep 4 17:56:08.989980 kernel: acpiphp: Slot [25] registered
Sep 4 17:56:08.989990 kernel: acpiphp: Slot [26] registered
Sep 4 17:56:08.989999 kernel: acpiphp: Slot [27] registered
Sep 4 17:56:08.990011 kernel: acpiphp: Slot [28] registered
Sep 4 17:56:08.990020 kernel: acpiphp: Slot [29] registered
Sep 4 17:56:08.990029 kernel: acpiphp: Slot [30] registered
Sep 4 17:56:08.990038 kernel: acpiphp: Slot [31] registered
Sep 4 17:56:08.990047 kernel: PCI host bridge to bus 0000:00
Sep 4 17:56:08.990153 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 4 17:56:08.992287 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 4 17:56:08.992385 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 4 17:56:08.993622 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Sep 4 17:56:08.993879 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Sep 4 17:56:08.994013 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 4 17:56:08.994214 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Sep 4 17:56:08.996500 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Sep 4 17:56:08.996705 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Sep 4 17:56:08.996914 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f]
Sep 4 17:56:08.997031 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Sep 4 17:56:08.997137 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Sep 4 17:56:08.997280 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Sep 4 17:56:08.997425 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Sep 4 17:56:08.997539 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Sep 4 17:56:08.997638 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Sep 4 17:56:08.997740 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Sep 4 17:56:08.997855 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Sep 4 17:56:08.997956 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Sep 4 17:56:08.998063 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Sep 4 17:56:08.998161 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
Sep 4 17:56:09.000310 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
Sep 4 17:56:09.000431 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 4 17:56:09.000549 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Sep 4 17:56:09.000648 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf]
Sep 4 17:56:09.000748 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
Sep 4 17:56:09.000846 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Sep 4 17:56:09.000943 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
Sep 4 17:56:09.001050 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Sep 4 17:56:09.001466 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Sep 4 17:56:09.001835 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
Sep 4 17:56:09.002058 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Sep 4 17:56:09.002168 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
Sep 4 17:56:09.004616 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff]
Sep 4 17:56:09.004835 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Sep 4 17:56:09.004945 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
Sep 4 17:56:09.005043 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f]
Sep 4 17:56:09.005185 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Sep 4 17:56:09.005223 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 4 17:56:09.005234 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 4 17:56:09.005354 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 4 17:56:09.005365 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 4 17:56:09.005374 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Sep 4 17:56:09.005384 kernel: iommu: Default domain type: Translated
Sep 4 17:56:09.005401 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 4 17:56:09.005429 kernel: PCI: Using ACPI for IRQ routing
Sep 4 17:56:09.005439 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 4 17:56:09.005449 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 4 17:56:09.005458 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Sep 4 17:56:09.005567 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Sep 4 17:56:09.005663 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Sep 4 17:56:09.005757 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 4 17:56:09.005771 kernel: vgaarb: loaded
Sep 4 17:56:09.005781 kernel: clocksource: Switched to clocksource kvm-clock
Sep 4 17:56:09.005795 kernel: VFS: Disk quotas dquot_6.6.0
Sep 4 17:56:09.005804 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 4 17:56:09.005814 kernel: pnp: PnP ACPI init
Sep 4 17:56:09.005912 kernel: pnp 00:03: [dma 2]
Sep 4 17:56:09.005927 kernel: pnp: PnP ACPI: found 5 devices
Sep 4 17:56:09.005937 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 4 17:56:09.005954 kernel: NET: Registered PF_INET protocol family
Sep 4 17:56:09.005963 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 4 17:56:09.005976 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Sep 4 17:56:09.005986 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 4 17:56:09.005995 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 4 17:56:09.006005 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Sep 4 17:56:09.006014 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Sep 4 17:56:09.006024 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 4 17:56:09.006035 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 4 17:56:09.006045 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 4 17:56:09.006055 kernel: NET: Registered PF_XDP protocol family
Sep 4 17:56:09.006156 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 4 17:56:09.007949 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 4 17:56:09.008330 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 4 17:56:09.008442 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Sep 4 17:56:09.008541 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Sep 4 17:56:09.008695 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Sep 4 17:56:09.008814 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Sep 4 17:56:09.008832 kernel: PCI: CLS 0 bytes, default 64
Sep 4 17:56:09.008864 kernel: Initialise system trusted keyrings
Sep 4 17:56:09.008876 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Sep 4 17:56:09.008887 kernel: Key type asymmetric registered
Sep 4 17:56:09.008898 kernel: Asymmetric key parser 'x509' registered
Sep 4 17:56:09.008909 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep 4 17:56:09.008920 kernel: io scheduler mq-deadline registered
Sep 4 17:56:09.008931 kernel: io scheduler kyber registered
Sep 4 17:56:09.008942 kernel: io scheduler bfq registered
Sep 4 17:56:09.008953 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 4 17:56:09.008975 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Sep 4 17:56:09.008987 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Sep 4 17:56:09.008998 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Sep 4 17:56:09.009009 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Sep 4 17:56:09.009020 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 4 17:56:09.009031 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 4 17:56:09.009043 kernel: random: crng init done
Sep 4 17:56:09.009053 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 4 17:56:09.009064 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 4 17:56:09.009078 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 4 17:56:09.009215 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 4 17:56:09.009254 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 4 17:56:09.009381 kernel: rtc_cmos 00:04: registered as rtc0
Sep 4 17:56:09.009491 kernel: rtc_cmos 00:04: setting system clock to 2024-09-04T17:56:08 UTC (1725472568)
Sep 4 17:56:09.009588 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Sep 4 17:56:09.009604 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 4 17:56:09.009616 kernel: NET: Registered PF_INET6 protocol family
Sep 4 17:56:09.009631 kernel: Segment Routing with IPv6
Sep 4 17:56:09.009642 kernel: In-situ OAM (IOAM) with IPv6
Sep 4 17:56:09.009653 kernel: NET: Registered PF_PACKET protocol family
Sep 4 17:56:09.009664 kernel: Key type dns_resolver registered
Sep 4 17:56:09.009675 kernel: IPI shorthand broadcast: enabled
Sep 4 17:56:09.009686 kernel: sched_clock: Marking stable (965029813, 125099692)->(1093893307, -3763802)
Sep 4 17:56:09.009697 kernel: registered taskstats version 1
Sep 4 17:56:09.009707 kernel: Loading compiled-in X.509 certificates
Sep 4 17:56:09.009719 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.48-flatcar: 8669771ab5e11f458b79e6634fe685dacc266b18'
Sep 4 17:56:09.009732 kernel: Key type .fscrypt registered
Sep 4 17:56:09.009743 kernel: Key type fscrypt-provisioning registered
Sep 4 17:56:09.009754 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 4 17:56:09.009766 kernel: ima: Allocated hash algorithm: sha1
Sep 4 17:56:09.009776 kernel: ima: No architecture policies found
Sep 4 17:56:09.009787 kernel: clk: Disabling unused clocks
Sep 4 17:56:09.009798 kernel: Freeing unused kernel image (initmem) memory: 42704K
Sep 4 17:56:09.009808 kernel: Write protecting the kernel read-only data: 36864k
Sep 4 17:56:09.009821 kernel: Freeing unused kernel image (rodata/data gap) memory: 1868K
Sep 4 17:56:09.009832 kernel: Run /init as init process
Sep 4 17:56:09.009843 kernel: with arguments:
Sep 4 17:56:09.009854 kernel: /init
Sep 4 17:56:09.009865 kernel: with environment:
Sep 4 17:56:09.009875 kernel: HOME=/
Sep 4 17:56:09.009886 kernel: TERM=linux
Sep 4 17:56:09.009896 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 4 17:56:09.009918 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 4 17:56:09.009936 systemd[1]: Detected virtualization kvm.
Sep 4 17:56:09.009948 systemd[1]: Detected architecture x86-64.
Sep 4 17:56:09.009959 systemd[1]: Running in initrd.
Sep 4 17:56:09.009971 systemd[1]: No hostname configured, using default hostname.
Sep 4 17:56:09.009982 systemd[1]: Hostname set to .
Sep 4 17:56:09.009994 systemd[1]: Initializing machine ID from VM UUID.
Sep 4 17:56:09.010006 systemd[1]: Queued start job for default target initrd.target.
Sep 4 17:56:09.010020 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 17:56:09.010033 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 17:56:09.010049 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 4 17:56:09.010062 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 17:56:09.010074 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 4 17:56:09.010086 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 4 17:56:09.010100 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 4 17:56:09.010115 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 4 17:56:09.010126 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 17:56:09.010139 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 17:56:09.010151 systemd[1]: Reached target paths.target - Path Units.
Sep 4 17:56:09.010174 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 17:56:09.010187 systemd[1]: Reached target swap.target - Swaps.
Sep 4 17:56:09.010201 systemd[1]: Reached target timers.target - Timer Units.
Sep 4 17:56:09.010213 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 17:56:09.010226 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 17:56:09.010259 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 4 17:56:09.010273 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 4 17:56:09.010285 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 17:56:09.010297 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 17:56:09.010309 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 17:56:09.010321 systemd[1]: Reached target sockets.target - Socket Units.
Sep 4 17:56:09.010336 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 4 17:56:09.010349 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 17:56:09.010361 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 4 17:56:09.010373 systemd[1]: Starting systemd-fsck-usr.service...
Sep 4 17:56:09.010384 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 17:56:09.010396 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 17:56:09.010409 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:56:09.010420 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 4 17:56:09.010436 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 17:56:09.010448 systemd[1]: Finished systemd-fsck-usr.service.
Sep 4 17:56:09.010544 systemd-journald[185]: Collecting audit messages is disabled.
Sep 4 17:56:09.010579 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 4 17:56:09.010592 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 4 17:56:09.010606 kernel: Bridge firewalling registered
Sep 4 17:56:09.010627 systemd-journald[185]: Journal started
Sep 4 17:56:09.010667 systemd-journald[185]: Runtime Journal (/run/log/journal/aebaa87ea37f4d12a46d1803b0adfa94) is 4.9M, max 39.3M, 34.4M free.
Sep 4 17:56:08.963071 systemd-modules-load[186]: Inserted module 'overlay'
Sep 4 17:56:09.038871 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 17:56:09.005485 systemd-modules-load[186]: Inserted module 'br_netfilter'
Sep 4 17:56:09.041679 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 17:56:09.042421 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:56:09.048338 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 17:56:09.053433 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 17:56:09.055024 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 4 17:56:09.057577 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 17:56:09.068478 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 4 17:56:09.069174 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 17:56:09.077861 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 17:56:09.083922 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:56:09.085374 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 17:56:09.090526 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 4 17:56:09.094427 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 4 17:56:09.105393 dracut-cmdline[218]: dracut-dracut-053
Sep 4 17:56:09.110164 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=ceda2dd706627da8006bcd6ae77ea155b2a7de6732e2c1c7ab4bed271400663d
Sep 4 17:56:09.137478 systemd-resolved[221]: Positive Trust Anchors:
Sep 4 17:56:09.138223 systemd-resolved[221]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 4 17:56:09.138286 systemd-resolved[221]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 4 17:56:09.144337 systemd-resolved[221]: Defaulting to hostname 'linux'.
Sep 4 17:56:09.146519 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 4 17:56:09.148754 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 4 17:56:09.178297 kernel: SCSI subsystem initialized
Sep 4 17:56:09.188353 kernel: Loading iSCSI transport class v2.0-870.
Sep 4 17:56:09.200422 kernel: iscsi: registered transport (tcp)
Sep 4 17:56:09.225673 kernel: iscsi: registered transport (qla4xxx)
Sep 4 17:56:09.225766 kernel: QLogic iSCSI HBA Driver
Sep 4 17:56:09.297850 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 4 17:56:09.310568 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 4 17:56:09.358401 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 4 17:56:09.359019 kernel: device-mapper: uevent: version 1.0.3
Sep 4 17:56:09.359050 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 4 17:56:09.425385 kernel: raid6: sse2x4 gen() 5084 MB/s
Sep 4 17:56:09.443329 kernel: raid6: sse2x2 gen() 6249 MB/s
Sep 4 17:56:09.460547 kernel: raid6: sse2x1 gen() 9953 MB/s
Sep 4 17:56:09.460686 kernel: raid6: using algorithm sse2x1 gen() 9953 MB/s
Sep 4 17:56:09.479604 kernel: raid6: .... xor() 7149 MB/s, rmw enabled
Sep 4 17:56:09.479743 kernel: raid6: using ssse3x2 recovery algorithm
Sep 4 17:56:09.502309 kernel: xor: measuring software checksum speed
Sep 4 17:56:09.504383 kernel: prefetch64-sse : 17623 MB/sec
Sep 4 17:56:09.504443 kernel: generic_sse : 16935 MB/sec
Sep 4 17:56:09.506006 kernel: xor: using function: prefetch64-sse (17623 MB/sec)
Sep 4 17:56:09.687064 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 4 17:56:09.705739 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 17:56:09.722636 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 17:56:09.734753 systemd-udevd[404]: Using default interface naming scheme 'v255'.
Sep 4 17:56:09.739399 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 17:56:09.750534 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 4 17:56:09.777920 dracut-pre-trigger[417]: rd.md=0: removing MD RAID activation
Sep 4 17:56:09.826666 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 17:56:09.835527 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 17:56:09.881821 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 17:56:09.892608 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 4 17:56:09.911414 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 4 17:56:09.931292 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 17:56:09.933606 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 17:56:09.936799 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 17:56:09.947503 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 4 17:56:09.961879 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 17:56:09.983309 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues
Sep 4 17:56:09.984829 kernel: virtio_blk virtio2: [vda] 41943040 512-byte logical blocks (21.5 GB/20.0 GiB)
Sep 4 17:56:10.004787 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 4 17:56:10.004936 kernel: GPT:17805311 != 41943039
Sep 4 17:56:10.004950 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 4 17:56:10.004961 kernel: GPT:17805311 != 41943039
Sep 4 17:56:10.005412 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 4 17:56:10.007625 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 4 17:56:10.017878 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 4 17:56:10.018119 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:56:10.021762 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 17:56:10.023478 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 17:56:10.023903 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:56:10.026004 kernel: libata version 3.00 loaded.
Sep 4 17:56:10.025493 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:56:10.029405 kernel: ata_piix 0000:00:01.1: version 2.13
Sep 4 17:56:10.031263 kernel: scsi host0: ata_piix
Sep 4 17:56:10.032260 kernel: scsi host1: ata_piix
Sep 4 17:56:10.034334 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14
Sep 4 17:56:10.034361 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15
Sep 4 17:56:10.033718 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:56:10.061356 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (473)
Sep 4 17:56:10.077321 kernel: BTRFS: device fsid 0dc40443-7f77-4fa7-b5e4-579d4bba0772 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (456)
Sep 4 17:56:10.084048 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 4 17:56:10.122720 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:56:10.140072 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 4 17:56:10.146191 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 4 17:56:10.151123 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 4 17:56:10.151739 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 4 17:56:10.165594 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 4 17:56:10.171379 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 17:56:10.180675 disk-uuid[503]: Primary Header is updated.
Sep 4 17:56:10.180675 disk-uuid[503]: Secondary Entries is updated.
Sep 4 17:56:10.180675 disk-uuid[503]: Secondary Header is updated.
Sep 4 17:56:10.191339 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 4 17:56:10.197456 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 4 17:56:10.196407 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:56:11.223322 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 4 17:56:11.224506 disk-uuid[507]: The operation has completed successfully.
Sep 4 17:56:11.269446 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 4 17:56:11.269792 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 4 17:56:11.322394 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 4 17:56:11.332437 sh[527]: Success
Sep 4 17:56:11.355284 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3"
Sep 4 17:56:11.457202 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 4 17:56:11.477674 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 4 17:56:11.484497 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 4 17:56:11.512320 kernel: BTRFS info (device dm-0): first mount of filesystem 0dc40443-7f77-4fa7-b5e4-579d4bba0772
Sep 4 17:56:11.512412 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 4 17:56:11.525843 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 4 17:56:11.529424 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 4 17:56:11.533663 kernel: BTRFS info (device dm-0): using free space tree
Sep 4 17:56:11.551091 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 4 17:56:11.553657 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 4 17:56:11.559559 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 4 17:56:11.570896 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 4 17:56:11.594081 kernel: BTRFS info (device vda6): first mount of filesystem b2463ce1-c756-4e78-b7f2-401dad24571d
Sep 4 17:56:11.594173 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 4 17:56:11.597458 kernel: BTRFS info (device vda6): using free space tree
Sep 4 17:56:11.612357 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 4 17:56:11.635473 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 4 17:56:11.643467 kernel: BTRFS info (device vda6): last unmount of filesystem b2463ce1-c756-4e78-b7f2-401dad24571d
Sep 4 17:56:11.655960 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 4 17:56:11.667643 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 4 17:56:11.744032 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 17:56:11.753486 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 4 17:56:11.786563 systemd-networkd[709]: lo: Link UP
Sep 4 17:56:11.786575 systemd-networkd[709]: lo: Gained carrier
Sep 4 17:56:11.787919 systemd-networkd[709]: Enumeration completed
Sep 4 17:56:11.788023 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 4 17:56:11.788621 systemd-networkd[709]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 17:56:11.788625 systemd-networkd[709]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 4 17:56:11.789740 systemd-networkd[709]: eth0: Link UP
Sep 4 17:56:11.789744 systemd-networkd[709]: eth0: Gained carrier
Sep 4 17:56:11.789751 systemd-networkd[709]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 17:56:11.790440 systemd[1]: Reached target network.target - Network.
Sep 4 17:56:11.825000 systemd-networkd[709]: eth0: DHCPv4 address 172.24.4.18/24, gateway 172.24.4.1 acquired from 172.24.4.1
Sep 4 17:56:11.840601 ignition[624]: Ignition 2.19.0
Sep 4 17:56:11.840612 ignition[624]: Stage: fetch-offline
Sep 4 17:56:11.840652 ignition[624]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:56:11.842953 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 17:56:11.840662 ignition[624]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Sep 4 17:56:11.840774 ignition[624]: parsed url from cmdline: ""
Sep 4 17:56:11.840778 ignition[624]: no config URL provided
Sep 4 17:56:11.840783 ignition[624]: reading system config file "/usr/lib/ignition/user.ign"
Sep 4 17:56:11.840792 ignition[624]: no config at "/usr/lib/ignition/user.ign"
Sep 4 17:56:11.840801 ignition[624]: failed to fetch config: resource requires networking
Sep 4 17:56:11.841000 ignition[624]: Ignition finished successfully
Sep 4 17:56:11.851890 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 4 17:56:11.865028 ignition[719]: Ignition 2.19.0
Sep 4 17:56:11.865920 ignition[719]: Stage: fetch
Sep 4 17:56:11.866620 ignition[719]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:56:11.867121 ignition[719]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Sep 4 17:56:11.867223 ignition[719]: parsed url from cmdline: ""
Sep 4 17:56:11.867228 ignition[719]: no config URL provided
Sep 4 17:56:11.867233 ignition[719]: reading system config file "/usr/lib/ignition/user.ign"
Sep 4 17:56:11.867258 ignition[719]: no config at "/usr/lib/ignition/user.ign"
Sep 4 17:56:11.867383 ignition[719]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Sep 4 17:56:11.868309 ignition[719]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Sep 4 17:56:11.868329 ignition[719]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Sep 4 17:56:12.086095 ignition[719]: GET result: OK
Sep 4 17:56:12.086411 ignition[719]: parsing config with SHA512: ad688f10a8147bbff94103d53b8a321a0fa86e4acb42cf3fed6c6264fc226cd71ffac4729f9ad5a005cced5d0afdafd7abe73546d8544ae8e2aaf283ab78339c
Sep 4 17:56:12.096475 unknown[719]: fetched base config from "system"
Sep 4 17:56:12.096504 unknown[719]: fetched base config from "system"
Sep 4 17:56:12.097480 ignition[719]: fetch: fetch complete
Sep 4 17:56:12.096520 unknown[719]: fetched user config from "openstack"
Sep 4 17:56:12.097492 ignition[719]: fetch: fetch passed
Sep 4 17:56:12.101153 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 4 17:56:12.097590 ignition[719]: Ignition finished successfully
Sep 4 17:56:12.109715 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 4 17:56:12.166186 ignition[725]: Ignition 2.19.0
Sep 4 17:56:12.166215 ignition[725]: Stage: kargs
Sep 4 17:56:12.166683 ignition[725]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:56:12.166710 ignition[725]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Sep 4 17:56:12.171617 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 4 17:56:12.168962 ignition[725]: kargs: kargs passed
Sep 4 17:56:12.169065 ignition[725]: Ignition finished successfully
Sep 4 17:56:12.182616 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 4 17:56:12.225664 ignition[731]: Ignition 2.19.0
Sep 4 17:56:12.225686 ignition[731]: Stage: disks
Sep 4 17:56:12.226188 ignition[731]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:56:12.226223 ignition[731]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Sep 4 17:56:12.231872 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 4 17:56:12.228597 ignition[731]: disks: disks passed
Sep 4 17:56:12.235182 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 4 17:56:12.228702 ignition[731]: Ignition finished successfully
Sep 4 17:56:12.237140 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 4 17:56:12.239556 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 17:56:12.242381 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 4 17:56:12.244711 systemd[1]: Reached target basic.target - Basic System.
Sep 4 17:56:12.254574 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 4 17:56:12.287929 systemd-fsck[739]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Sep 4 17:56:12.296775 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 4 17:56:12.305691 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 4 17:56:12.489349 kernel: EXT4-fs (vda9): mounted filesystem bdbe0f61-2675-40b7-b9ae-5653402e9b23 r/w with ordered data mode. Quota mode: none.
Sep 4 17:56:12.492203 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 4 17:56:12.494861 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 4 17:56:12.508328 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 17:56:12.510335 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 4 17:56:12.513973 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 4 17:56:12.519490 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Sep 4 17:56:12.520348 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 4 17:56:12.520379 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 17:56:12.528487 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 4 17:56:12.539161 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 4 17:56:12.550276 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (747)
Sep 4 17:56:12.555533 kernel: BTRFS info (device vda6): first mount of filesystem b2463ce1-c756-4e78-b7f2-401dad24571d
Sep 4 17:56:12.555562 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 4 17:56:12.555575 kernel: BTRFS info (device vda6): using free space tree
Sep 4 17:56:12.563259 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 4 17:56:12.567418 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 17:56:12.636031 initrd-setup-root[775]: cut: /sysroot/etc/passwd: No such file or directory
Sep 4 17:56:12.646640 initrd-setup-root[782]: cut: /sysroot/etc/group: No such file or directory
Sep 4 17:56:12.652992 initrd-setup-root[789]: cut: /sysroot/etc/shadow: No such file or directory
Sep 4 17:56:12.657913 initrd-setup-root[796]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 4 17:56:12.762589 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 4 17:56:12.767327 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 4 17:56:12.770413 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 4 17:56:12.779428 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 4 17:56:12.782544 kernel: BTRFS info (device vda6): last unmount of filesystem b2463ce1-c756-4e78-b7f2-401dad24571d
Sep 4 17:56:12.804501 ignition[864]: INFO : Ignition 2.19.0
Sep 4 17:56:12.807311 ignition[864]: INFO : Stage: mount
Sep 4 17:56:12.807826 ignition[864]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 17:56:12.807826 ignition[864]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Sep 4 17:56:12.812257 ignition[864]: INFO : mount: mount passed
Sep 4 17:56:12.812257 ignition[864]: INFO : Ignition finished successfully
Sep 4 17:56:12.815207 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 4 17:56:12.818133 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 4 17:56:13.002032 systemd-networkd[709]: eth0: Gained IPv6LL
Sep 4 17:56:19.733784 coreos-metadata[749]: Sep 04 17:56:19.733 WARN failed to locate config-drive, using the metadata service API instead
Sep 4 17:56:19.774899 coreos-metadata[749]: Sep 04 17:56:19.774 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Sep 4 17:56:19.788219 coreos-metadata[749]: Sep 04 17:56:19.788 INFO Fetch successful
Sep 4 17:56:19.788219 coreos-metadata[749]: Sep 04 17:56:19.788 INFO wrote hostname ci-4054-1-0-2-9cde805234.novalocal to /sysroot/etc/hostname
Sep 4 17:56:19.792466 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Sep 4 17:56:19.792839 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Sep 4 17:56:19.803397 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 4 17:56:19.846617 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 17:56:19.861382 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (881)
Sep 4 17:56:19.869336 kernel: BTRFS info (device vda6): first mount of filesystem b2463ce1-c756-4e78-b7f2-401dad24571d
Sep 4 17:56:19.869408 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 4 17:56:19.871773 kernel: BTRFS info (device vda6): using free space tree
Sep 4 17:56:19.880316 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 4 17:56:19.885421 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 17:56:19.928371 ignition[899]: INFO : Ignition 2.19.0
Sep 4 17:56:19.928371 ignition[899]: INFO : Stage: files
Sep 4 17:56:19.931465 ignition[899]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 17:56:19.931465 ignition[899]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Sep 4 17:56:19.931465 ignition[899]: DEBUG : files: compiled without relabeling support, skipping
Sep 4 17:56:19.936813 ignition[899]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 4 17:56:19.936813 ignition[899]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 4 17:56:19.940835 ignition[899]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 4 17:56:19.940835 ignition[899]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 4 17:56:19.940835 ignition[899]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 4 17:56:19.939733 unknown[899]: wrote ssh authorized keys file for user: core
Sep 4 17:56:19.948595 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 4 17:56:19.948595 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Sep 4 17:56:20.649735 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 4 17:56:21.005212 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 4 17:56:21.005212 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 4 17:56:21.009725 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 4 17:56:21.009725 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 17:56:21.009725 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 17:56:21.009725 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 17:56:21.009725 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 17:56:21.009725 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 17:56:21.009725 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 17:56:21.009725 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 17:56:21.009725 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 17:56:21.009725 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Sep 4 17:56:21.009725 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Sep 4 17:56:21.009725 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Sep 4 17:56:21.009725 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Sep 4 17:56:21.544532 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 4 17:56:23.224194 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Sep 4 17:56:23.224194 ignition[899]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 4 17:56:23.293170 ignition[899]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 17:56:23.293170 ignition[899]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 17:56:23.293170 ignition[899]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 4 17:56:23.293170 ignition[899]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Sep 4 17:56:23.304158 ignition[899]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Sep 4 17:56:23.304158 ignition[899]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 17:56:23.304158 ignition[899]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 17:56:23.304158 ignition[899]: INFO : files: files passed
Sep 4 17:56:23.304158 ignition[899]: INFO : Ignition finished successfully
Sep 4 17:56:23.297080 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 4 17:56:23.311483 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 4 17:56:23.317675 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 4 17:56:23.319644 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 4 17:56:23.319775 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 4 17:56:23.350283 initrd-setup-root-after-ignition[927]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 17:56:23.352896 initrd-setup-root-after-ignition[927]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 17:56:23.356092 initrd-setup-root-after-ignition[931]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 17:56:23.357798 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 17:56:23.360735 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 4 17:56:23.368565 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 4 17:56:23.417769 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 4 17:56:23.418005 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 4 17:56:23.422143 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 4 17:56:23.424550 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 4 17:56:23.427361 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 4 17:56:23.436664 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 4 17:56:23.478704 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 17:56:23.487530 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 4 17:56:23.519720 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 4 17:56:23.521016 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 17:56:23.523608 systemd[1]: Stopped target timers.target - Timer Units.
Sep 4 17:56:23.525944 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 4 17:56:23.526190 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 17:56:23.528851 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 4 17:56:23.530339 systemd[1]: Stopped target basic.target - Basic System.
Sep 4 17:56:23.532634 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 4 17:56:23.534723 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 17:56:23.536655 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 4 17:56:23.539045 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 4 17:56:23.541436 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 17:56:23.543898 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 4 17:56:23.546174 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 4 17:56:23.548594 systemd[1]: Stopped target swap.target - Swaps.
Sep 4 17:56:23.550780 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 4 17:56:23.551103 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 17:56:23.553157 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 4 17:56:23.554267 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 17:56:23.555800 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 4 17:56:23.555926 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 17:56:23.557802 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 4 17:56:23.557974 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 4 17:56:23.560664 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 4 17:56:23.560832 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 17:56:23.561739 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 4 17:56:23.561857 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 4 17:56:23.572339 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 4 17:56:23.576428 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 4 17:56:23.579586 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 4 17:56:23.579824 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 17:56:23.584692 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 4 17:56:23.585671 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 17:56:23.593376 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 4 17:56:23.594393 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 4 17:56:23.596086 ignition[951]: INFO : Ignition 2.19.0
Sep 4 17:56:23.596086 ignition[951]: INFO : Stage: umount
Sep 4 17:56:23.596086 ignition[951]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 17:56:23.596086 ignition[951]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Sep 4 17:56:23.596086 ignition[951]: INFO : umount: umount passed
Sep 4 17:56:23.603895 ignition[951]: INFO : Ignition finished successfully
Sep 4 17:56:23.601340 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 4 17:56:23.601455 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 4 17:56:23.602519 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 4 17:56:23.602630 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 4 17:56:23.603205 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 4 17:56:23.605386 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 4 17:56:23.606637 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 4 17:56:23.606682 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 4 17:56:23.609551 systemd[1]: Stopped target network.target - Network.
Sep 4 17:56:23.610857 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 4 17:56:23.610907 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 17:56:23.611454 systemd[1]: Stopped target paths.target - Path Units.
Sep 4 17:56:23.612497 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 4 17:56:23.616298 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 17:56:23.616883 systemd[1]: Stopped target slices.target - Slice Units.
Sep 4 17:56:23.618114 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 4 17:56:23.619150 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 4 17:56:23.619187 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 17:56:23.620102 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 4 17:56:23.620135 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 17:56:23.621036 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 4 17:56:23.621079 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 4 17:56:23.622099 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 4 17:56:23.622142 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 4 17:56:23.623260 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 4 17:56:23.624380 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 4 17:56:23.626344 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 4 17:56:23.626866 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 4 17:56:23.626946 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 4 17:56:23.628042 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 4 17:56:23.628111 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 4 17:56:23.628618 systemd-networkd[709]: eth0: DHCPv6 lease lost
Sep 4 17:56:23.631982 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 4 17:56:23.632078 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 4 17:56:23.633985 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 4 17:56:23.634094 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 4 17:56:23.635487 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 4 17:56:23.635670 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 17:56:23.641631 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 4 17:56:23.642187 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 4 17:56:23.642272 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 17:56:23.642925 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 4 17:56:23.642972 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 4 17:56:23.644046 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 4 17:56:23.644090 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 4 17:56:23.645148 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 4 17:56:23.645188 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 17:56:23.646515 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 17:56:23.655483 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 4 17:56:23.655600 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 4 17:56:23.656622 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 4 17:56:23.656758 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 17:56:23.658531 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 4 17:56:23.658587 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 4 17:56:23.659183 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 4 17:56:23.659217 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 17:56:23.660503 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 4 17:56:23.660548 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 17:56:23.662324 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 4 17:56:23.662368 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 4 17:56:23.663492 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 4 17:56:23.663536 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:56:23.675423 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 4 17:56:23.676694 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 4 17:56:23.676776 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 17:56:23.677995 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 4 17:56:23.678057 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 17:56:23.678905 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 4 17:56:23.678950 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 17:56:23.680143 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 17:56:23.680185 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:56:23.681812 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 4 17:56:23.681913 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 4 17:56:23.683092 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 4 17:56:23.692632 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 4 17:56:23.699767 systemd[1]: Switching root.
Sep 4 17:56:23.735967 systemd-journald[185]: Journal stopped
Sep 4 17:56:25.322137 systemd-journald[185]: Received SIGTERM from PID 1 (systemd).
Sep 4 17:56:25.322199 kernel: SELinux: policy capability network_peer_controls=1
Sep 4 17:56:25.322220 kernel: SELinux: policy capability open_perms=1
Sep 4 17:56:25.322232 kernel: SELinux: policy capability extended_socket_class=1
Sep 4 17:56:25.322263 kernel: SELinux: policy capability always_check_network=0
Sep 4 17:56:25.322277 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 4 17:56:25.322289 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 4 17:56:25.322302 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 4 17:56:25.322318 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 4 17:56:25.322330 kernel: audit: type=1403 audit(1725472584.220:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 4 17:56:25.322345 systemd[1]: Successfully loaded SELinux policy in 62.169ms.
Sep 4 17:56:25.322370 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 32.269ms.
Sep 4 17:56:25.322384 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 4 17:56:25.322397 systemd[1]: Detected virtualization kvm.
Sep 4 17:56:25.322413 systemd[1]: Detected architecture x86-64.
Sep 4 17:56:25.322425 systemd[1]: Detected first boot.
Sep 4 17:56:25.322440 systemd[1]: Hostname set to <ci-4054-1-0-2-9cde805234.novalocal>.
Sep 4 17:56:25.322453 systemd[1]: Initializing machine ID from VM UUID.
Sep 4 17:56:25.322465 zram_generator::config[992]: No configuration found.
Sep 4 17:56:25.322479 systemd[1]: Populated /etc with preset unit settings.
Sep 4 17:56:25.322492 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 4 17:56:25.322504 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 4 17:56:25.322518 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 4 17:56:25.322536 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 4 17:56:25.322550 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 4 17:56:25.322565 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 4 17:56:25.322578 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 4 17:56:25.322591 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 4 17:56:25.322606 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 4 17:56:25.322619 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 4 17:56:25.322632 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 4 17:56:25.322649 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 17:56:25.322662 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 17:56:25.322677 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 4 17:56:25.322690 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 4 17:56:25.322703 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 4 17:56:25.322715 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 17:56:25.322728 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 4 17:56:25.322740 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 17:56:25.322753 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 4 17:56:25.322766 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 4 17:56:25.322782 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 4 17:56:25.322794 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 4 17:56:25.322806 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 17:56:25.322820 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 17:56:25.322840 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 17:56:25.322858 systemd[1]: Reached target swap.target - Swaps.
Sep 4 17:56:25.322878 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 4 17:56:25.322898 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 4 17:56:25.322921 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 17:56:25.322937 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 17:56:25.322950 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 17:56:25.322964 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 4 17:56:25.322978 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 4 17:56:25.322991 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 4 17:56:25.323004 systemd[1]: Mounting media.mount - External Media Directory...
Sep 4 17:56:25.323018 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 17:56:25.323034 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 4 17:56:25.323048 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 4 17:56:25.323061 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 4 17:56:25.323075 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 4 17:56:25.323088 systemd[1]: Reached target machines.target - Containers.
Sep 4 17:56:25.323102 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 4 17:56:25.323116 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 17:56:25.323129 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 17:56:25.323142 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 4 17:56:25.323158 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 17:56:25.323172 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 4 17:56:25.323185 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 17:56:25.323198 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 4 17:56:25.323213 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 17:56:25.323227 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 4 17:56:25.323262 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 4 17:56:25.323278 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 4 17:56:25.323294 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 4 17:56:25.323308 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 4 17:56:25.323327 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 17:56:25.323347 kernel: loop: module loaded
Sep 4 17:56:25.323362 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 17:56:25.323376 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 4 17:56:25.323389 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 4 17:56:25.323402 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 17:56:25.323416 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 4 17:56:25.323433 systemd[1]: Stopped verity-setup.service.
Sep 4 17:56:25.323446 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 17:56:25.323478 systemd-journald[1080]: Collecting audit messages is disabled.
Sep 4 17:56:25.323509 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 4 17:56:25.323522 systemd-journald[1080]: Journal started
Sep 4 17:56:25.323549 systemd-journald[1080]: Runtime Journal (/run/log/journal/aebaa87ea37f4d12a46d1803b0adfa94) is 4.9M, max 39.3M, 34.4M free.
Sep 4 17:56:24.979175 systemd[1]: Queued start job for default target multi-user.target.
Sep 4 17:56:25.010929 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 4 17:56:25.011910 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 4 17:56:25.325302 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 17:56:25.343628 kernel: fuse: init (API version 7.39)
Sep 4 17:56:25.328004 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 4 17:56:25.329904 systemd[1]: Mounted media.mount - External Media Directory.
Sep 4 17:56:25.330513 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 4 17:56:25.331070 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 4 17:56:25.333386 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 4 17:56:25.334126 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 17:56:25.334910 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 4 17:56:25.335042 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 4 17:56:25.335770 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 17:56:25.335883 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 17:56:25.336890 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 17:56:25.337046 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 17:56:25.338503 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 17:56:25.338628 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 17:56:25.340268 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 17:56:25.340960 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 4 17:56:25.347593 kernel: ACPI: bus type drm_connector registered
Sep 4 17:56:25.354313 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 4 17:56:25.354484 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 4 17:56:25.355371 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 4 17:56:25.355505 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 4 17:56:25.356335 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 4 17:56:25.360918 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 4 17:56:25.370274 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 4 17:56:25.377437 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 4 17:56:25.378511 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 4 17:56:25.378618 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 17:56:25.383789 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Sep 4 17:56:25.391066 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 4 17:56:25.398873 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 4 17:56:25.399923 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 17:56:25.404741 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 4 17:56:25.414464 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 4 17:56:25.415639 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 4 17:56:25.417414 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 4 17:56:25.418688 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 4 17:56:25.422415 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 17:56:25.431538 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 4 17:56:25.435436 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 4 17:56:25.439392 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 4 17:56:25.440379 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 4 17:56:25.442411 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 4 17:56:25.444558 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 4 17:56:25.458167 systemd-journald[1080]: Time spent on flushing to /var/log/journal/aebaa87ea37f4d12a46d1803b0adfa94 is 75.793ms for 938 entries.
Sep 4 17:56:25.458167 systemd-journald[1080]: System Journal (/var/log/journal/aebaa87ea37f4d12a46d1803b0adfa94) is 8.0M, max 584.8M, 576.8M free.
Sep 4 17:56:25.575091 systemd-journald[1080]: Received client request to flush runtime journal.
Sep 4 17:56:25.575146 kernel: loop0: detected capacity change from 0 to 140728
Sep 4 17:56:25.507374 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 4 17:56:25.509160 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 4 17:56:25.518469 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Sep 4 17:56:25.520168 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 17:56:25.546169 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 17:56:25.554363 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 4 17:56:25.562275 systemd-tmpfiles[1124]: ACLs are not supported, ignoring.
Sep 4 17:56:25.562292 systemd-tmpfiles[1124]: ACLs are not supported, ignoring.
Sep 4 17:56:25.576817 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 17:56:25.587332 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 4 17:56:25.588911 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 4 17:56:25.598912 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 4 17:56:25.610305 udevadm[1136]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Sep 4 17:56:25.640286 kernel: loop1: detected capacity change from 0 to 210664
Sep 4 17:56:25.649124 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 4 17:56:25.649862 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Sep 4 17:56:25.691987 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 4 17:56:25.703504 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 4 17:56:25.747206 systemd-tmpfiles[1146]: ACLs are not supported, ignoring.
Sep 4 17:56:25.747234 systemd-tmpfiles[1146]: ACLs are not supported, ignoring.
Sep 4 17:56:25.754947 kernel: loop2: detected capacity change from 0 to 89336
Sep 4 17:56:25.753432 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 17:56:25.810305 kernel: loop3: detected capacity change from 0 to 8
Sep 4 17:56:25.831279 kernel: loop4: detected capacity change from 0 to 140728
Sep 4 17:56:25.969412 kernel: loop5: detected capacity change from 0 to 210664
Sep 4 17:56:26.039420 kernel: loop6: detected capacity change from 0 to 89336
Sep 4 17:56:26.119693 kernel: loop7: detected capacity change from 0 to 8
Sep 4 17:56:26.119624 (sd-merge)[1152]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Sep 4 17:56:26.121299 (sd-merge)[1152]: Merged extensions into '/usr'.
Sep 4 17:56:26.129679 systemd[1]: Reloading requested from client PID 1123 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 4 17:56:26.129709 systemd[1]: Reloading...
Sep 4 17:56:26.206267 zram_generator::config[1173]: No configuration found.
Sep 4 17:56:26.423233 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 17:56:26.483800 systemd[1]: Reloading finished in 353 ms.
Sep 4 17:56:26.509447 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 4 17:56:26.518420 systemd[1]: Starting ensure-sysext.service...
Sep 4 17:56:26.522167 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 4 17:56:26.538457 systemd[1]: Reloading requested from client PID 1231 ('systemctl') (unit ensure-sysext.service)...
Sep 4 17:56:26.538472 systemd[1]: Reloading...
Sep 4 17:56:26.587438 systemd-tmpfiles[1232]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 4 17:56:26.587785 systemd-tmpfiles[1232]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 4 17:56:26.588652 systemd-tmpfiles[1232]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 4 17:56:26.588945 systemd-tmpfiles[1232]: ACLs are not supported, ignoring.
Sep 4 17:56:26.589001 systemd-tmpfiles[1232]: ACLs are not supported, ignoring.
Sep 4 17:56:26.594819 systemd-tmpfiles[1232]: Detected autofs mount point /boot during canonicalization of boot.
Sep 4 17:56:26.595039 systemd-tmpfiles[1232]: Skipping /boot
Sep 4 17:56:26.603917 systemd-tmpfiles[1232]: Detected autofs mount point /boot during canonicalization of boot.
Sep 4 17:56:26.604155 systemd-tmpfiles[1232]: Skipping /boot
Sep 4 17:56:26.608337 zram_generator::config[1261]: No configuration found.
Sep 4 17:56:26.710926 ldconfig[1118]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 4 17:56:26.778008 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 17:56:26.839335 systemd[1]: Reloading finished in 300 ms.
Sep 4 17:56:26.856062 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 4 17:56:26.857225 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 4 17:56:26.862625 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 17:56:26.879418 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 4 17:56:26.896499 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 4 17:56:26.899418 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 4 17:56:26.903899 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 4 17:56:26.916443 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 17:56:26.921533 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 4 17:56:26.928950 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 4 17:56:26.932349 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 17:56:26.932566 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 17:56:26.940576 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 17:56:26.945359 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 17:56:26.955549 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 17:56:26.956352 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 17:56:26.956515 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 17:56:26.960360 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 17:56:26.960556 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 17:56:26.960733 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 17:56:26.960852 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 17:56:26.964863 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 17:56:26.965102 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 17:56:26.974925 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 4 17:56:26.977150 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 17:56:26.977392 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 17:56:26.981732 systemd[1]: Finished ensure-sysext.service.
Sep 4 17:56:26.998441 systemd-udevd[1322]: Using default interface naming scheme 'v255'.
Sep 4 17:56:26.998802 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 4 17:56:27.000565 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 17:56:27.001356 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 17:56:27.019192 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 4 17:56:27.019562 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 4 17:56:27.028805 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 17:56:27.028961 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 17:56:27.030600 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 4 17:56:27.036527 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 17:56:27.036987 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 17:56:27.039288 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 4 17:56:27.048638 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 4 17:56:27.053648 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 4 17:56:27.069057 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 4 17:56:27.081471 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 4 17:56:27.083307 augenrules[1357]: No rules
Sep 4 17:56:27.083020 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 4 17:56:27.084675 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 4 17:56:27.090478 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 4 17:56:27.103030 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 17:56:27.113496 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 4 17:56:27.131446 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 4 17:56:27.161420 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 4 17:56:27.162145 systemd[1]: Reached target time-set.target - System Time Set.
Sep 4 17:56:27.210273 systemd-networkd[1366]: lo: Link UP
Sep 4 17:56:27.210283 systemd-networkd[1366]: lo: Gained carrier
Sep 4 17:56:27.210812 systemd-networkd[1366]: Enumeration completed
Sep 4 17:56:27.210925 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 4 17:56:27.214462 systemd-resolved[1321]: Positive Trust Anchors:
Sep 4 17:56:27.214482 systemd-resolved[1321]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 4 17:56:27.214527 systemd-resolved[1321]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 4 17:56:27.218496 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 4 17:56:27.225896 systemd-resolved[1321]: Using system hostname 'ci-4054-1-0-2-9cde805234.novalocal'.
Sep 4 17:56:27.228889 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 4 17:56:27.229858 systemd[1]: Reached target network.target - Network.
Sep 4 17:56:27.230609 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 4 17:56:27.252281 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1369)
Sep 4 17:56:27.259816 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 4 17:56:27.260286 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1378)
Sep 4 17:56:27.283321 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1378)
Sep 4 17:56:27.309467 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Sep 4 17:56:27.316376 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 4 17:56:27.321266 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Sep 4 17:56:27.323439 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 4 17:56:27.346264 kernel: ACPI: button: Power Button [PWRF]
Sep 4 17:56:27.347440 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 4 17:56:27.347815 systemd-networkd[1366]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 17:56:27.348403 systemd-networkd[1366]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 4 17:56:27.349919 systemd-networkd[1366]: eth0: Link UP
Sep 4 17:56:27.350194 systemd-networkd[1366]: eth0: Gained carrier
Sep 4 17:56:27.350213 systemd-networkd[1366]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 17:56:27.363372 systemd-networkd[1366]: eth0: DHCPv4 address 172.24.4.18/24, gateway 172.24.4.1 acquired from 172.24.4.1
Sep 4 17:56:27.364502 systemd-timesyncd[1337]: Network configuration changed, trying to establish connection.
Sep 4 17:56:27.368474 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Sep 4 17:56:27.400660 kernel: mousedev: PS/2 mouse device common for all mice
Sep 4 17:56:27.399084 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:56:27.426300 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Sep 4 17:56:27.428277 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Sep 4 17:56:27.432894 kernel: Console: switching to colour dummy device 80x25
Sep 4 17:56:27.432979 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Sep 4 17:56:27.432999 kernel: [drm] features: -context_init
Sep 4 17:56:27.433016 kernel: [drm] number of scanouts: 1
Sep 4 17:56:27.433099 kernel: [drm] number of cap sets: 0
Sep 4 17:56:27.437272 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Sep 4 17:56:27.440133 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 17:56:27.440669 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:56:27.448362 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Sep 4 17:56:27.448608 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:56:27.455080 kernel: Console: switching to colour frame buffer device 128x48
Sep 4 17:56:27.459696 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Sep 4 17:56:27.467122 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 17:56:27.467578 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:56:27.476403 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:56:27.478332 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep 4 17:56:27.481390 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Sep 4 17:56:27.511944 lvm[1409]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 4 17:56:27.542144 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Sep 4 17:56:27.543639 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 17:56:27.552578 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep 4 17:56:27.563702 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:56:27.564037 lvm[1414]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 4 17:56:27.565228 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 4 17:56:27.565464 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 4 17:56:27.565595 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 4 17:56:27.565893 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 4 17:56:27.566086 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 4 17:56:27.566178 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 4 17:56:27.567669 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 4 17:56:27.567700 systemd[1]: Reached target paths.target - Path Units.
Sep 4 17:56:27.568352 systemd[1]: Reached target timers.target - Timer Units.
Sep 4 17:56:27.570190 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 4 17:56:27.572578 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 4 17:56:27.579658 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 4 17:56:27.580357 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 4 17:56:27.581125 systemd[1]: Reached target sockets.target - Socket Units.
Sep 4 17:56:27.583199 systemd[1]: Reached target basic.target - Basic System.
Sep 4 17:56:27.589584 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 4 17:56:27.589615 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 4 17:56:27.598430 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 4 17:56:27.604444 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Sep 4 17:56:27.609995 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 4 17:56:27.615627 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 4 17:56:27.626460 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 4 17:56:27.631775 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 4 17:56:27.634453 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 4 17:56:27.642426 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 4 17:56:27.645836 jq[1424]: false
Sep 4 17:56:27.649531 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 4 17:56:27.655442 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 4 17:56:27.669482 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 4 17:56:27.674969 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 4 17:56:27.675716 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 4 17:56:27.681435 systemd[1]: Starting update-engine.service - Update Engine...
Sep 4 17:56:27.686413 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 4 17:56:27.690306 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Sep 4 17:56:27.703831 dbus-daemon[1421]: [system] SELinux support is enabled
Sep 4 17:56:27.705702 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 4 17:56:27.712567 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 4 17:56:27.712832 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 4 17:56:27.722964 jq[1436]: true
Sep 4 17:56:27.713336 systemd[1]: motdgen.service: Deactivated successfully.
Sep 4 17:56:27.714471 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 4 17:56:27.720411 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 4 17:56:27.720651 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 4 17:56:27.740345 extend-filesystems[1425]: Found loop4
Sep 4 17:56:27.740345 extend-filesystems[1425]: Found loop5
Sep 4 17:56:27.740345 extend-filesystems[1425]: Found loop6
Sep 4 17:56:27.740345 extend-filesystems[1425]: Found loop7
Sep 4 17:56:27.740345 extend-filesystems[1425]: Found vda
Sep 4 17:56:27.740345 extend-filesystems[1425]: Found vda1
Sep 4 17:56:27.740345 extend-filesystems[1425]: Found vda2
Sep 4 17:56:27.740345 extend-filesystems[1425]: Found vda3
Sep 4 17:56:27.740345 extend-filesystems[1425]: Found usr
Sep 4 17:56:27.740345 extend-filesystems[1425]: Found vda4
Sep 4 17:56:27.740345 extend-filesystems[1425]: Found vda6
Sep 4 17:56:27.824776 update_engine[1433]: I0904 17:56:27.732084 1433 main.cc:92] Flatcar Update Engine starting
Sep 4 17:56:27.824776 update_engine[1433]: I0904 17:56:27.761461 1433 update_check_scheduler.cc:74] Next update check in 3m6s
Sep 4 17:56:27.825184 extend-filesystems[1425]: Found vda7
Sep 4 17:56:27.825184 extend-filesystems[1425]: Found vda9
Sep 4 17:56:27.825184 extend-filesystems[1425]: Checking size of /dev/vda9
Sep 4 17:56:27.825184 extend-filesystems[1425]: Resized partition /dev/vda9
Sep 4 17:56:27.762877 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 4 17:56:27.833231 jq[1445]: true
Sep 4 17:56:27.833593 extend-filesystems[1463]: resize2fs 1.47.1 (20-May-2024)
Sep 4 17:56:27.762925 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 4 17:56:27.773417 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 4 17:56:27.834505 tar[1444]: linux-amd64/helm
Sep 4 17:56:27.773454 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 4 17:56:27.777924 systemd[1]: Started update-engine.service - Update Engine.
Sep 4 17:56:27.782605 (ntainerd)[1447]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 4 17:56:27.799450 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 4 17:56:27.839611 systemd-logind[1432]: New seat seat0.
Sep 4 17:56:27.848308 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 4635643 blocks
Sep 4 17:56:27.857136 systemd-logind[1432]: Watching system buttons on /dev/input/event1 (Power Button)
Sep 4 17:56:27.857162 systemd-logind[1432]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 4 17:56:27.858827 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 4 17:56:27.897401 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1376)
Sep 4 17:56:28.017683 locksmithd[1457]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 4 17:56:28.064454 kernel: EXT4-fs (vda9): resized filesystem to 4635643
Sep 4 17:56:28.136602 bash[1481]: Updated "/home/core/.ssh/authorized_keys"
Sep 4 17:56:28.135736 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 4 17:56:28.165285 extend-filesystems[1463]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 4 17:56:28.165285 extend-filesystems[1463]: old_desc_blocks = 1, new_desc_blocks = 3
Sep 4 17:56:28.165285 extend-filesystems[1463]: The filesystem on /dev/vda9 is now 4635643 (4k) blocks long.
Sep 4 17:56:28.150582 systemd[1]: Starting sshkeys.service...
Sep 4 17:56:28.175145 extend-filesystems[1425]: Resized filesystem in /dev/vda9
Sep 4 17:56:28.155936 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 4 17:56:28.156598 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 4 17:56:28.203107 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Sep 4 17:56:28.215663 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Sep 4 17:56:28.278354 sshd_keygen[1443]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 4 17:56:28.324675 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 4 17:56:28.359107 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 4 17:56:28.375835 systemd[1]: issuegen.service: Deactivated successfully.
Sep 4 17:56:28.376136 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 4 17:56:28.394082 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 4 17:56:28.436818 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 4 17:56:28.444493 containerd[1447]: time="2024-09-04T17:56:28.444227684Z" level=info msg="starting containerd" revision=8ccfc03e4e2b73c22899202ae09d0caf906d3863 version=v1.7.20
Sep 4 17:56:28.454779 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 4 17:56:28.466930 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 4 17:56:28.472998 systemd[1]: Reached target getty.target - Login Prompts.
Sep 4 17:56:28.485470 containerd[1447]: time="2024-09-04T17:56:28.485168949Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 4 17:56:28.489013 containerd[1447]: time="2024-09-04T17:56:28.488389269Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.48-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 4 17:56:28.489013 containerd[1447]: time="2024-09-04T17:56:28.488441637Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 4 17:56:28.489013 containerd[1447]: time="2024-09-04T17:56:28.488466704Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 4 17:56:28.489013 containerd[1447]: time="2024-09-04T17:56:28.488671909Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Sep 4 17:56:28.489013 containerd[1447]: time="2024-09-04T17:56:28.488693179Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Sep 4 17:56:28.489013 containerd[1447]: time="2024-09-04T17:56:28.488767298Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Sep 4 17:56:28.489013 containerd[1447]: time="2024-09-04T17:56:28.488784119Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 4 17:56:28.489013 containerd[1447]: time="2024-09-04T17:56:28.488959348Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 4 17:56:28.489013 containerd[1447]: time="2024-09-04T17:56:28.488979145Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 4 17:56:28.489013 containerd[1447]: time="2024-09-04T17:56:28.488994804Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Sep 4 17:56:28.489013 containerd[1447]: time="2024-09-04T17:56:28.489007368Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 4 17:56:28.489329 containerd[1447]: time="2024-09-04T17:56:28.489229304Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 4 17:56:28.489658 containerd[1447]: time="2024-09-04T17:56:28.489515100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 4 17:56:28.489658 containerd[1447]: time="2024-09-04T17:56:28.489624455Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 4 17:56:28.489658 containerd[1447]: time="2024-09-04T17:56:28.489642730Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 4 17:56:28.490304 containerd[1447]: time="2024-09-04T17:56:28.489754950Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 4 17:56:28.490304 containerd[1447]: time="2024-09-04T17:56:28.489818459Z" level=info msg="metadata content store policy set" policy=shared
Sep 4 17:56:28.507199 containerd[1447]: time="2024-09-04T17:56:28.507153501Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 4 17:56:28.507702 containerd[1447]: time="2024-09-04T17:56:28.507462100Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 4 17:56:28.507702 containerd[1447]: time="2024-09-04T17:56:28.507562178Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Sep 4 17:56:28.507702 containerd[1447]: time="2024-09-04T17:56:28.507585962Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Sep 4 17:56:28.507702 containerd[1447]: time="2024-09-04T17:56:28.507634754Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 4 17:56:28.508346 containerd[1447]: time="2024-09-04T17:56:28.508127718Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 4 17:56:28.508798 containerd[1447]: time="2024-09-04T17:56:28.508748102Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 4 17:56:28.509143 containerd[1447]: time="2024-09-04T17:56:28.509052773Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Sep 4 17:56:28.509143 containerd[1447]: time="2024-09-04T17:56:28.509079012Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Sep 4 17:56:28.509435 containerd[1447]: time="2024-09-04T17:56:28.509269129Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Sep 4 17:56:28.509435 containerd[1447]: time="2024-09-04T17:56:28.509296641Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 4 17:56:28.509435 containerd[1447]: time="2024-09-04T17:56:28.509312951Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 4 17:56:28.509652 containerd[1447]: time="2024-09-04T17:56:28.509404192Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 4 17:56:28.509652 containerd[1447]: time="2024-09-04T17:56:28.509556688Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 4 17:56:28.509652 containerd[1447]: time="2024-09-04T17:56:28.509577407Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 4 17:56:28.509652 containerd[1447]: time="2024-09-04T17:56:28.509592776Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 4 17:56:28.510146 containerd[1447]: time="2024-09-04T17:56:28.509772082Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 4 17:56:28.510146 containerd[1447]: time="2024-09-04T17:56:28.509794104Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 4 17:56:28.510146 containerd[1447]: time="2024-09-04T17:56:28.509819952Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep 4 17:56:28.510146 containerd[1447]: time="2024-09-04T17:56:28.509838006Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 4 17:56:28.510146 containerd[1447]: time="2024-09-04T17:56:28.509853465Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 4 17:56:28.510146 containerd[1447]: time="2024-09-04T17:56:28.509869735Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 4 17:56:28.510146 containerd[1447]: time="2024-09-04T17:56:28.509890314Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 4 17:56:28.510146 containerd[1447]: time="2024-09-04T17:56:28.509908759Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 4 17:56:28.510146 containerd[1447]: time="2024-09-04T17:56:28.509923476Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 4 17:56:28.510146 containerd[1447]: time="2024-09-04T17:56:28.509939106Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 4 17:56:28.510146 containerd[1447]: time="2024-09-04T17:56:28.509955065Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Sep 4 17:56:28.510146 containerd[1447]: time="2024-09-04T17:56:28.509973490Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Sep 4 17:56:28.510146 containerd[1447]: time="2024-09-04T17:56:28.509988288Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep 4 17:56:28.510146 containerd[1447]: time="2024-09-04T17:56:28.510004127Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Sep 4 17:56:28.510515 containerd[1447]: time="2024-09-04T17:56:28.510022031Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep 4 17:56:28.510515 containerd[1447]: time="2024-09-04T17:56:28.510042550Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Sep 4 17:56:28.510515 containerd[1447]: time="2024-09-04T17:56:28.510067617Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Sep 4 17:56:28.510515 containerd[1447]: time="2024-09-04T17:56:28.510083206Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Sep 4 17:56:28.510515 containerd[1447]: time="2024-09-04T17:56:28.510096150Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep 4 17:56:28.511794 containerd[1447]: time="2024-09-04T17:56:28.510653135Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep 4 17:56:28.511794 containerd[1447]: time="2024-09-04T17:56:28.510686327Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Sep 4 17:56:28.511794 containerd[1447]: time="2024-09-04T17:56:28.510781395Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Sep 4 17:56:28.511794 containerd[1447]: time="2024-09-04T17:56:28.510814968Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Sep 4 17:56:28.511794 containerd[1447]: time="2024-09-04T17:56:28.510829485Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep 4 17:56:28.511794 containerd[1447]: time="2024-09-04T17:56:28.510847860Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Sep 4 17:56:28.511794 containerd[1447]: time="2024-09-04T17:56:28.510864030Z" level=info msg="NRI interface is disabled by configuration."
Sep 4 17:56:28.511794 containerd[1447]: time="2024-09-04T17:56:28.510875852Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Sep 4 17:56:28.511987 containerd[1447]: time="2024-09-04T17:56:28.511197305Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Sep 4 17:56:28.511987 containerd[1447]: time="2024-09-04T17:56:28.511290119Z" level=info msg="Connect containerd service"
Sep 4 17:56:28.511987 containerd[1447]: time="2024-09-04T17:56:28.511317641Z" level=info msg="using legacy CRI server"
Sep 4 17:56:28.511987 containerd[1447]: time="2024-09-04T17:56:28.511325446Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 4 17:56:28.511987 containerd[1447]: time="2024-09-04T17:56:28.511415424Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Sep 4 17:56:28.512693 containerd[1447]: time="2024-09-04T17:56:28.512669035Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 4 17:56:28.512891 containerd[1447]: time="2024-09-04T17:56:28.512855145Z" level=info msg="Start subscribing containerd event"
Sep 4 17:56:28.512979 containerd[1447]: time="2024-09-04T17:56:28.512964389Z" level=info msg="Start recovering state"
Sep 4 17:56:28.513090 containerd[1447]: time="2024-09-04T17:56:28.513075548Z" level=info msg="Start event monitor"
Sep 4 17:56:28.513532 containerd[1447]: time="2024-09-04T17:56:28.513515473Z" level=info msg="Start snapshots syncer"
Sep 4 17:56:28.513597 containerd[1447]: time="2024-09-04T17:56:28.513583721Z" level=info msg="Start cni network conf syncer for default"
Sep 4 17:56:28.513651 containerd[1447]: time="2024-09-04T17:56:28.513639185Z" level=info msg="Start streaming server"
Sep 4 17:56:28.513806 containerd[1447]: time="2024-09-04T17:56:28.513481359Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 4 17:56:28.513955 containerd[1447]: time="2024-09-04T17:56:28.513939268Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 4 17:56:28.514147 systemd[1]: Started containerd.service - containerd container runtime.
Sep 4 17:56:28.517350 containerd[1447]: time="2024-09-04T17:56:28.517313406Z" level=info msg="containerd successfully booted in 0.074023s"
Sep 4 17:56:28.676968 tar[1444]: linux-amd64/LICENSE
Sep 4 17:56:28.677121 tar[1444]: linux-amd64/README.md
Sep 4 17:56:28.689660 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 4 17:56:29.193910 systemd-networkd[1366]: eth0: Gained IPv6LL
Sep 4 17:56:29.196415 systemd-timesyncd[1337]: Network configuration changed, trying to establish connection.
Sep 4 17:56:29.203442 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 4 17:56:29.206417 systemd[1]: Reached target network-online.target - Network is Online.
Sep 4 17:56:29.222679 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 17:56:29.228587 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 4 17:56:29.275936 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 4 17:56:29.788915 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 4 17:56:29.799802 systemd[1]: Started sshd@0-172.24.4.18:22-172.24.4.1:51410.service - OpenSSH per-connection server daemon (172.24.4.1:51410).
Sep 4 17:56:30.851154 sshd[1532]: Accepted publickey for core from 172.24.4.1 port 51410 ssh2: RSA SHA256:JnA7Fh8lVkr6ENifNOXj431OPLJBOL+/PI8dMas4Eok
Sep 4 17:56:30.857574 sshd[1532]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 17:56:30.886312 systemd-logind[1432]: New session 1 of user core.
Sep 4 17:56:30.887809 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 4 17:56:30.895984 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 4 17:56:30.912207 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 4 17:56:30.923801 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 4 17:56:30.937319 (systemd)[1537]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:56:31.116044 systemd[1537]: Queued start job for default target default.target.
Sep 4 17:56:31.124761 systemd[1537]: Created slice app.slice - User Application Slice.
Sep 4 17:56:31.124906 systemd[1537]: Reached target paths.target - Paths.
Sep 4 17:56:31.124986 systemd[1537]: Reached target timers.target - Timers.
Sep 4 17:56:31.127955 systemd[1537]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 4 17:56:31.162488 systemd[1537]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 4 17:56:31.162746 systemd[1537]: Reached target sockets.target - Sockets.
Sep 4 17:56:31.162784 systemd[1537]: Reached target basic.target - Basic System.
Sep 4 17:56:31.162883 systemd[1537]: Reached target default.target - Main User Target.
Sep 4 17:56:31.162941 systemd[1537]: Startup finished in 217ms.
Sep 4 17:56:31.163950 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 4 17:56:31.167776 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:56:31.179666 (kubelet)[1549]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 17:56:31.183026 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 4 17:56:31.696324 systemd[1]: Started sshd@1-172.24.4.18:22-172.24.4.1:51424.service - OpenSSH per-connection server daemon (172.24.4.1:51424).
Sep 4 17:56:32.420786 kubelet[1549]: E0904 17:56:32.420693 1549 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 17:56:32.422872 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 17:56:32.423022 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 17:56:32.423343 systemd[1]: kubelet.service: Consumed 1.996s CPU time.
Sep 4 17:56:33.528303 login[1512]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Sep 4 17:56:33.530448 login[1510]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Sep 4 17:56:33.551371 systemd-logind[1432]: New session 2 of user core.
Sep 4 17:56:33.569848 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 4 17:56:33.576466 systemd-logind[1432]: New session 3 of user core.
Sep 4 17:56:33.584001 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 4 17:56:33.956779 sshd[1559]: Accepted publickey for core from 172.24.4.1 port 51424 ssh2: RSA SHA256:JnA7Fh8lVkr6ENifNOXj431OPLJBOL+/PI8dMas4Eok
Sep 4 17:56:33.959789 sshd[1559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 17:56:33.970043 systemd-logind[1432]: New session 4 of user core.
Sep 4 17:56:33.983832 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 4 17:56:34.689422 sshd[1559]: pam_unix(sshd:session): session closed for user core
Sep 4 17:56:34.707048 systemd[1]: sshd@1-172.24.4.18:22-172.24.4.1:51424.service: Deactivated successfully.
Sep 4 17:56:34.714583 systemd[1]: session-4.scope: Deactivated successfully.
Sep 4 17:56:34.717957 systemd-logind[1432]: Session 4 logged out. Waiting for processes to exit.
Sep 4 17:56:34.728333 coreos-metadata[1420]: Sep 04 17:56:34.726 WARN failed to locate config-drive, using the metadata service API instead
Sep 4 17:56:34.731509 systemd[1]: Started sshd@2-172.24.4.18:22-172.24.4.1:52212.service - OpenSSH per-connection server daemon (172.24.4.1:52212).
Sep 4 17:56:34.734064 systemd-logind[1432]: Removed session 4.
Sep 4 17:56:34.915970 coreos-metadata[1420]: Sep 04 17:56:34.915 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1
Sep 4 17:56:35.255167 coreos-metadata[1420]: Sep 04 17:56:35.255 INFO Fetch successful
Sep 4 17:56:35.255812 coreos-metadata[1420]: Sep 04 17:56:35.255 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Sep 4 17:56:35.268553 coreos-metadata[1420]: Sep 04 17:56:35.268 INFO Fetch successful
Sep 4 17:56:35.268997 coreos-metadata[1420]: Sep 04 17:56:35.268 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
Sep 4 17:56:35.284969 coreos-metadata[1420]: Sep 04 17:56:35.284 INFO Fetch successful
Sep 4 17:56:35.285314 coreos-metadata[1420]: Sep 04 17:56:35.285 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
Sep 4 17:56:35.303016 coreos-metadata[1420]: Sep 04 17:56:35.302 INFO Fetch successful
Sep 4 17:56:35.303353 coreos-metadata[1420]: Sep 04 17:56:35.303 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
Sep 4 17:56:35.316108 coreos-metadata[1420]: Sep 04 17:56:35.316 INFO Fetch successful
Sep 4 17:56:35.316711 coreos-metadata[1420]: Sep 04 17:56:35.316 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
Sep 4 17:56:35.320816 coreos-metadata[1491]: Sep 04 17:56:35.320 WARN failed to locate config-drive, using the metadata service API instead
Sep 4 17:56:35.327327 coreos-metadata[1420]: Sep 04 17:56:35.325 INFO Fetch successful
Sep 4 17:56:35.365526 coreos-metadata[1491]: Sep 04 17:56:35.365 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Sep 4 17:56:35.377220 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Sep 4 17:56:35.379815 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 4 17:56:35.380893 coreos-metadata[1491]: Sep 04 17:56:35.380 INFO Fetch successful
Sep 4 17:56:35.380893 coreos-metadata[1491]: Sep 04 17:56:35.380 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Sep 4 17:56:35.400820 coreos-metadata[1491]: Sep 04 17:56:35.400 INFO Fetch successful
Sep 4 17:56:35.412509 unknown[1491]: wrote ssh authorized keys file for user: core
Sep 4 17:56:35.466049 update-ssh-keys[1606]: Updated "/home/core/.ssh/authorized_keys"
Sep 4 17:56:35.467221 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Sep 4 17:56:35.472512 systemd[1]: Finished sshkeys.service.
Sep 4 17:56:35.474899 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 4 17:56:35.475512 systemd[1]: Startup finished in 1.114s (kernel) + 15.486s (initrd) + 11.314s (userspace) = 27.916s.
Sep 4 17:56:36.113702 sshd[1596]: Accepted publickey for core from 172.24.4.1 port 52212 ssh2: RSA SHA256:JnA7Fh8lVkr6ENifNOXj431OPLJBOL+/PI8dMas4Eok
Sep 4 17:56:36.116436 sshd[1596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 17:56:36.126354 systemd-logind[1432]: New session 5 of user core.
Sep 4 17:56:36.138610 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 4 17:56:36.933812 sshd[1596]: pam_unix(sshd:session): session closed for user core
Sep 4 17:56:36.941803 systemd-logind[1432]: Session 5 logged out. Waiting for processes to exit.
Sep 4 17:56:36.943433 systemd[1]: sshd@2-172.24.4.18:22-172.24.4.1:52212.service: Deactivated successfully.
Sep 4 17:56:36.947063 systemd[1]: session-5.scope: Deactivated successfully.
Sep 4 17:56:36.949318 systemd-logind[1432]: Removed session 5.
Sep 4 17:56:42.547970 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 4 17:56:42.556665 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 17:56:42.975307 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:56:42.992788 (kubelet)[1622]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 17:56:43.134397 kubelet[1622]: E0904 17:56:43.134220 1622 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 17:56:43.141523 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 17:56:43.141832 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 17:56:46.954909 systemd[1]: Started sshd@3-172.24.4.18:22-172.24.4.1:38882.service - OpenSSH per-connection server daemon (172.24.4.1:38882).
Sep 4 17:56:48.576579 sshd[1630]: Accepted publickey for core from 172.24.4.1 port 38882 ssh2: RSA SHA256:JnA7Fh8lVkr6ENifNOXj431OPLJBOL+/PI8dMas4Eok
Sep 4 17:56:48.579957 sshd[1630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 17:56:48.590841 systemd-logind[1432]: New session 6 of user core.
Sep 4 17:56:48.602615 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 4 17:56:49.197715 sshd[1630]: pam_unix(sshd:session): session closed for user core
Sep 4 17:56:49.213117 systemd[1]: sshd@3-172.24.4.18:22-172.24.4.1:38882.service: Deactivated successfully.
Sep 4 17:56:49.217585 systemd[1]: session-6.scope: Deactivated successfully.
Sep 4 17:56:49.221825 systemd-logind[1432]: Session 6 logged out. Waiting for processes to exit.
Sep 4 17:56:49.233073 systemd[1]: Started sshd@4-172.24.4.18:22-172.24.4.1:38890.service - OpenSSH per-connection server daemon (172.24.4.1:38890).
Sep 4 17:56:49.237049 systemd-logind[1432]: Removed session 6.
Sep 4 17:56:50.739177 sshd[1637]: Accepted publickey for core from 172.24.4.1 port 38890 ssh2: RSA SHA256:JnA7Fh8lVkr6ENifNOXj431OPLJBOL+/PI8dMas4Eok
Sep 4 17:56:50.742060 sshd[1637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 17:56:50.752878 systemd-logind[1432]: New session 7 of user core.
Sep 4 17:56:50.760525 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 4 17:56:51.348789 sshd[1637]: pam_unix(sshd:session): session closed for user core
Sep 4 17:56:51.363683 systemd[1]: sshd@4-172.24.4.18:22-172.24.4.1:38890.service: Deactivated successfully.
Sep 4 17:56:51.367691 systemd[1]: session-7.scope: Deactivated successfully.
Sep 4 17:56:51.369556 systemd-logind[1432]: Session 7 logged out. Waiting for processes to exit.
Sep 4 17:56:51.376850 systemd[1]: Started sshd@5-172.24.4.18:22-172.24.4.1:38898.service - OpenSSH per-connection server daemon (172.24.4.1:38898).
Sep 4 17:56:51.379352 systemd-logind[1432]: Removed session 7.
Sep 4 17:56:52.601752 sshd[1644]: Accepted publickey for core from 172.24.4.1 port 38898 ssh2: RSA SHA256:JnA7Fh8lVkr6ENifNOXj431OPLJBOL+/PI8dMas4Eok
Sep 4 17:56:52.604735 sshd[1644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 17:56:52.615507 systemd-logind[1432]: New session 8 of user core.
Sep 4 17:56:52.624595 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 4 17:56:53.169778 sshd[1644]: pam_unix(sshd:session): session closed for user core
Sep 4 17:56:53.183755 systemd[1]: sshd@5-172.24.4.18:22-172.24.4.1:38898.service: Deactivated successfully.
Sep 4 17:56:53.187117 systemd[1]: session-8.scope: Deactivated successfully.
Sep 4 17:56:53.189199 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 4 17:56:53.192513 systemd-logind[1432]: Session 8 logged out. Waiting for processes to exit.
Sep 4 17:56:53.197730 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 17:56:53.200902 systemd[1]: Started sshd@6-172.24.4.18:22-172.24.4.1:38908.service - OpenSSH per-connection server daemon (172.24.4.1:38908).
Sep 4 17:56:53.208113 systemd-logind[1432]: Removed session 8.
Sep 4 17:56:53.826672 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:56:53.839935 (kubelet)[1661]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 17:56:53.945991 kubelet[1661]: E0904 17:56:53.945946 1661 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 17:56:53.948833 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 17:56:53.949035 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 17:56:54.445954 sshd[1652]: Accepted publickey for core from 172.24.4.1 port 38908 ssh2: RSA SHA256:JnA7Fh8lVkr6ENifNOXj431OPLJBOL+/PI8dMas4Eok
Sep 4 17:56:54.449509 sshd[1652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 17:56:54.462373 systemd-logind[1432]: New session 9 of user core.
Sep 4 17:56:54.469590 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 4 17:56:54.788760 sudo[1670]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 4 17:56:54.789840 sudo[1670]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 4 17:56:54.813754 sudo[1670]: pam_unix(sudo:session): session closed for user root
Sep 4 17:56:55.086824 sshd[1652]: pam_unix(sshd:session): session closed for user core
Sep 4 17:56:55.098690 systemd[1]: sshd@6-172.24.4.18:22-172.24.4.1:38908.service: Deactivated successfully.
Sep 4 17:56:55.103454 systemd[1]: session-9.scope: Deactivated successfully.
Sep 4 17:56:55.106002 systemd-logind[1432]: Session 9 logged out. Waiting for processes to exit.
Sep 4 17:56:55.114027 systemd[1]: Started sshd@7-172.24.4.18:22-172.24.4.1:52884.service - OpenSSH per-connection server daemon (172.24.4.1:52884).
Sep 4 17:56:55.119400 systemd-logind[1432]: Removed session 9.
Sep 4 17:56:56.166851 sshd[1675]: Accepted publickey for core from 172.24.4.1 port 52884 ssh2: RSA SHA256:JnA7Fh8lVkr6ENifNOXj431OPLJBOL+/PI8dMas4Eok
Sep 4 17:56:56.170369 sshd[1675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 17:56:56.180859 systemd-logind[1432]: New session 10 of user core.
Sep 4 17:56:56.194596 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 4 17:56:56.514693 sudo[1679]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 4 17:56:56.515410 sudo[1679]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 4 17:56:56.523418 sudo[1679]: pam_unix(sudo:session): session closed for user root
Sep 4 17:56:56.535230 sudo[1678]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Sep 4 17:56:56.536660 sudo[1678]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 4 17:56:56.562903 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Sep 4 17:56:56.578888 auditctl[1682]: No rules
Sep 4 17:56:56.580299 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 4 17:56:56.580749 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Sep 4 17:56:56.589126 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 4 17:56:56.653952 augenrules[1700]: No rules
Sep 4 17:56:56.656428 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 4 17:56:56.659466 sudo[1678]: pam_unix(sudo:session): session closed for user root
Sep 4 17:56:56.861615 sshd[1675]: pam_unix(sshd:session): session closed for user core
Sep 4 17:56:56.872988 systemd[1]: sshd@7-172.24.4.18:22-172.24.4.1:52884.service: Deactivated successfully.
Sep 4 17:56:56.876072 systemd[1]: session-10.scope: Deactivated successfully.
Sep 4 17:56:56.879637 systemd-logind[1432]: Session 10 logged out. Waiting for processes to exit.
Sep 4 17:56:56.885880 systemd[1]: Started sshd@8-172.24.4.18:22-172.24.4.1:52898.service - OpenSSH per-connection server daemon (172.24.4.1:52898).
Sep 4 17:56:56.888980 systemd-logind[1432]: Removed session 10.
Sep 4 17:56:58.130494 sshd[1708]: Accepted publickey for core from 172.24.4.1 port 52898 ssh2: RSA SHA256:JnA7Fh8lVkr6ENifNOXj431OPLJBOL+/PI8dMas4Eok
Sep 4 17:56:58.134570 sshd[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 17:56:58.144109 systemd-logind[1432]: New session 11 of user core.
Sep 4 17:56:58.156541 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 4 17:56:58.657979 sudo[1711]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 4 17:56:58.659643 sudo[1711]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 4 17:56:58.928793 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 4 17:56:58.931112 (dockerd)[1720]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 4 17:56:59.435357 systemd-timesyncd[1337]: Contacted time server 162.159.200.1:123 (2.flatcar.pool.ntp.org).
Sep 4 17:56:59.435800 systemd-timesyncd[1337]: Initial clock synchronization to Wed 2024-09-04 17:56:59.694455 UTC.
Sep 4 17:56:59.572520 dockerd[1720]: time="2024-09-04T17:56:59.572415141Z" level=info msg="Starting up"
Sep 4 17:56:59.824778 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport547685435-merged.mount: Deactivated successfully.
Sep 4 17:56:59.911694 dockerd[1720]: time="2024-09-04T17:56:59.911366795Z" level=info msg="Loading containers: start."
Sep 4 17:57:00.122410 kernel: Initializing XFRM netlink socket
Sep 4 17:57:00.279453 systemd-networkd[1366]: docker0: Link UP
Sep 4 17:57:00.305865 dockerd[1720]: time="2024-09-04T17:57:00.305658631Z" level=info msg="Loading containers: done."
Sep 4 17:57:00.335989 dockerd[1720]: time="2024-09-04T17:57:00.335900243Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 4 17:57:00.336246 dockerd[1720]: time="2024-09-04T17:57:00.336141043Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Sep 4 17:57:00.336529 dockerd[1720]: time="2024-09-04T17:57:00.336485893Z" level=info msg="Daemon has completed initialization"
Sep 4 17:57:00.388476 dockerd[1720]: time="2024-09-04T17:57:00.387874130Z" level=info msg="API listen on /run/docker.sock"
Sep 4 17:57:00.388599 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 4 17:57:02.228321 containerd[1447]: time="2024-09-04T17:57:02.228207967Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.4\""
Sep 4 17:57:03.028645 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3227160219.mount: Deactivated successfully.
Sep 4 17:57:04.048968 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Sep 4 17:57:04.057410 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 17:57:04.306422 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:57:04.311399 (kubelet)[1919]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 17:57:04.610363 kubelet[1919]: E0904 17:57:04.609997 1919 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 17:57:04.617933 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 17:57:04.618619 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 17:57:06.744530 containerd[1447]: time="2024-09-04T17:57:06.743635795Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:57:06.746401 containerd[1447]: time="2024-09-04T17:57:06.746049797Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.4: active requests=0, bytes read=32772424"
Sep 4 17:57:06.747724 containerd[1447]: time="2024-09-04T17:57:06.747645239Z" level=info msg="ImageCreate event name:\"sha256:8a97b1fb3e2ebd03bf97ce8ae894b3dc8a68ab1f4ecfd0a284921c45c56f5aa4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:57:06.751681 containerd[1447]: time="2024-09-04T17:57:06.751628468Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:7b0c4a959aaee5660e1234452dc3123310231b9f92d29ebd175c86dc9f797ee7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:57:06.754609 containerd[1447]: time="2024-09-04T17:57:06.754179795Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.4\" with image id \"sha256:8a97b1fb3e2ebd03bf97ce8ae894b3dc8a68ab1f4ecfd0a284921c45c56f5aa4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:7b0c4a959aaee5660e1234452dc3123310231b9f92d29ebd175c86dc9f797ee7\", size \"32769216\" in 4.525759126s"
Sep 4 17:57:06.754609 containerd[1447]: time="2024-09-04T17:57:06.754325161Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.4\" returns image reference \"sha256:8a97b1fb3e2ebd03bf97ce8ae894b3dc8a68ab1f4ecfd0a284921c45c56f5aa4\""
Sep 4 17:57:06.783908 containerd[1447]: time="2024-09-04T17:57:06.783596078Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.4\""
Sep 4 17:57:09.111771 containerd[1447]: time="2024-09-04T17:57:09.111597369Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:57:09.113604 containerd[1447]: time="2024-09-04T17:57:09.113353467Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.4: active requests=0, bytes read=29594073"
Sep 4 17:57:09.114674 containerd[1447]: time="2024-09-04T17:57:09.114599911Z" level=info msg="ImageCreate event name:\"sha256:8398ad49a121d58ecf8a36e8371c0928fdf75eb0a83d28232ab2b39b1c6a9050\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:57:09.118387 containerd[1447]: time="2024-09-04T17:57:09.118225311Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:992cccbf652fa951c1a3d41b0c1033ae0bf64f33da03d50395282c551900af9e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:57:09.119894 containerd[1447]: time="2024-09-04T17:57:09.119584131Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.4\" with image id \"sha256:8398ad49a121d58ecf8a36e8371c0928fdf75eb0a83d28232ab2b39b1c6a9050\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:992cccbf652fa951c1a3d41b0c1033ae0bf64f33da03d50395282c551900af9e\", size \"31144011\" in 2.33592114s"
Sep 4 17:57:09.119894 containerd[1447]: time="2024-09-04T17:57:09.119623292Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.4\" returns image reference \"sha256:8398ad49a121d58ecf8a36e8371c0928fdf75eb0a83d28232ab2b39b1c6a9050\""
Sep 4 17:57:09.153239 containerd[1447]: time="2024-09-04T17:57:09.153198535Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.4\""
Sep 4 17:57:11.745607 containerd[1447]: time="2024-09-04T17:57:11.744653369Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:57:11.747331 containerd[1447]: time="2024-09-04T17:57:11.747269455Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.4: active requests=0, bytes read=17780241"
Sep 4 17:57:11.748746 containerd[1447]: time="2024-09-04T17:57:11.748600123Z" level=info msg="ImageCreate event name:\"sha256:4939f82ab9ab456e782c06ed37b245127c8a9ac29a72982346a7160f18107833\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:57:11.753218 containerd[1447]: time="2024-09-04T17:57:11.753160765Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:37eaeee5bca8da34ad3d36e37586dd29f5edb1e2927e7644dfb113e70062bda8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:57:11.758043 containerd[1447]: time="2024-09-04T17:57:11.756644190Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.4\" with image id \"sha256:4939f82ab9ab456e782c06ed37b245127c8a9ac29a72982346a7160f18107833\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:37eaeee5bca8da34ad3d36e37586dd29f5edb1e2927e7644dfb113e70062bda8\", size \"19330197\" in 2.603217432s"
Sep 4 17:57:11.758043 containerd[1447]: time="2024-09-04T17:57:11.756682092Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.4\" returns image reference \"sha256:4939f82ab9ab456e782c06ed37b245127c8a9ac29a72982346a7160f18107833\""
Sep 4 17:57:11.795039 containerd[1447]: time="2024-09-04T17:57:11.794972847Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.4\""
Sep 4 17:57:12.574233 update_engine[1433]: I0904 17:57:12.573328 1433 update_attempter.cc:509] Updating boot flags...
Sep 4 17:57:12.619288 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1964)
Sep 4 17:57:12.704296 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1966)
Sep 4 17:57:13.385622 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3191396472.mount: Deactivated successfully.
Sep 4 17:57:14.030771 containerd[1447]: time="2024-09-04T17:57:14.030619931Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:57:14.035787 containerd[1447]: time="2024-09-04T17:57:14.035675270Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.4: active requests=0, bytes read=29037169"
Sep 4 17:57:14.038351 containerd[1447]: time="2024-09-04T17:57:14.038153400Z" level=info msg="ImageCreate event name:\"sha256:568d5ba88d944bcd67415d8c358fce615824410f3a43bab2b353336bc3795a10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:57:14.044048 containerd[1447]: time="2024-09-04T17:57:14.043892128Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:33ee1df1ba70e41bf9506d54bb5e64ef5f3ba9fc1b3021aaa4468606a7802acc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:57:14.046363 containerd[1447]: time="2024-09-04T17:57:14.046053620Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.4\" with image id \"sha256:568d5ba88d944bcd67415d8c358fce615824410f3a43bab2b353336bc3795a10\", repo tag \"registry.k8s.io/kube-proxy:v1.30.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:33ee1df1ba70e41bf9506d54bb5e64ef5f3ba9fc1b3021aaa4468606a7802acc\", size \"29036180\" in 2.25099536s"
Sep 4 17:57:14.046363 containerd[1447]: time="2024-09-04T17:57:14.046126066Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.4\" returns image reference \"sha256:568d5ba88d944bcd67415d8c358fce615824410f3a43bab2b353336bc3795a10\""
Sep 4 17:57:14.106124 containerd[1447]: time="2024-09-04T17:57:14.106022047Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Sep 4 17:57:14.798745 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Sep 4 17:57:14.810321 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 17:57:14.882220 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4195829282.mount: Deactivated successfully.
Sep 4 17:57:14.966628 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:57:14.968116 (kubelet)[1996]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 17:57:15.019142 kubelet[1996]: E0904 17:57:15.019101 1996 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 17:57:15.021774 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 17:57:15.021940 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 17:57:17.025295 containerd[1447]: time="2024-09-04T17:57:17.023073820Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:57:17.025295 containerd[1447]: time="2024-09-04T17:57:17.024954925Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769"
Sep 4 17:57:17.026633 containerd[1447]: time="2024-09-04T17:57:17.026578819Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:57:17.036409 containerd[1447]: time="2024-09-04T17:57:17.036312150Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:57:17.039217 containerd[1447]: time="2024-09-04T17:57:17.039151087Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.933034728s"
Sep 4 17:57:17.039443 containerd[1447]: time="2024-09-04T17:57:17.039403925Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Sep 4 17:57:17.097536 containerd[1447]: time="2024-09-04T17:57:17.097469890Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Sep 4 17:57:17.693049 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1643607093.mount: Deactivated successfully.
Sep 4 17:57:17.704094 containerd[1447]: time="2024-09-04T17:57:17.703933852Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:57:17.706598 containerd[1447]: time="2024-09-04T17:57:17.706499053Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298"
Sep 4 17:57:17.708329 containerd[1447]: time="2024-09-04T17:57:17.708188508Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:57:17.717289 containerd[1447]: time="2024-09-04T17:57:17.716191672Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:57:17.722999 containerd[1447]: time="2024-09-04T17:57:17.722917920Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 625.04332ms"
Sep 4 17:57:17.722999 containerd[1447]: time="2024-09-04T17:57:17.722998409Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Sep 4 17:57:17.767904 containerd[1447]: time="2024-09-04T17:57:17.767849688Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Sep 4 17:57:18.443959 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2771683392.mount: Deactivated successfully.
Sep 4 17:57:21.867094 containerd[1447]: time="2024-09-04T17:57:21.867038662Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:57:21.873090 containerd[1447]: time="2024-09-04T17:57:21.873000620Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238579"
Sep 4 17:57:21.877636 containerd[1447]: time="2024-09-04T17:57:21.877547102Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:57:21.882611 containerd[1447]: time="2024-09-04T17:57:21.882404335Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:57:21.885405 containerd[1447]: time="2024-09-04T17:57:21.885086389Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 4.117163368s"
Sep 4 17:57:21.885405 containerd[1447]: time="2024-09-04T17:57:21.885163724Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\""
Sep 4 17:57:25.048207 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Sep 4 17:57:25.064401 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 17:57:25.297452 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:57:25.301829 (kubelet)[2170]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 17:57:25.695807 kubelet[2170]: E0904 17:57:25.695558 2170 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 17:57:25.698663 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 17:57:25.698915 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 17:57:26.279162 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:57:26.287683 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 17:57:26.316439 systemd[1]: Reloading requested from client PID 2185 ('systemctl') (unit session-11.scope)...
Sep 4 17:57:26.316578 systemd[1]: Reloading...
Sep 4 17:57:26.407291 zram_generator::config[2219]: No configuration found.
Sep 4 17:57:26.680583 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 17:57:26.766164 systemd[1]: Reloading finished in 447 ms.
Sep 4 17:57:26.822326 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 4 17:57:26.822398 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 4 17:57:26.822747 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:57:26.824642 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 17:57:27.082544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:57:27.083570 (kubelet)[2287]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 4 17:57:27.470068 kubelet[2287]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 4 17:57:27.471338 kubelet[2287]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 4 17:57:27.471338 kubelet[2287]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 4 17:57:27.471338 kubelet[2287]: I0904 17:57:27.470972 2287 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 4 17:57:28.151081 kubelet[2287]: I0904 17:57:28.151007 2287 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Sep 4 17:57:28.151081 kubelet[2287]: I0904 17:57:28.151075 2287 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 4 17:57:28.151623 kubelet[2287]: I0904 17:57:28.151581 2287 server.go:927] "Client rotation is on, will bootstrap in background"
Sep 4 17:57:28.183681 kubelet[2287]: E0904 17:57:28.183640 2287 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.18:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.18:6443: connect: connection refused
Sep 4 17:57:28.184108 kubelet[2287]: I0904 17:57:28.183990 2287 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 4 17:57:28.207377 kubelet[2287]: I0904 17:57:28.207290 2287 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 4 17:57:28.207867 kubelet[2287]: I0904 17:57:28.207783 2287 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 4 17:57:28.208332 kubelet[2287]: I0904 17:57:28.207856 2287 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4054-1-0-2-9cde805234.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Sep 4 17:57:28.208505 kubelet[2287]: I0904 17:57:28.208343 2287 topology_manager.go:138] "Creating topology manager with none policy"
Sep 4 17:57:28.208505 kubelet[2287]: I0904 17:57:28.208369 2287 container_manager_linux.go:301] "Creating device plugin manager"
Sep 4 17:57:28.208643 kubelet[2287]: I0904 17:57:28.208608 2287 state_mem.go:36] "Initialized new in-memory state store"
Sep 4 17:57:28.210526 kubelet[2287]: I0904 17:57:28.210500 2287 kubelet.go:400] "Attempting to sync node with API server"
Sep 4 17:57:28.210625 kubelet[2287]: I0904 17:57:28.210539 2287 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 4 17:57:28.210625 kubelet[2287]: I0904 17:57:28.210583 2287 kubelet.go:312] "Adding apiserver pod source"
Sep 4 17:57:28.210625 kubelet[2287]: I0904 17:57:28.210616 2287 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 4 17:57:28.230278 kubelet[2287]: W0904 17:57:28.229999 2287 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4054-1-0-2-9cde805234.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.18:6443: connect: connection refused
Sep 4 17:57:28.230722 kubelet[2287]: E0904 17:57:28.230532 2287 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4054-1-0-2-9cde805234.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.18:6443: connect: connection refused
Sep 4 17:57:28.231161 kubelet[2287]: I0904 17:57:28.231061 2287 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.20" apiVersion="v1"
Sep 4 17:57:28.239306 kubelet[2287]: I0904 17:57:28.238470 2287 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 4 17:57:28.239306 kubelet[2287]: W0904 17:57:28.238583 2287 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 4 17:57:28.239756 kubelet[2287]: I0904 17:57:28.239709 2287 server.go:1264] "Started kubelet"
Sep 4 17:57:28.244125 kubelet[2287]: I0904 17:57:28.243214 2287 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Sep 4 17:57:28.245680 kubelet[2287]: I0904 17:57:28.245115 2287 server.go:455] "Adding debug handlers to kubelet server"
Sep 4 17:57:28.248685 kubelet[2287]: W0904 17:57:28.246164 2287 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.18:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.18:6443: connect: connection refused
Sep 4 17:57:28.248685 kubelet[2287]: E0904 17:57:28.246329 2287 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.18:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.18:6443: connect: connection refused
Sep 4 17:57:28.248685 kubelet[2287]: I0904 17:57:28.246842 2287 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 4 17:57:28.248685 kubelet[2287]: I0904 17:57:28.247233 2287 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 4 17:57:28.258263 kubelet[2287]: I0904 17:57:28.258216 2287 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 4 17:57:28.259928 kubelet[2287]: E0904 17:57:28.259808 2287 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.18:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.18:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4054-1-0-2-9cde805234.novalocal.17f21c3b4bfa4c83 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4054-1-0-2-9cde805234.novalocal,UID:ci-4054-1-0-2-9cde805234.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4054-1-0-2-9cde805234.novalocal,},FirstTimestamp:2024-09-04 17:57:28.239664259 +0000 UTC m=+1.150693320,LastTimestamp:2024-09-04 17:57:28.239664259 +0000 UTC m=+1.150693320,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4054-1-0-2-9cde805234.novalocal,}"
Sep 4 17:57:28.260189 kubelet[2287]: E0904 17:57:28.260173 2287 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 4 17:57:28.263074 kubelet[2287]: E0904 17:57:28.263035 2287 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4054-1-0-2-9cde805234.novalocal\" not found"
Sep 4 17:57:28.263170 kubelet[2287]: I0904 17:57:28.263160 2287 volume_manager.go:291] "Starting Kubelet Volume Manager"
Sep 4 17:57:28.263345 kubelet[2287]: I0904 17:57:28.263332 2287 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Sep 4 17:57:28.263463 kubelet[2287]: I0904 17:57:28.263453 2287 reconciler.go:26] "Reconciler: start to sync state"
Sep 4 17:57:28.264217 kubelet[2287]: W0904 17:57:28.263871 2287 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.18:6443: connect: connection refused
Sep 4 17:57:28.264217 kubelet[2287]: E0904 17:57:28.263915 2287 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.18:6443: connect: connection refused
Sep 4 17:57:28.264217 kubelet[2287]: E0904 17:57:28.264098 2287 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4054-1-0-2-9cde805234.novalocal?timeout=10s\": dial tcp 172.24.4.18:6443: connect: connection refused" interval="200ms"
Sep 4 17:57:28.264931 kubelet[2287]: I0904 17:57:28.264736 2287 factory.go:221] Registration of the systemd container factory successfully
Sep 4 17:57:28.264931 kubelet[2287]: I0904 17:57:28.264810 2287 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 4 17:57:28.266970 kubelet[2287]: I0904 17:57:28.266099 2287 factory.go:221] Registration of the containerd container factory successfully
Sep 4 17:57:28.279929 kubelet[2287]: I0904 17:57:28.279440 2287 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 4 17:57:28.280522 kubelet[2287]: I0904 17:57:28.280497 2287 kubelet_network_linux.go:50] "Initialized iptables rules."
protocol="IPv6" Sep 4 17:57:28.280570 kubelet[2287]: I0904 17:57:28.280546 2287 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 4 17:57:28.280606 kubelet[2287]: I0904 17:57:28.280579 2287 kubelet.go:2337] "Starting kubelet main sync loop" Sep 4 17:57:28.280670 kubelet[2287]: E0904 17:57:28.280636 2287 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 17:57:28.288172 kubelet[2287]: W0904 17:57:28.288121 2287 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.18:6443: connect: connection refused Sep 4 17:57:28.288303 kubelet[2287]: E0904 17:57:28.288178 2287 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.18:6443: connect: connection refused Sep 4 17:57:28.304030 kubelet[2287]: I0904 17:57:28.304003 2287 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 4 17:57:28.304030 kubelet[2287]: I0904 17:57:28.304022 2287 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 4 17:57:28.304030 kubelet[2287]: I0904 17:57:28.304040 2287 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:57:28.309330 kubelet[2287]: I0904 17:57:28.309303 2287 policy_none.go:49] "None policy: Start" Sep 4 17:57:28.310278 kubelet[2287]: I0904 17:57:28.309886 2287 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 4 17:57:28.310278 kubelet[2287]: I0904 17:57:28.309910 2287 state_mem.go:35] "Initializing new in-memory state store" Sep 4 17:57:28.319303 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 4 17:57:28.338821 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 4 17:57:28.343876 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Sep 4 17:57:28.359742 kubelet[2287]: I0904 17:57:28.359617 2287 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 17:57:28.360377 kubelet[2287]: I0904 17:57:28.360158 2287 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 4 17:57:28.360377 kubelet[2287]: I0904 17:57:28.360316 2287 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 17:57:28.363643 kubelet[2287]: E0904 17:57:28.363552 2287 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4054-1-0-2-9cde805234.novalocal\" not found" Sep 4 17:57:28.365881 kubelet[2287]: I0904 17:57:28.365849 2287 kubelet_node_status.go:73] "Attempting to register node" node="ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:57:28.366384 kubelet[2287]: E0904 17:57:28.366350 2287 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.18:6443/api/v1/nodes\": dial tcp 172.24.4.18:6443: connect: connection refused" node="ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:57:28.381472 kubelet[2287]: I0904 17:57:28.381428 2287 topology_manager.go:215] "Topology Admit Handler" podUID="1c364d467e6fe544a8d0ca77c32769d8" podNamespace="kube-system" podName="kube-apiserver-ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:57:28.383150 kubelet[2287]: I0904 17:57:28.382970 2287 topology_manager.go:215] "Topology Admit Handler" podUID="985f8e43dccfce6e9241e01532efeb0c" podNamespace="kube-system" podName="kube-controller-manager-ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:57:28.384600 kubelet[2287]: I0904 17:57:28.384316 2287 topology_manager.go:215] "Topology Admit Handler" podUID="2cf4ff88d2a4eafaf4bef239330843be" podNamespace="kube-system" podName="kube-scheduler-ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:57:28.393861 systemd[1]: Created slice kubepods-burstable-pod1c364d467e6fe544a8d0ca77c32769d8.slice - libcontainer container kubepods-burstable-pod1c364d467e6fe544a8d0ca77c32769d8.slice. Sep 4 17:57:28.409738 systemd[1]: Created slice kubepods-burstable-pod985f8e43dccfce6e9241e01532efeb0c.slice - libcontainer container kubepods-burstable-pod985f8e43dccfce6e9241e01532efeb0c.slice. Sep 4 17:57:28.431583 systemd[1]: Created slice kubepods-burstable-pod2cf4ff88d2a4eafaf4bef239330843be.slice - libcontainer container kubepods-burstable-pod2cf4ff88d2a4eafaf4bef239330843be.slice. 
Sep 4 17:57:28.464140 kubelet[2287]: I0904 17:57:28.464065 2287 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1c364d467e6fe544a8d0ca77c32769d8-ca-certs\") pod \"kube-apiserver-ci-4054-1-0-2-9cde805234.novalocal\" (UID: \"1c364d467e6fe544a8d0ca77c32769d8\") " pod="kube-system/kube-apiserver-ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:57:28.465060 kubelet[2287]: E0904 17:57:28.464972 2287 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4054-1-0-2-9cde805234.novalocal?timeout=10s\": dial tcp 172.24.4.18:6443: connect: connection refused" interval="400ms" Sep 4 17:57:28.565434 kubelet[2287]: I0904 17:57:28.564790 2287 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/985f8e43dccfce6e9241e01532efeb0c-kubeconfig\") pod \"kube-controller-manager-ci-4054-1-0-2-9cde805234.novalocal\" (UID: \"985f8e43dccfce6e9241e01532efeb0c\") " pod="kube-system/kube-controller-manager-ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:57:28.565434 kubelet[2287]: I0904 17:57:28.564888 2287 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/985f8e43dccfce6e9241e01532efeb0c-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4054-1-0-2-9cde805234.novalocal\" (UID: \"985f8e43dccfce6e9241e01532efeb0c\") " pod="kube-system/kube-controller-manager-ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:57:28.565434 kubelet[2287]: I0904 17:57:28.564946 2287 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2cf4ff88d2a4eafaf4bef239330843be-kubeconfig\") pod \"kube-scheduler-ci-4054-1-0-2-9cde805234.novalocal\" (UID: \"2cf4ff88d2a4eafaf4bef239330843be\") " pod="kube-system/kube-scheduler-ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:57:28.565434 kubelet[2287]: I0904 17:57:28.565038 2287 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1c364d467e6fe544a8d0ca77c32769d8-k8s-certs\") pod \"kube-apiserver-ci-4054-1-0-2-9cde805234.novalocal\" (UID: \"1c364d467e6fe544a8d0ca77c32769d8\") " pod="kube-system/kube-apiserver-ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:57:28.566611 kubelet[2287]: I0904 17:57:28.565086 2287 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1c364d467e6fe544a8d0ca77c32769d8-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4054-1-0-2-9cde805234.novalocal\" (UID: \"1c364d467e6fe544a8d0ca77c32769d8\") " pod="kube-system/kube-apiserver-ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:57:28.566611 kubelet[2287]: I0904 17:57:28.565132 2287 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/985f8e43dccfce6e9241e01532efeb0c-ca-certs\") pod \"kube-controller-manager-ci-4054-1-0-2-9cde805234.novalocal\" (UID: \"985f8e43dccfce6e9241e01532efeb0c\") " pod="kube-system/kube-controller-manager-ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:57:28.566611 kubelet[2287]: I0904 17:57:28.565177 2287 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/985f8e43dccfce6e9241e01532efeb0c-flexvolume-dir\") pod \"kube-controller-manager-ci-4054-1-0-2-9cde805234.novalocal\" (UID: \"985f8e43dccfce6e9241e01532efeb0c\") " pod="kube-system/kube-controller-manager-ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:57:28.566611 kubelet[2287]: I0904 17:57:28.565223 2287 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/985f8e43dccfce6e9241e01532efeb0c-k8s-certs\") pod \"kube-controller-manager-ci-4054-1-0-2-9cde805234.novalocal\" (UID: \"985f8e43dccfce6e9241e01532efeb0c\") " pod="kube-system/kube-controller-manager-ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:57:28.570332 kubelet[2287]: I0904 17:57:28.570122 2287 kubelet_node_status.go:73] "Attempting to register node" node="ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:57:28.571227 kubelet[2287]: E0904 17:57:28.571167 2287 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.18:6443/api/v1/nodes\": dial tcp 172.24.4.18:6443: connect: connection refused" node="ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:57:28.706654 containerd[1447]: time="2024-09-04T17:57:28.706173217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4054-1-0-2-9cde805234.novalocal,Uid:1c364d467e6fe544a8d0ca77c32769d8,Namespace:kube-system,Attempt:0,}" Sep 4 17:57:28.736358 containerd[1447]: time="2024-09-04T17:57:28.735444615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4054-1-0-2-9cde805234.novalocal,Uid:985f8e43dccfce6e9241e01532efeb0c,Namespace:kube-system,Attempt:0,}" Sep 4 17:57:28.747318 containerd[1447]: time="2024-09-04T17:57:28.747198370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4054-1-0-2-9cde805234.novalocal,Uid:2cf4ff88d2a4eafaf4bef239330843be,Namespace:kube-system,Attempt:0,}" Sep 4 17:57:28.866230 kubelet[2287]: E0904 17:57:28.866130 2287 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4054-1-0-2-9cde805234.novalocal?timeout=10s\": dial tcp 172.24.4.18:6443: connect: connection refused" interval="800ms" Sep 4 17:57:28.975313 kubelet[2287]: I0904 17:57:28.974720 2287 kubelet_node_status.go:73] "Attempting to register node" node="ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:57:28.977002 kubelet[2287]: E0904 17:57:28.976816 2287 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.18:6443/api/v1/nodes\": dial tcp 172.24.4.18:6443: connect: connection refused" node="ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:57:29.129834 kubelet[2287]: W0904 17:57:29.129536 2287 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4054-1-0-2-9cde805234.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.18:6443: connect: connection refused Sep 4 17:57:29.130192 kubelet[2287]: E0904 17:57:29.129772 2287 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4054-1-0-2-9cde805234.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.18:6443: 
connect: connection refused Sep 4 17:57:29.259675 kubelet[2287]: W0904 17:57:29.259414 2287 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.18:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.18:6443: connect: connection refused Sep 4 17:57:29.259675 kubelet[2287]: E0904 17:57:29.259526 2287 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.18:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.18:6443: connect: connection refused Sep 4 17:57:29.395701 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount178365850.mount: Deactivated successfully. Sep 4 17:57:29.410291 containerd[1447]: time="2024-09-04T17:57:29.408582581Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:57:29.411003 containerd[1447]: time="2024-09-04T17:57:29.410894515Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 17:57:29.412617 containerd[1447]: time="2024-09-04T17:57:29.412559550Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:57:29.414730 containerd[1447]: time="2024-09-04T17:57:29.414675294Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:57:29.416426 containerd[1447]: time="2024-09-04T17:57:29.416340780Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Sep 4 17:57:29.417980 containerd[1447]: time="2024-09-04T17:57:29.417896415Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 17:57:29.418146 containerd[1447]: time="2024-09-04T17:57:29.418082450Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:57:29.427972 containerd[1447]: time="2024-09-04T17:57:29.427869792Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:57:29.431628 containerd[1447]: time="2024-09-04T17:57:29.431548600Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 695.828538ms" Sep 4 17:57:29.435234 containerd[1447]: time="2024-09-04T17:57:29.435177331Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 687.785121ms" Sep 4 
17:57:29.439712 containerd[1447]: time="2024-09-04T17:57:29.439658364Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 733.22444ms" Sep 4 17:57:29.470750 kubelet[2287]: W0904 17:57:29.470575 2287 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.18:6443: connect: connection refused Sep 4 17:57:29.470750 kubelet[2287]: E0904 17:57:29.470706 2287 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.18:6443: connect: connection refused Sep 4 17:57:29.667288 kubelet[2287]: E0904 17:57:29.667005 2287 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4054-1-0-2-9cde805234.novalocal?timeout=10s\": dial tcp 172.24.4.18:6443: connect: connection refused" interval="1.6s" Sep 4 17:57:29.775284 kubelet[2287]: W0904 17:57:29.773753 2287 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.18:6443: connect: connection refused Sep 4 17:57:29.775284 kubelet[2287]: E0904 17:57:29.773884 2287 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.18:6443: connect: connection refused Sep 4 17:57:29.781917 kubelet[2287]: I0904 17:57:29.781870 2287 kubelet_node_status.go:73] "Attempting to register node" node="ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:57:29.782567 kubelet[2287]: E0904 17:57:29.782520 2287 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.18:6443/api/v1/nodes\": dial tcp 172.24.4.18:6443: connect: connection refused" node="ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:57:29.785004 containerd[1447]: time="2024-09-04T17:57:29.784848825Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:57:29.785004 containerd[1447]: time="2024-09-04T17:57:29.784916277Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:57:29.785004 containerd[1447]: time="2024-09-04T17:57:29.784936850Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:57:29.785748 containerd[1447]: time="2024-09-04T17:57:29.785618808Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:57:29.785748 containerd[1447]: time="2024-09-04T17:57:29.785681739Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:57:29.785748 containerd[1447]: time="2024-09-04T17:57:29.785703113Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:57:29.788297 containerd[1447]: time="2024-09-04T17:57:29.787321900Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:57:29.788371 containerd[1447]: time="2024-09-04T17:57:29.787118912Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:57:29.807435 containerd[1447]: time="2024-09-04T17:57:29.806857192Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:57:29.807435 containerd[1447]: time="2024-09-04T17:57:29.807275659Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:57:29.807435 containerd[1447]: time="2024-09-04T17:57:29.807291289Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:57:29.810202 containerd[1447]: time="2024-09-04T17:57:29.808225410Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:57:29.860599 systemd[1]: Started cri-containerd-114891e100628d5b40957944c9693e9e67901c0568e2ba46d63d74072b20e8f4.scope - libcontainer container 114891e100628d5b40957944c9693e9e67901c0568e2ba46d63d74072b20e8f4. Sep 4 17:57:29.865617 systemd[1]: Started cri-containerd-7e4bf9889114756c415b193380eb90daa5dd168e0c906f71aefe64d8917d56bc.scope - libcontainer container 7e4bf9889114756c415b193380eb90daa5dd168e0c906f71aefe64d8917d56bc. Sep 4 17:57:29.894408 systemd[1]: Started cri-containerd-3babb72b77fafb57f90c5e36cb4af19de393627eda82e1f482aaa05ca96dedde.scope - libcontainer container 3babb72b77fafb57f90c5e36cb4af19de393627eda82e1f482aaa05ca96dedde. 
Sep 4 17:57:29.976958 containerd[1447]: time="2024-09-04T17:57:29.976830473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4054-1-0-2-9cde805234.novalocal,Uid:1c364d467e6fe544a8d0ca77c32769d8,Namespace:kube-system,Attempt:0,} returns sandbox id \"114891e100628d5b40957944c9693e9e67901c0568e2ba46d63d74072b20e8f4\"" Sep 4 17:57:29.978911 containerd[1447]: time="2024-09-04T17:57:29.978792586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4054-1-0-2-9cde805234.novalocal,Uid:2cf4ff88d2a4eafaf4bef239330843be,Namespace:kube-system,Attempt:0,} returns sandbox id \"7e4bf9889114756c415b193380eb90daa5dd168e0c906f71aefe64d8917d56bc\"" Sep 4 17:57:30.047315 containerd[1447]: time="2024-09-04T17:57:30.047118490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4054-1-0-2-9cde805234.novalocal,Uid:985f8e43dccfce6e9241e01532efeb0c,Namespace:kube-system,Attempt:0,} returns sandbox id \"3babb72b77fafb57f90c5e36cb4af19de393627eda82e1f482aaa05ca96dedde\"" Sep 4 17:57:30.081283 containerd[1447]: time="2024-09-04T17:57:30.081138074Z" level=info msg="CreateContainer within sandbox \"3babb72b77fafb57f90c5e36cb4af19de393627eda82e1f482aaa05ca96dedde\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 4 17:57:30.081680 containerd[1447]: time="2024-09-04T17:57:30.081486292Z" level=info msg="CreateContainer within sandbox \"114891e100628d5b40957944c9693e9e67901c0568e2ba46d63d74072b20e8f4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 4 17:57:30.081948 containerd[1447]: time="2024-09-04T17:57:30.081838801Z" level=info msg="CreateContainer within sandbox \"7e4bf9889114756c415b193380eb90daa5dd168e0c906f71aefe64d8917d56bc\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 4 17:57:30.194411 kubelet[2287]: E0904 17:57:30.194099 2287 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.18:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.18:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4054-1-0-2-9cde805234.novalocal.17f21c3b4bfa4c83 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4054-1-0-2-9cde805234.novalocal,UID:ci-4054-1-0-2-9cde805234.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4054-1-0-2-9cde805234.novalocal,},FirstTimestamp:2024-09-04 17:57:28.239664259 +0000 UTC m=+1.150693320,LastTimestamp:2024-09-04 17:57:28.239664259 +0000 UTC m=+1.150693320,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4054-1-0-2-9cde805234.novalocal,}" Sep 4 17:57:30.310662 kubelet[2287]: E0904 17:57:30.310602 2287 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.18:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.18:6443: connect: connection refused Sep 4 17:57:30.586052 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount730683942.mount: Deactivated successfully. Sep 4 17:57:30.594959 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2243942463.mount: Deactivated successfully. 
Sep 4 17:57:30.600797 containerd[1447]: time="2024-09-04T17:57:30.600723949Z" level=info msg="CreateContainer within sandbox \"114891e100628d5b40957944c9693e9e67901c0568e2ba46d63d74072b20e8f4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3f4b3c55cf68bc4b241ad685d80277338f793cc0a63ee0c24668e8952276bb5c\"" Sep 4 17:57:30.602742 containerd[1447]: time="2024-09-04T17:57:30.602695960Z" level=info msg="StartContainer for \"3f4b3c55cf68bc4b241ad685d80277338f793cc0a63ee0c24668e8952276bb5c\"" Sep 4 17:57:30.613015 containerd[1447]: time="2024-09-04T17:57:30.612754497Z" level=info msg="CreateContainer within sandbox \"3babb72b77fafb57f90c5e36cb4af19de393627eda82e1f482aaa05ca96dedde\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"35952ba4f6bb3d65d4133aa2fe7f7baabdb0ccdbda4188d121c01e975fc43c3b\"" Sep 4 17:57:30.614135 containerd[1447]: time="2024-09-04T17:57:30.613697371Z" level=info msg="StartContainer for \"35952ba4f6bb3d65d4133aa2fe7f7baabdb0ccdbda4188d121c01e975fc43c3b\"" Sep 4 17:57:30.620062 containerd[1447]: time="2024-09-04T17:57:30.619846881Z" level=info msg="CreateContainer within sandbox \"7e4bf9889114756c415b193380eb90daa5dd168e0c906f71aefe64d8917d56bc\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"739c5314df358c1cc6b1bcc011cba7bd1fe4cffef997f3a682ef9bf414b07e0b\"" Sep 4 17:57:30.622642 containerd[1447]: time="2024-09-04T17:57:30.622424876Z" level=info msg="StartContainer for \"739c5314df358c1cc6b1bcc011cba7bd1fe4cffef997f3a682ef9bf414b07e0b\"" Sep 4 17:57:30.671438 systemd[1]: Started cri-containerd-3f4b3c55cf68bc4b241ad685d80277338f793cc0a63ee0c24668e8952276bb5c.scope - libcontainer container 3f4b3c55cf68bc4b241ad685d80277338f793cc0a63ee0c24668e8952276bb5c. Sep 4 17:57:30.684448 systemd[1]: Started cri-containerd-35952ba4f6bb3d65d4133aa2fe7f7baabdb0ccdbda4188d121c01e975fc43c3b.scope - libcontainer container 35952ba4f6bb3d65d4133aa2fe7f7baabdb0ccdbda4188d121c01e975fc43c3b. Sep 4 17:57:30.691406 systemd[1]: Started cri-containerd-739c5314df358c1cc6b1bcc011cba7bd1fe4cffef997f3a682ef9bf414b07e0b.scope - libcontainer container 739c5314df358c1cc6b1bcc011cba7bd1fe4cffef997f3a682ef9bf414b07e0b. 
Sep 4 17:57:30.792081 containerd[1447]: time="2024-09-04T17:57:30.792033579Z" level=info msg="StartContainer for \"3f4b3c55cf68bc4b241ad685d80277338f793cc0a63ee0c24668e8952276bb5c\" returns successfully" Sep 4 17:57:30.793647 containerd[1447]: time="2024-09-04T17:57:30.793440498Z" level=info msg="StartContainer for \"739c5314df358c1cc6b1bcc011cba7bd1fe4cffef997f3a682ef9bf414b07e0b\" returns successfully" Sep 4 17:57:30.794211 containerd[1447]: time="2024-09-04T17:57:30.793792887Z" level=info msg="StartContainer for \"35952ba4f6bb3d65d4133aa2fe7f7baabdb0ccdbda4188d121c01e975fc43c3b\" returns successfully" Sep 4 17:57:30.869074 kubelet[2287]: W0904 17:57:30.868341 2287 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4054-1-0-2-9cde805234.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.18:6443: connect: connection refused Sep 4 17:57:30.869074 kubelet[2287]: E0904 17:57:30.868385 2287 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4054-1-0-2-9cde805234.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.18:6443: connect: connection refused Sep 4 17:57:30.931345 kubelet[2287]: W0904 17:57:30.931267 2287 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.18:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.18:6443: connect: connection refused Sep 4 17:57:30.931345 kubelet[2287]: E0904 17:57:30.931320 2287 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.18:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.18:6443: connect: connection refused Sep 4 17:57:31.087709 kubelet[2287]: W0904 17:57:31.087630 2287 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.18:6443: connect: connection refused Sep 4 17:57:31.087709 kubelet[2287]: E0904 17:57:31.087690 2287 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.18:6443: connect: connection refused Sep 4 17:57:31.385025 kubelet[2287]: I0904 17:57:31.384862 2287 kubelet_node_status.go:73] "Attempting to register node" node="ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:57:32.934759 kubelet[2287]: E0904 17:57:32.934717 2287 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4054-1-0-2-9cde805234.novalocal\" not found" node="ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:57:33.130072 kubelet[2287]: I0904 17:57:33.129998 2287 kubelet_node_status.go:76] "Successfully registered node" node="ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:57:33.157003 kubelet[2287]: E0904 17:57:33.156946 2287 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4054-1-0-2-9cde805234.novalocal\" not found" Sep 4 17:57:33.258109 kubelet[2287]: E0904 17:57:33.257517 2287 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4054-1-0-2-9cde805234.novalocal\" not found" Sep 4 
17:57:33.358553 kubelet[2287]: E0904 17:57:33.358460 2287 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4054-1-0-2-9cde805234.novalocal\" not found" Sep 4 17:57:33.459082 kubelet[2287]: E0904 17:57:33.458992 2287 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4054-1-0-2-9cde805234.novalocal\" not found" Sep 4 17:57:33.560101 kubelet[2287]: E0904 17:57:33.560017 2287 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4054-1-0-2-9cde805234.novalocal\" not found" Sep 4 17:57:33.660461 kubelet[2287]: E0904 17:57:33.660379 2287 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4054-1-0-2-9cde805234.novalocal\" not found" Sep 4 17:57:33.761019 kubelet[2287]: E0904 17:57:33.760919 2287 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4054-1-0-2-9cde805234.novalocal\" not found" Sep 4 17:57:33.862692 kubelet[2287]: E0904 17:57:33.862469 2287 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4054-1-0-2-9cde805234.novalocal\" not found" Sep 4 17:57:33.963499 kubelet[2287]: E0904 17:57:33.963381 2287 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4054-1-0-2-9cde805234.novalocal\" not found" Sep 4 17:57:34.064298 kubelet[2287]: E0904 17:57:34.064172 2287 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4054-1-0-2-9cde805234.novalocal\" not found" Sep 4 17:57:34.165426 kubelet[2287]: E0904 17:57:34.165160 2287 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4054-1-0-2-9cde805234.novalocal\" not found" Sep 4 17:57:34.265533 kubelet[2287]: E0904 17:57:34.265452 2287 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4054-1-0-2-9cde805234.novalocal\" not found" Sep 4 17:57:34.366326 kubelet[2287]: E0904 17:57:34.366224 2287 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4054-1-0-2-9cde805234.novalocal\" not found" Sep 4 17:57:34.467077 kubelet[2287]: E0904 17:57:34.466846 2287 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4054-1-0-2-9cde805234.novalocal\" not found" Sep 4 17:57:34.567079 kubelet[2287]: E0904 17:57:34.567029 2287 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4054-1-0-2-9cde805234.novalocal\" not found" Sep 4 17:57:34.667654 kubelet[2287]: E0904 17:57:34.667585 2287 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4054-1-0-2-9cde805234.novalocal\" not found" Sep 4 17:57:34.768085 kubelet[2287]: E0904 17:57:34.767920 2287 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4054-1-0-2-9cde805234.novalocal\" not found" Sep 4 17:57:34.868841 kubelet[2287]: E0904 17:57:34.868734 2287 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4054-1-0-2-9cde805234.novalocal\" not found" Sep 4 17:57:34.969941 kubelet[2287]: E0904 17:57:34.969850 2287 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4054-1-0-2-9cde805234.novalocal\" not found" Sep 4 17:57:35.070275 kubelet[2287]: E0904 17:57:35.070124 2287 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4054-1-0-2-9cde805234.novalocal\" not found" Sep 4 
17:57:35.241537 kubelet[2287]: I0904 17:57:35.241431 2287 apiserver.go:52] "Watching apiserver" Sep 4 17:57:35.264413 kubelet[2287]: I0904 17:57:35.264306 2287 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Sep 4 17:57:35.458425 systemd[1]: Reloading requested from client PID 2560 ('systemctl') (unit session-11.scope)... Sep 4 17:57:35.458465 systemd[1]: Reloading... Sep 4 17:57:35.575303 zram_generator::config[2603]: No configuration found. Sep 4 17:57:35.716296 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:57:35.826735 systemd[1]: Reloading finished in 367 ms. Sep 4 17:57:35.871161 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:57:35.872320 kubelet[2287]: I0904 17:57:35.871621 2287 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 17:57:35.883689 systemd[1]: kubelet.service: Deactivated successfully. Sep 4 17:57:35.883905 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:57:35.883962 systemd[1]: kubelet.service: Consumed 1.412s CPU time, 108.9M memory peak, 0B memory swap peak. Sep 4 17:57:35.900200 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:57:36.085284 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:57:36.094713 (kubelet)[2661]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 17:57:36.722597 kubelet[2661]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:57:36.724291 kubelet[2661]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 4 17:57:36.724291 kubelet[2661]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:57:36.724291 kubelet[2661]: I0904 17:57:36.723147 2661 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 17:57:36.730203 kubelet[2661]: I0904 17:57:36.730115 2661 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Sep 4 17:57:36.730203 kubelet[2661]: I0904 17:57:36.730144 2661 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 17:57:36.730405 kubelet[2661]: I0904 17:57:36.730394 2661 server.go:927] "Client rotation is on, will bootstrap in background" Sep 4 17:57:36.732331 kubelet[2661]: I0904 17:57:36.732084 2661 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 4 17:57:36.734476 kubelet[2661]: I0904 17:57:36.733431 2661 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 17:57:36.746663 kubelet[2661]: I0904 17:57:36.746626 2661 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 4 17:57:36.747288 kubelet[2661]: I0904 17:57:36.747160 2661 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 17:57:36.747613 kubelet[2661]: I0904 17:57:36.747196 2661 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4054-1-0-2-9cde805234.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Sep 4 17:57:36.747923 kubelet[2661]: I0904 17:57:36.747768 2661 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 17:57:36.747923 kubelet[2661]: I0904 17:57:36.747787 2661 container_manager_linux.go:301] "Creating device plugin manager" Sep 4 17:57:36.747923 kubelet[2661]: I0904 17:57:36.747829 2661 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:57:36.748280 kubelet[2661]: I0904 17:57:36.748074 2661 kubelet.go:400] "Attempting to sync node with API server" Sep 4 17:57:36.748280 kubelet[2661]: I0904 17:57:36.748092 2661 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 17:57:36.748599 kubelet[2661]: I0904 17:57:36.748588 2661 kubelet.go:312] "Adding apiserver pod source" Sep 4 17:57:36.748716 kubelet[2661]: I0904 17:57:36.748704 2661 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 17:57:36.751809 kubelet[2661]: I0904 17:57:36.751730 2661 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.20" apiVersion="v1" Sep 4 17:57:36.754260 kubelet[2661]: I0904 17:57:36.754190 2661 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 4 17:57:36.756301 kubelet[2661]: I0904 17:57:36.756142 2661 server.go:1264] "Started kubelet" Sep 4 17:57:36.760969 kubelet[2661]: I0904 17:57:36.760949 2661 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 17:57:36.778281 kubelet[2661]: I0904 17:57:36.777075 2661 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 17:57:36.778281 kubelet[2661]: I0904 17:57:36.778087 2661 server.go:455] "Adding debug 
handlers to kubelet server" Sep 4 17:57:36.781910 kubelet[2661]: I0904 17:57:36.781855 2661 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 17:57:36.782502 kubelet[2661]: I0904 17:57:36.782487 2661 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 17:57:36.784048 kubelet[2661]: I0904 17:57:36.784026 2661 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 17:57:36.786338 kubelet[2661]: I0904 17:57:36.786323 2661 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 4 17:57:36.786420 kubelet[2661]: I0904 17:57:36.786411 2661 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 4 17:57:36.786486 kubelet[2661]: I0904 17:57:36.786478 2661 kubelet.go:2337] "Starting kubelet main sync loop" Sep 4 17:57:36.786594 kubelet[2661]: E0904 17:57:36.786563 2661 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 17:57:36.787164 kubelet[2661]: I0904 17:57:36.787140 2661 volume_manager.go:291] "Starting Kubelet Volume Manager" Sep 4 17:57:36.788726 kubelet[2661]: I0904 17:57:36.788710 2661 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Sep 4 17:57:36.789837 kubelet[2661]: I0904 17:57:36.789824 2661 reconciler.go:26] "Reconciler: start to sync state" Sep 4 17:57:36.805635 kubelet[2661]: I0904 17:57:36.805302 2661 factory.go:221] Registration of the systemd container factory successfully Sep 4 17:57:36.806341 kubelet[2661]: I0904 17:57:36.806298 2661 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 17:57:36.808330 kubelet[2661]: E0904 17:57:36.808300 2661 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 17:57:36.810219 kubelet[2661]: I0904 17:57:36.810177 2661 factory.go:221] Registration of the containerd container factory successfully Sep 4 17:57:36.873055 kubelet[2661]: I0904 17:57:36.872523 2661 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 4 17:57:36.873055 kubelet[2661]: I0904 17:57:36.872543 2661 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 4 17:57:36.873055 kubelet[2661]: I0904 17:57:36.872561 2661 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:57:36.873055 kubelet[2661]: I0904 17:57:36.872732 2661 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 4 17:57:36.873055 kubelet[2661]: I0904 17:57:36.872744 2661 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 4 17:57:36.873055 kubelet[2661]: I0904 17:57:36.872772 2661 policy_none.go:49] "None policy: Start" Sep 4 17:57:36.874451 kubelet[2661]: I0904 17:57:36.873460 2661 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 4 17:57:36.874451 kubelet[2661]: I0904 17:57:36.873487 2661 state_mem.go:35] "Initializing new in-memory state store" Sep 4 17:57:36.874529 kubelet[2661]: I0904 17:57:36.873660 2661 state_mem.go:75] "Updated machine memory state" Sep 4 17:57:36.882887 kubelet[2661]: I0904 17:57:36.882855 2661 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 17:57:36.883338 kubelet[2661]: I0904 17:57:36.883298 2661 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 4 17:57:36.883881 kubelet[2661]: I0904 17:57:36.883485 2661 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 17:57:36.887473 kubelet[2661]: I0904 17:57:36.887441 2661 topology_manager.go:215] "Topology Admit Handler" podUID="1c364d467e6fe544a8d0ca77c32769d8" podNamespace="kube-system" podName="kube-apiserver-ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:57:36.888844 kubelet[2661]: I0904 17:57:36.888779 2661 topology_manager.go:215] "Topology Admit Handler" podUID="985f8e43dccfce6e9241e01532efeb0c" podNamespace="kube-system" podName="kube-controller-manager-ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:57:36.888844 kubelet[2661]: I0904 17:57:36.888829 2661 topology_manager.go:215] "Topology Admit Handler" podUID="2cf4ff88d2a4eafaf4bef239330843be" podNamespace="kube-system" podName="kube-scheduler-ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:57:36.890140 kubelet[2661]: I0904 17:57:36.890097 2661 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/985f8e43dccfce6e9241e01532efeb0c-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4054-1-0-2-9cde805234.novalocal\" (UID: \"985f8e43dccfce6e9241e01532efeb0c\") " pod="kube-system/kube-controller-manager-ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:57:36.890203 kubelet[2661]: I0904 17:57:36.890145 2661 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1c364d467e6fe544a8d0ca77c32769d8-ca-certs\") pod \"kube-apiserver-ci-4054-1-0-2-9cde805234.novalocal\" (UID: \"1c364d467e6fe544a8d0ca77c32769d8\") " pod="kube-system/kube-apiserver-ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:57:36.890203 kubelet[2661]: I0904 17:57:36.890171 2661 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1c364d467e6fe544a8d0ca77c32769d8-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4054-1-0-2-9cde805234.novalocal\" (UID: \"1c364d467e6fe544a8d0ca77c32769d8\") " pod="kube-system/kube-apiserver-ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:57:36.890203 kubelet[2661]: I0904 17:57:36.890194 2661 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/985f8e43dccfce6e9241e01532efeb0c-flexvolume-dir\") pod \"kube-controller-manager-ci-4054-1-0-2-9cde805234.novalocal\" (UID: \"985f8e43dccfce6e9241e01532efeb0c\") " pod="kube-system/kube-controller-manager-ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:57:36.890319 kubelet[2661]: I0904 17:57:36.890215 2661 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/985f8e43dccfce6e9241e01532efeb0c-kubeconfig\") pod \"kube-controller-manager-ci-4054-1-0-2-9cde805234.novalocal\" (UID: \"985f8e43dccfce6e9241e01532efeb0c\") " pod="kube-system/kube-controller-manager-ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:57:36.890658 kubelet[2661]: I0904 17:57:36.890625 2661 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1c364d467e6fe544a8d0ca77c32769d8-k8s-certs\") pod \"kube-apiserver-ci-4054-1-0-2-9cde805234.novalocal\" (UID: \"1c364d467e6fe544a8d0ca77c32769d8\") " pod="kube-system/kube-apiserver-ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:57:36.890706 kubelet[2661]: I0904 17:57:36.890665 2661 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/985f8e43dccfce6e9241e01532efeb0c-ca-certs\") pod \"kube-controller-manager-ci-4054-1-0-2-9cde805234.novalocal\" (UID: \"985f8e43dccfce6e9241e01532efeb0c\") " pod="kube-system/kube-controller-manager-ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:57:36.890706 kubelet[2661]: I0904 17:57:36.890686 2661 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/985f8e43dccfce6e9241e01532efeb0c-k8s-certs\") pod \"kube-controller-manager-ci-4054-1-0-2-9cde805234.novalocal\" (UID: \"985f8e43dccfce6e9241e01532efeb0c\") " pod="kube-system/kube-controller-manager-ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:57:36.890762 kubelet[2661]: I0904 17:57:36.890711 2661 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2cf4ff88d2a4eafaf4bef239330843be-kubeconfig\") pod \"kube-scheduler-ci-4054-1-0-2-9cde805234.novalocal\" (UID: \"2cf4ff88d2a4eafaf4bef239330843be\") " pod="kube-system/kube-scheduler-ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:57:36.902733 kubelet[2661]: I0904 17:57:36.901486 2661 kubelet_node_status.go:73] "Attempting to register node" node="ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:57:36.910372 kubelet[2661]: W0904 17:57:36.910308 2661 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 4 17:57:36.911164 kubelet[2661]: W0904 17:57:36.910931 2661 warnings.go:70] metadata.name: this is used in the Pod's hostname, 
which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 4 17:57:36.916008 kubelet[2661]: W0904 17:57:36.915961 2661 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 4 17:57:36.924284 kubelet[2661]: I0904 17:57:36.921982 2661 kubelet_node_status.go:112] "Node was previously registered" node="ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:57:36.924284 kubelet[2661]: I0904 17:57:36.922659 2661 kubelet_node_status.go:76] "Successfully registered node" node="ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:57:37.752104 kubelet[2661]: I0904 17:57:37.751547 2661 apiserver.go:52] "Watching apiserver" Sep 4 17:57:37.790328 kubelet[2661]: I0904 17:57:37.790286 2661 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Sep 4 17:57:37.852162 kubelet[2661]: W0904 17:57:37.851160 2661 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 4 17:57:37.852162 kubelet[2661]: E0904 17:57:37.851270 2661 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4054-1-0-2-9cde805234.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:57:37.886893 kubelet[2661]: I0904 17:57:37.886694 2661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4054-1-0-2-9cde805234.novalocal" podStartSLOduration=1.886654118 podStartE2EDuration="1.886654118s" podCreationTimestamp="2024-09-04 17:57:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:57:37.871369673 +0000 UTC m=+1.253279940" watchObservedRunningTime="2024-09-04 17:57:37.886654118 +0000 UTC m=+1.268564385" Sep 4 17:57:37.907280 kubelet[2661]: I0904 17:57:37.906606 2661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4054-1-0-2-9cde805234.novalocal" podStartSLOduration=1.906581378 podStartE2EDuration="1.906581378s" podCreationTimestamp="2024-09-04 17:57:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:57:37.887847638 +0000 UTC m=+1.269757915" watchObservedRunningTime="2024-09-04 17:57:37.906581378 +0000 UTC m=+1.288491645" Sep 4 17:57:37.922109 kubelet[2661]: I0904 17:57:37.922054 2661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4054-1-0-2-9cde805234.novalocal" podStartSLOduration=1.9220343450000001 podStartE2EDuration="1.922034345s" podCreationTimestamp="2024-09-04 17:57:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:57:37.90755678 +0000 UTC m=+1.289467057" watchObservedRunningTime="2024-09-04 17:57:37.922034345 +0000 UTC m=+1.303944612" Sep 4 17:57:42.568129 sudo[1711]: pam_unix(sudo:session): session closed for user root Sep 4 17:57:42.727710 sshd[1708]: pam_unix(sshd:session): session closed for user core Sep 4 17:57:42.736409 systemd-logind[1432]: Session 11 logged out. Waiting for processes to exit. Sep 4 17:57:42.737493 systemd[1]: sshd@8-172.24.4.18:22-172.24.4.1:52898.service: Deactivated successfully. 
Sep 4 17:57:42.742187 systemd[1]: session-11.scope: Deactivated successfully. Sep 4 17:57:42.742731 systemd[1]: session-11.scope: Consumed 7.748s CPU time, 139.4M memory peak, 0B memory swap peak. Sep 4 17:57:42.748430 systemd-logind[1432]: Removed session 11. Sep 4 17:57:50.743574 kubelet[2661]: I0904 17:57:50.743536 2661 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 4 17:57:50.744251 containerd[1447]: time="2024-09-04T17:57:50.744174724Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 4 17:57:50.744844 kubelet[2661]: I0904 17:57:50.744451 2661 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 4 17:57:51.187361 kubelet[2661]: I0904 17:57:51.186535 2661 topology_manager.go:215] "Topology Admit Handler" podUID="7267d6e8-c749-4006-b0cb-16689ed3bf3a" podNamespace="kube-system" podName="kube-proxy-hhk6p" Sep 4 17:57:51.207843 systemd[1]: Created slice kubepods-besteffort-pod7267d6e8_c749_4006_b0cb_16689ed3bf3a.slice - libcontainer container kubepods-besteffort-pod7267d6e8_c749_4006_b0cb_16689ed3bf3a.slice. Sep 4 17:57:51.288688 kubelet[2661]: I0904 17:57:51.288612 2661 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7267d6e8-c749-4006-b0cb-16689ed3bf3a-xtables-lock\") pod \"kube-proxy-hhk6p\" (UID: \"7267d6e8-c749-4006-b0cb-16689ed3bf3a\") " pod="kube-system/kube-proxy-hhk6p" Sep 4 17:57:51.288688 kubelet[2661]: I0904 17:57:51.288650 2661 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7267d6e8-c749-4006-b0cb-16689ed3bf3a-lib-modules\") pod \"kube-proxy-hhk6p\" (UID: \"7267d6e8-c749-4006-b0cb-16689ed3bf3a\") " pod="kube-system/kube-proxy-hhk6p" Sep 4 17:57:51.288688 kubelet[2661]: I0904 17:57:51.288674 2661 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7267d6e8-c749-4006-b0cb-16689ed3bf3a-kube-proxy\") pod \"kube-proxy-hhk6p\" (UID: \"7267d6e8-c749-4006-b0cb-16689ed3bf3a\") " pod="kube-system/kube-proxy-hhk6p" Sep 4 17:57:51.288688 kubelet[2661]: I0904 17:57:51.288693 2661 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7rt2\" (UniqueName: \"kubernetes.io/projected/7267d6e8-c749-4006-b0cb-16689ed3bf3a-kube-api-access-f7rt2\") pod \"kube-proxy-hhk6p\" (UID: \"7267d6e8-c749-4006-b0cb-16689ed3bf3a\") " pod="kube-system/kube-proxy-hhk6p" Sep 4 17:57:51.523587 containerd[1447]: time="2024-09-04T17:57:51.523130309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hhk6p,Uid:7267d6e8-c749-4006-b0cb-16689ed3bf3a,Namespace:kube-system,Attempt:0,}" Sep 4 17:57:51.595841 containerd[1447]: time="2024-09-04T17:57:51.595416424Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:57:51.595841 containerd[1447]: time="2024-09-04T17:57:51.595532007Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:57:51.595841 containerd[1447]: time="2024-09-04T17:57:51.595570999Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:57:51.595841 containerd[1447]: time="2024-09-04T17:57:51.595699830Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:57:51.640450 systemd[1]: Started cri-containerd-08dfd107e220a966f93a81edf064bb6a8c36ccbb115ceb3fac5cff08d2ca95f8.scope - libcontainer container 08dfd107e220a966f93a81edf064bb6a8c36ccbb115ceb3fac5cff08d2ca95f8. Sep 4 17:57:51.673489 containerd[1447]: time="2024-09-04T17:57:51.673444665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hhk6p,Uid:7267d6e8-c749-4006-b0cb-16689ed3bf3a,Namespace:kube-system,Attempt:0,} returns sandbox id \"08dfd107e220a966f93a81edf064bb6a8c36ccbb115ceb3fac5cff08d2ca95f8\"" Sep 4 17:57:51.678140 containerd[1447]: time="2024-09-04T17:57:51.678103584Z" level=info msg="CreateContainer within sandbox \"08dfd107e220a966f93a81edf064bb6a8c36ccbb115ceb3fac5cff08d2ca95f8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 4 17:57:51.715904 containerd[1447]: time="2024-09-04T17:57:51.715855254Z" level=info msg="CreateContainer within sandbox \"08dfd107e220a966f93a81edf064bb6a8c36ccbb115ceb3fac5cff08d2ca95f8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2e599b9957aa24dcf4d93fc050951d7c5d6f63f4ccd635a3eeb8e2326d1781eb\"" Sep 4 17:57:51.717066 containerd[1447]: time="2024-09-04T17:57:51.717003043Z" level=info msg="StartContainer for \"2e599b9957aa24dcf4d93fc050951d7c5d6f63f4ccd635a3eeb8e2326d1781eb\"" Sep 4 17:57:51.746381 systemd[1]: Started cri-containerd-2e599b9957aa24dcf4d93fc050951d7c5d6f63f4ccd635a3eeb8e2326d1781eb.scope - libcontainer container 2e599b9957aa24dcf4d93fc050951d7c5d6f63f4ccd635a3eeb8e2326d1781eb. Sep 4 17:57:51.809948 containerd[1447]: time="2024-09-04T17:57:51.809876626Z" level=info msg="StartContainer for \"2e599b9957aa24dcf4d93fc050951d7c5d6f63f4ccd635a3eeb8e2326d1781eb\" returns successfully" Sep 4 17:57:51.823714 kubelet[2661]: I0904 17:57:51.822153 2661 topology_manager.go:215] "Topology Admit Handler" podUID="00ff9d46-bd0d-49e4-aeb7-e9c2700e88fc" podNamespace="tigera-operator" podName="tigera-operator-77f994b5bb-2bxmw" Sep 4 17:57:51.832765 systemd[1]: Created slice kubepods-besteffort-pod00ff9d46_bd0d_49e4_aeb7_e9c2700e88fc.slice - libcontainer container kubepods-besteffort-pod00ff9d46_bd0d_49e4_aeb7_e9c2700e88fc.slice. 
Sep 4 17:57:51.892710 kubelet[2661]: I0904 17:57:51.892085 2661 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/00ff9d46-bd0d-49e4-aeb7-e9c2700e88fc-var-lib-calico\") pod \"tigera-operator-77f994b5bb-2bxmw\" (UID: \"00ff9d46-bd0d-49e4-aeb7-e9c2700e88fc\") " pod="tigera-operator/tigera-operator-77f994b5bb-2bxmw" Sep 4 17:57:51.892710 kubelet[2661]: I0904 17:57:51.892155 2661 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zdzx\" (UniqueName: \"kubernetes.io/projected/00ff9d46-bd0d-49e4-aeb7-e9c2700e88fc-kube-api-access-2zdzx\") pod \"tigera-operator-77f994b5bb-2bxmw\" (UID: \"00ff9d46-bd0d-49e4-aeb7-e9c2700e88fc\") " pod="tigera-operator/tigera-operator-77f994b5bb-2bxmw" Sep 4 17:57:51.899717 kubelet[2661]: I0904 17:57:51.899665 2661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hhk6p" podStartSLOduration=0.899645843 podStartE2EDuration="899.645843ms" podCreationTimestamp="2024-09-04 17:57:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:57:51.898533237 +0000 UTC m=+15.280443514" watchObservedRunningTime="2024-09-04 17:57:51.899645843 +0000 UTC m=+15.281556110" Sep 4 17:57:52.139427 containerd[1447]: time="2024-09-04T17:57:52.139236516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-77f994b5bb-2bxmw,Uid:00ff9d46-bd0d-49e4-aeb7-e9c2700e88fc,Namespace:tigera-operator,Attempt:0,}" Sep 4 17:57:52.206610 containerd[1447]: time="2024-09-04T17:57:52.200064131Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:57:52.206610 containerd[1447]: time="2024-09-04T17:57:52.200207100Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:57:52.206610 containerd[1447]: time="2024-09-04T17:57:52.200319735Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:57:52.206610 containerd[1447]: time="2024-09-04T17:57:52.200886258Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:57:52.241447 systemd[1]: Started cri-containerd-2a33631714d15f704a6a39258aed44a4b1b6d3188079b5485ee32f8f91c412c0.scope - libcontainer container 2a33631714d15f704a6a39258aed44a4b1b6d3188079b5485ee32f8f91c412c0. Sep 4 17:57:52.287655 containerd[1447]: time="2024-09-04T17:57:52.287597545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-77f994b5bb-2bxmw,Uid:00ff9d46-bd0d-49e4-aeb7-e9c2700e88fc,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"2a33631714d15f704a6a39258aed44a4b1b6d3188079b5485ee32f8f91c412c0\"" Sep 4 17:57:52.302452 containerd[1447]: time="2024-09-04T17:57:52.302220529Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\"" Sep 4 17:57:52.429119 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1286735338.mount: Deactivated successfully. Sep 4 17:57:53.886457 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1783538105.mount: Deactivated successfully. 
Sep 4 17:57:54.642591 containerd[1447]: time="2024-09-04T17:57:54.642056522Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:57:54.645567 containerd[1447]: time="2024-09-04T17:57:54.644621906Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.3: active requests=0, bytes read=22136517" Sep 4 17:57:54.647558 containerd[1447]: time="2024-09-04T17:57:54.646795931Z" level=info msg="ImageCreate event name:\"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:57:54.648106 containerd[1447]: time="2024-09-04T17:57:54.648064589Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:57:54.648977 containerd[1447]: time="2024-09-04T17:57:54.648933549Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.3\" with image id \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\", repo tag \"quay.io/tigera/operator:v1.34.3\", repo digest \"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\", size \"22130728\" in 2.346650971s" Sep 4 17:57:54.648977 containerd[1447]: time="2024-09-04T17:57:54.648968531Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\" returns image reference \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\"" Sep 4 17:57:54.657043 containerd[1447]: time="2024-09-04T17:57:54.656804564Z" level=info msg="CreateContainer within sandbox \"2a33631714d15f704a6a39258aed44a4b1b6d3188079b5485ee32f8f91c412c0\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 4 17:57:54.675085 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3336998697.mount: Deactivated successfully. Sep 4 17:57:54.680170 containerd[1447]: time="2024-09-04T17:57:54.680132962Z" level=info msg="CreateContainer within sandbox \"2a33631714d15f704a6a39258aed44a4b1b6d3188079b5485ee32f8f91c412c0\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"5a488a919e1e1ed37d14845a38e3c06a75cc83b3d1d7168d11c7f39200fb5a17\"" Sep 4 17:57:54.682061 containerd[1447]: time="2024-09-04T17:57:54.682032206Z" level=info msg="StartContainer for \"5a488a919e1e1ed37d14845a38e3c06a75cc83b3d1d7168d11c7f39200fb5a17\"" Sep 4 17:57:54.718418 systemd[1]: Started cri-containerd-5a488a919e1e1ed37d14845a38e3c06a75cc83b3d1d7168d11c7f39200fb5a17.scope - libcontainer container 5a488a919e1e1ed37d14845a38e3c06a75cc83b3d1d7168d11c7f39200fb5a17. 
Sep 4 17:57:54.748990 containerd[1447]: time="2024-09-04T17:57:54.748917728Z" level=info msg="StartContainer for \"5a488a919e1e1ed37d14845a38e3c06a75cc83b3d1d7168d11c7f39200fb5a17\" returns successfully" Sep 4 17:57:54.904754 kubelet[2661]: I0904 17:57:54.904166 2661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-77f994b5bb-2bxmw" podStartSLOduration=1.5430347370000002 podStartE2EDuration="3.904134679s" podCreationTimestamp="2024-09-04 17:57:51 +0000 UTC" firstStartedPulling="2024-09-04 17:57:52.293327157 +0000 UTC m=+15.675237434" lastFinishedPulling="2024-09-04 17:57:54.654427099 +0000 UTC m=+18.036337376" observedRunningTime="2024-09-04 17:57:54.903196094 +0000 UTC m=+18.285106411" watchObservedRunningTime="2024-09-04 17:57:54.904134679 +0000 UTC m=+18.286044996" Sep 4 17:57:58.141083 kubelet[2661]: I0904 17:57:58.140728 2661 topology_manager.go:215] "Topology Admit Handler" podUID="89b838eb-72de-4ab5-8650-03532c0d9c36" podNamespace="calico-system" podName="calico-typha-7d87d885b6-v6c6z" Sep 4 17:57:58.164118 systemd[1]: Created slice kubepods-besteffort-pod89b838eb_72de_4ab5_8650_03532c0d9c36.slice - libcontainer container kubepods-besteffort-pod89b838eb_72de_4ab5_8650_03532c0d9c36.slice. Sep 4 17:57:58.233191 kubelet[2661]: I0904 17:57:58.233111 2661 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/89b838eb-72de-4ab5-8650-03532c0d9c36-typha-certs\") pod \"calico-typha-7d87d885b6-v6c6z\" (UID: \"89b838eb-72de-4ab5-8650-03532c0d9c36\") " pod="calico-system/calico-typha-7d87d885b6-v6c6z" Sep 4 17:57:58.233191 kubelet[2661]: I0904 17:57:58.233174 2661 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/89b838eb-72de-4ab5-8650-03532c0d9c36-tigera-ca-bundle\") pod \"calico-typha-7d87d885b6-v6c6z\" (UID: \"89b838eb-72de-4ab5-8650-03532c0d9c36\") " pod="calico-system/calico-typha-7d87d885b6-v6c6z" Sep 4 17:57:58.233191 kubelet[2661]: I0904 17:57:58.233200 2661 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpglr\" (UniqueName: \"kubernetes.io/projected/89b838eb-72de-4ab5-8650-03532c0d9c36-kube-api-access-wpglr\") pod \"calico-typha-7d87d885b6-v6c6z\" (UID: \"89b838eb-72de-4ab5-8650-03532c0d9c36\") " pod="calico-system/calico-typha-7d87d885b6-v6c6z" Sep 4 17:57:58.248060 kubelet[2661]: I0904 17:57:58.247840 2661 topology_manager.go:215] "Topology Admit Handler" podUID="7c554d1d-3957-4b9e-a9ae-d38270526fdb" podNamespace="calico-system" podName="calico-node-mwzgk" Sep 4 17:57:58.259679 systemd[1]: Created slice kubepods-besteffort-pod7c554d1d_3957_4b9e_a9ae_d38270526fdb.slice - libcontainer container kubepods-besteffort-pod7c554d1d_3957_4b9e_a9ae_d38270526fdb.slice. 
Sep 4 17:57:58.335757 kubelet[2661]: I0904 17:57:58.333420 2661 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c554d1d-3957-4b9e-a9ae-d38270526fdb-tigera-ca-bundle\") pod \"calico-node-mwzgk\" (UID: \"7c554d1d-3957-4b9e-a9ae-d38270526fdb\") " pod="calico-system/calico-node-mwzgk" Sep 4 17:57:58.335757 kubelet[2661]: I0904 17:57:58.333492 2661 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7c554d1d-3957-4b9e-a9ae-d38270526fdb-var-lib-calico\") pod \"calico-node-mwzgk\" (UID: \"7c554d1d-3957-4b9e-a9ae-d38270526fdb\") " pod="calico-system/calico-node-mwzgk" Sep 4 17:57:58.335757 kubelet[2661]: I0904 17:57:58.333522 2661 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7c554d1d-3957-4b9e-a9ae-d38270526fdb-lib-modules\") pod \"calico-node-mwzgk\" (UID: \"7c554d1d-3957-4b9e-a9ae-d38270526fdb\") " pod="calico-system/calico-node-mwzgk" Sep 4 17:57:58.335757 kubelet[2661]: I0904 17:57:58.333546 2661 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/7c554d1d-3957-4b9e-a9ae-d38270526fdb-cni-bin-dir\") pod \"calico-node-mwzgk\" (UID: \"7c554d1d-3957-4b9e-a9ae-d38270526fdb\") " pod="calico-system/calico-node-mwzgk" Sep 4 17:57:58.335757 kubelet[2661]: I0904 17:57:58.333581 2661 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/7c554d1d-3957-4b9e-a9ae-d38270526fdb-policysync\") pod \"calico-node-mwzgk\" (UID: \"7c554d1d-3957-4b9e-a9ae-d38270526fdb\") " pod="calico-system/calico-node-mwzgk" Sep 4 17:57:58.336010 kubelet[2661]: I0904 17:57:58.333603 2661 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/7c554d1d-3957-4b9e-a9ae-d38270526fdb-flexvol-driver-host\") pod \"calico-node-mwzgk\" (UID: \"7c554d1d-3957-4b9e-a9ae-d38270526fdb\") " pod="calico-system/calico-node-mwzgk" Sep 4 17:57:58.336010 kubelet[2661]: I0904 17:57:58.333627 2661 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/7c554d1d-3957-4b9e-a9ae-d38270526fdb-var-run-calico\") pod \"calico-node-mwzgk\" (UID: \"7c554d1d-3957-4b9e-a9ae-d38270526fdb\") " pod="calico-system/calico-node-mwzgk" Sep 4 17:57:58.336010 kubelet[2661]: I0904 17:57:58.333671 2661 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/7c554d1d-3957-4b9e-a9ae-d38270526fdb-cni-log-dir\") pod \"calico-node-mwzgk\" (UID: \"7c554d1d-3957-4b9e-a9ae-d38270526fdb\") " pod="calico-system/calico-node-mwzgk" Sep 4 17:57:58.336010 kubelet[2661]: I0904 17:57:58.333693 2661 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/7c554d1d-3957-4b9e-a9ae-d38270526fdb-cni-net-dir\") pod \"calico-node-mwzgk\" (UID: \"7c554d1d-3957-4b9e-a9ae-d38270526fdb\") " pod="calico-system/calico-node-mwzgk" Sep 4 17:57:58.336010 kubelet[2661]: I0904 17:57:58.333737 2661 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7c554d1d-3957-4b9e-a9ae-d38270526fdb-xtables-lock\") pod \"calico-node-mwzgk\" (UID: \"7c554d1d-3957-4b9e-a9ae-d38270526fdb\") " pod="calico-system/calico-node-mwzgk" Sep 4 17:57:58.336157 kubelet[2661]: I0904 17:57:58.333770 2661 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/7c554d1d-3957-4b9e-a9ae-d38270526fdb-node-certs\") pod \"calico-node-mwzgk\" (UID: \"7c554d1d-3957-4b9e-a9ae-d38270526fdb\") " pod="calico-system/calico-node-mwzgk" Sep 4 17:57:58.336157 kubelet[2661]: I0904 17:57:58.333810 2661 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kctss\" (UniqueName: \"kubernetes.io/projected/7c554d1d-3957-4b9e-a9ae-d38270526fdb-kube-api-access-kctss\") pod \"calico-node-mwzgk\" (UID: \"7c554d1d-3957-4b9e-a9ae-d38270526fdb\") " pod="calico-system/calico-node-mwzgk" Sep 4 17:57:58.384633 kubelet[2661]: I0904 17:57:58.384595 2661 topology_manager.go:215] "Topology Admit Handler" podUID="f0db1dfa-f33e-43bf-98b0-16a182e9f9f9" podNamespace="calico-system" podName="csi-node-driver-287hk" Sep 4 17:57:58.386288 kubelet[2661]: E0904 17:57:58.385666 2661 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-287hk" podUID="f0db1dfa-f33e-43bf-98b0-16a182e9f9f9" Sep 4 17:57:58.434745 kubelet[2661]: I0904 17:57:58.434609 2661 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f0db1dfa-f33e-43bf-98b0-16a182e9f9f9-socket-dir\") pod \"csi-node-driver-287hk\" (UID: \"f0db1dfa-f33e-43bf-98b0-16a182e9f9f9\") " pod="calico-system/csi-node-driver-287hk" Sep 4 17:57:58.434745 kubelet[2661]: I0904 17:57:58.434668 2661 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f0db1dfa-f33e-43bf-98b0-16a182e9f9f9-registration-dir\") pod \"csi-node-driver-287hk\" (UID: \"f0db1dfa-f33e-43bf-98b0-16a182e9f9f9\") " pod="calico-system/csi-node-driver-287hk" Sep 4 17:57:58.434745 kubelet[2661]: I0904 17:57:58.434725 2661 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/f0db1dfa-f33e-43bf-98b0-16a182e9f9f9-varrun\") pod \"csi-node-driver-287hk\" (UID: \"f0db1dfa-f33e-43bf-98b0-16a182e9f9f9\") " pod="calico-system/csi-node-driver-287hk" Sep 4 17:57:58.434995 kubelet[2661]: I0904 17:57:58.434818 2661 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f0db1dfa-f33e-43bf-98b0-16a182e9f9f9-kubelet-dir\") pod \"csi-node-driver-287hk\" (UID: \"f0db1dfa-f33e-43bf-98b0-16a182e9f9f9\") " pod="calico-system/csi-node-driver-287hk" Sep 4 17:57:58.434995 kubelet[2661]: I0904 17:57:58.434859 2661 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s54v8\" (UniqueName: \"kubernetes.io/projected/f0db1dfa-f33e-43bf-98b0-16a182e9f9f9-kube-api-access-s54v8\") pod \"csi-node-driver-287hk\" (UID: 
\"f0db1dfa-f33e-43bf-98b0-16a182e9f9f9\") " pod="calico-system/csi-node-driver-287hk" Sep 4 17:57:58.439006 kubelet[2661]: E0904 17:57:58.438960 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:57:58.439006 kubelet[2661]: W0904 17:57:58.438991 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:57:58.439161 kubelet[2661]: E0904 17:57:58.439060 2661 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:57:58.439368 kubelet[2661]: E0904 17:57:58.439346 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:57:58.439368 kubelet[2661]: W0904 17:57:58.439364 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:57:58.439441 kubelet[2661]: E0904 17:57:58.439377 2661 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:57:58.439614 kubelet[2661]: E0904 17:57:58.439578 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:57:58.439614 kubelet[2661]: W0904 17:57:58.439596 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:57:58.439614 kubelet[2661]: E0904 17:57:58.439607 2661 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:57:58.439916 kubelet[2661]: E0904 17:57:58.439889 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:57:58.439916 kubelet[2661]: W0904 17:57:58.439905 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:57:58.439996 kubelet[2661]: E0904 17:57:58.439921 2661 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:57:58.440218 kubelet[2661]: E0904 17:57:58.440189 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:57:58.440218 kubelet[2661]: W0904 17:57:58.440205 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:57:58.440557 kubelet[2661]: E0904 17:57:58.440533 2661 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:57:58.440779 kubelet[2661]: E0904 17:57:58.440760 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:57:58.440779 kubelet[2661]: W0904 17:57:58.440774 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:57:58.440864 kubelet[2661]: E0904 17:57:58.440799 2661 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:57:58.441006 kubelet[2661]: E0904 17:57:58.440987 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:57:58.441006 kubelet[2661]: W0904 17:57:58.441000 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:57:58.441181 kubelet[2661]: E0904 17:57:58.441144 2661 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:57:58.441379 kubelet[2661]: E0904 17:57:58.441357 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:57:58.441379 kubelet[2661]: W0904 17:57:58.441374 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:57:58.441504 kubelet[2661]: E0904 17:57:58.441392 2661 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:57:58.441940 kubelet[2661]: E0904 17:57:58.441917 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:57:58.441940 kubelet[2661]: W0904 17:57:58.441933 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:57:58.442103 kubelet[2661]: E0904 17:57:58.441958 2661 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:57:58.442222 kubelet[2661]: E0904 17:57:58.442179 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:57:58.442222 kubelet[2661]: W0904 17:57:58.442190 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:57:58.442222 kubelet[2661]: E0904 17:57:58.442199 2661 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:57:58.442665 kubelet[2661]: E0904 17:57:58.442515 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:57:58.442665 kubelet[2661]: W0904 17:57:58.442525 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:57:58.442665 kubelet[2661]: E0904 17:57:58.442538 2661 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:57:58.452442 kubelet[2661]: E0904 17:57:58.452361 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:57:58.452442 kubelet[2661]: W0904 17:57:58.452384 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:57:58.452442 kubelet[2661]: E0904 17:57:58.452403 2661 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:57:58.475882 containerd[1447]: time="2024-09-04T17:57:58.475703918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7d87d885b6-v6c6z,Uid:89b838eb-72de-4ab5-8650-03532c0d9c36,Namespace:calico-system,Attempt:0,}" Sep 4 17:57:58.485893 kubelet[2661]: E0904 17:57:58.483766 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:57:58.485893 kubelet[2661]: W0904 17:57:58.483832 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:57:58.485893 kubelet[2661]: E0904 17:57:58.483855 2661 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:57:58.531842 containerd[1447]: time="2024-09-04T17:57:58.531637607Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:57:58.531842 containerd[1447]: time="2024-09-04T17:57:58.531706798Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:57:58.531842 containerd[1447]: time="2024-09-04T17:57:58.531721589Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:57:58.532723 containerd[1447]: time="2024-09-04T17:57:58.532493475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:57:58.536810 kubelet[2661]: E0904 17:57:58.536777 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:57:58.536810 kubelet[2661]: W0904 17:57:58.536805 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:57:58.536810 kubelet[2661]: E0904 17:57:58.536826 2661 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:57:58.538141 kubelet[2661]: E0904 17:57:58.537925 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:57:58.538141 kubelet[2661]: W0904 17:57:58.537941 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:57:58.538141 kubelet[2661]: E0904 17:57:58.537986 2661 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:57:58.538953 kubelet[2661]: E0904 17:57:58.538914 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:57:58.538953 kubelet[2661]: W0904 17:57:58.538930 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:57:58.538953 kubelet[2661]: E0904 17:57:58.538948 2661 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:57:58.539914 kubelet[2661]: E0904 17:57:58.539892 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:57:58.539914 kubelet[2661]: W0904 17:57:58.539908 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:57:58.540306 kubelet[2661]: E0904 17:57:58.540018 2661 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:57:58.542628 kubelet[2661]: E0904 17:57:58.542473 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:57:58.542628 kubelet[2661]: W0904 17:57:58.542493 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:57:58.543025 kubelet[2661]: E0904 17:57:58.542776 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:57:58.543025 kubelet[2661]: W0904 17:57:58.542786 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:57:58.543025 kubelet[2661]: E0904 17:57:58.542876 2661 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:57:58.543025 kubelet[2661]: E0904 17:57:58.542914 2661 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:57:58.543776 kubelet[2661]: E0904 17:57:58.543581 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:57:58.543776 kubelet[2661]: W0904 17:57:58.543596 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:57:58.543776 kubelet[2661]: E0904 17:57:58.543676 2661 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:57:58.544119 kubelet[2661]: E0904 17:57:58.543872 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:57:58.544119 kubelet[2661]: W0904 17:57:58.543881 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:57:58.544119 kubelet[2661]: E0904 17:57:58.543914 2661 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:57:58.544808 kubelet[2661]: E0904 17:57:58.544458 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:57:58.544808 kubelet[2661]: W0904 17:57:58.544469 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:57:58.544808 kubelet[2661]: E0904 17:57:58.544635 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:57:58.544808 kubelet[2661]: W0904 17:57:58.544643 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:57:58.545092 kubelet[2661]: E0904 17:57:58.544959 2661 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:57:58.545092 kubelet[2661]: E0904 17:57:58.544984 2661 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:57:58.546892 kubelet[2661]: E0904 17:57:58.546084 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:57:58.546892 kubelet[2661]: W0904 17:57:58.546099 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:57:58.546892 kubelet[2661]: E0904 17:57:58.546847 2661 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:57:58.547642 kubelet[2661]: E0904 17:57:58.547387 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:57:58.547642 kubelet[2661]: W0904 17:57:58.547401 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:57:58.547642 kubelet[2661]: E0904 17:57:58.547477 2661 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:57:58.547642 kubelet[2661]: E0904 17:57:58.547589 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:57:58.547642 kubelet[2661]: W0904 17:57:58.547598 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:57:58.547802 kubelet[2661]: E0904 17:57:58.547776 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:57:58.547802 kubelet[2661]: W0904 17:57:58.547785 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:57:58.548347 kubelet[2661]: E0904 17:57:58.548120 2661 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:57:58.548347 kubelet[2661]: E0904 17:57:58.548161 2661 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:57:58.548347 kubelet[2661]: E0904 17:57:58.548200 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:57:58.548347 kubelet[2661]: W0904 17:57:58.548210 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:57:58.548472 kubelet[2661]: E0904 17:57:58.548416 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:57:58.548472 kubelet[2661]: W0904 17:57:58.548425 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:57:58.549588 kubelet[2661]: E0904 17:57:58.549272 2661 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:57:58.549681 kubelet[2661]: E0904 17:57:58.549656 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:57:58.549681 kubelet[2661]: W0904 17:57:58.549675 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:57:58.549901 kubelet[2661]: E0904 17:57:58.549695 2661 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:57:58.549901 kubelet[2661]: E0904 17:57:58.549769 2661 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:57:58.550077 kubelet[2661]: E0904 17:57:58.549952 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:57:58.550077 kubelet[2661]: W0904 17:57:58.549962 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:57:58.550077 kubelet[2661]: E0904 17:57:58.549971 2661 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:57:58.550519 kubelet[2661]: E0904 17:57:58.550168 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:57:58.550519 kubelet[2661]: W0904 17:57:58.550182 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:57:58.550519 kubelet[2661]: E0904 17:57:58.550192 2661 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:57:58.552118 kubelet[2661]: E0904 17:57:58.551973 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:57:58.552118 kubelet[2661]: W0904 17:57:58.551990 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:57:58.552118 kubelet[2661]: E0904 17:57:58.552011 2661 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:57:58.552696 kubelet[2661]: E0904 17:57:58.552557 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:57:58.552696 kubelet[2661]: W0904 17:57:58.552569 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:57:58.552696 kubelet[2661]: E0904 17:57:58.552586 2661 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:57:58.553999 kubelet[2661]: E0904 17:57:58.553203 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:57:58.553999 kubelet[2661]: W0904 17:57:58.553215 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:57:58.553999 kubelet[2661]: E0904 17:57:58.553888 2661 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:57:58.553999 kubelet[2661]: E0904 17:57:58.553970 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:57:58.553999 kubelet[2661]: W0904 17:57:58.553981 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:57:58.554298 kubelet[2661]: E0904 17:57:58.554282 2661 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:57:58.557342 kubelet[2661]: E0904 17:57:58.557128 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:57:58.557342 kubelet[2661]: W0904 17:57:58.557150 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:57:58.557342 kubelet[2661]: E0904 17:57:58.557175 2661 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:57:58.557869 kubelet[2661]: E0904 17:57:58.557588 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:57:58.557869 kubelet[2661]: W0904 17:57:58.557599 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:57:58.557869 kubelet[2661]: E0904 17:57:58.557610 2661 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:57:58.566648 containerd[1447]: time="2024-09-04T17:57:58.566583273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mwzgk,Uid:7c554d1d-3957-4b9e-a9ae-d38270526fdb,Namespace:calico-system,Attempt:0,}" Sep 4 17:57:58.574866 systemd[1]: Started cri-containerd-ab66845f87cd6223957e0e8653b2dc6381c545a61a21ed7e44ad0592a2d96f1f.scope - libcontainer container ab66845f87cd6223957e0e8653b2dc6381c545a61a21ed7e44ad0592a2d96f1f. Sep 4 17:57:58.587583 kubelet[2661]: E0904 17:57:58.587490 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:57:58.587583 kubelet[2661]: W0904 17:57:58.587508 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:57:58.587583 kubelet[2661]: E0904 17:57:58.587535 2661 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:57:58.623864 containerd[1447]: time="2024-09-04T17:57:58.623696711Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:57:58.624068 containerd[1447]: time="2024-09-04T17:57:58.624008899Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:57:58.624314 containerd[1447]: time="2024-09-04T17:57:58.624160508Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:57:58.625283 containerd[1447]: time="2024-09-04T17:57:58.624469820Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:57:58.662448 systemd[1]: Started cri-containerd-e99f4c1271225ce2509c1c5b1f28ba4a2d302fd48ea851171fd8d4cfe46c7b31.scope - libcontainer container e99f4c1271225ce2509c1c5b1f28ba4a2d302fd48ea851171fd8d4cfe46c7b31. Sep 4 17:57:58.686871 containerd[1447]: time="2024-09-04T17:57:58.686357157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7d87d885b6-v6c6z,Uid:89b838eb-72de-4ab5-8650-03532c0d9c36,Namespace:calico-system,Attempt:0,} returns sandbox id \"ab66845f87cd6223957e0e8653b2dc6381c545a61a21ed7e44ad0592a2d96f1f\"" Sep 4 17:57:58.700564 containerd[1447]: time="2024-09-04T17:57:58.700189526Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\"" Sep 4 17:57:58.721041 containerd[1447]: time="2024-09-04T17:57:58.720808063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mwzgk,Uid:7c554d1d-3957-4b9e-a9ae-d38270526fdb,Namespace:calico-system,Attempt:0,} returns sandbox id \"e99f4c1271225ce2509c1c5b1f28ba4a2d302fd48ea851171fd8d4cfe46c7b31\"" Sep 4 17:57:59.786967 kubelet[2661]: E0904 17:57:59.786903 2661 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-287hk" podUID="f0db1dfa-f33e-43bf-98b0-16a182e9f9f9" Sep 4 17:58:01.742113 containerd[1447]: time="2024-09-04T17:58:01.742021893Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:58:01.743944 containerd[1447]: time="2024-09-04T17:58:01.743862008Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.1: active requests=0, bytes read=29471335" Sep 4 17:58:01.745581 containerd[1447]: time="2024-09-04T17:58:01.745525657Z" level=info msg="ImageCreate event name:\"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:58:01.747975 containerd[1447]: time="2024-09-04T17:58:01.747954625Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:58:01.748613 containerd[1447]: time="2024-09-04T17:58:01.748569319Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.1\" with image id \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\", size \"30963728\" in 3.048340613s" Sep 4 17:58:01.748663 containerd[1447]: time="2024-09-04T17:58:01.748613338Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\" returns image reference \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\"" Sep 4 17:58:01.751065 containerd[1447]: 
time="2024-09-04T17:58:01.751023738Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\"" Sep 4 17:58:01.783622 containerd[1447]: time="2024-09-04T17:58:01.783506234Z" level=info msg="CreateContainer within sandbox \"ab66845f87cd6223957e0e8653b2dc6381c545a61a21ed7e44ad0592a2d96f1f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 4 17:58:01.787536 kubelet[2661]: E0904 17:58:01.787488 2661 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-287hk" podUID="f0db1dfa-f33e-43bf-98b0-16a182e9f9f9" Sep 4 17:58:01.815193 containerd[1447]: time="2024-09-04T17:58:01.815135962Z" level=info msg="CreateContainer within sandbox \"ab66845f87cd6223957e0e8653b2dc6381c545a61a21ed7e44ad0592a2d96f1f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"28bb5e54e36658cc7cdcd3635ce88c7bdbbdbda2b7e14831876534b47eacc01b\"" Sep 4 17:58:01.826503 containerd[1447]: time="2024-09-04T17:58:01.826160703Z" level=info msg="StartContainer for \"28bb5e54e36658cc7cdcd3635ce88c7bdbbdbda2b7e14831876534b47eacc01b\"" Sep 4 17:58:01.876772 systemd[1]: Started cri-containerd-28bb5e54e36658cc7cdcd3635ce88c7bdbbdbda2b7e14831876534b47eacc01b.scope - libcontainer container 28bb5e54e36658cc7cdcd3635ce88c7bdbbdbda2b7e14831876534b47eacc01b. Sep 4 17:58:01.952143 containerd[1447]: time="2024-09-04T17:58:01.952046075Z" level=info msg="StartContainer for \"28bb5e54e36658cc7cdcd3635ce88c7bdbbdbda2b7e14831876534b47eacc01b\" returns successfully" Sep 4 17:58:03.065072 kubelet[2661]: E0904 17:58:03.064907 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:58:03.065072 kubelet[2661]: W0904 17:58:03.064935 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:58:03.065072 kubelet[2661]: E0904 17:58:03.064959 2661 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:58:03.066066 kubelet[2661]: E0904 17:58:03.065952 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:58:03.066066 kubelet[2661]: W0904 17:58:03.065963 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:58:03.066066 kubelet[2661]: E0904 17:58:03.065975 2661 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:58:03.083155 kubelet[2661]: E0904 17:58:03.083125 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:58:03.083155 kubelet[2661]: W0904 17:58:03.083149 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:58:03.083231 kubelet[2661]: E0904 17:58:03.083165 2661 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:58:03.787378 kubelet[2661]: E0904 17:58:03.787193 2661 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-287hk" podUID="f0db1dfa-f33e-43bf-98b0-16a182e9f9f9" Sep 4 17:58:03.841389 containerd[1447]: time="2024-09-04T17:58:03.841212176Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:58:03.842942 containerd[1447]: time="2024-09-04T17:58:03.842774845Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1: active requests=0, bytes read=5141007" Sep 4 17:58:03.844123 containerd[1447]: time="2024-09-04T17:58:03.844078000Z" level=info msg="ImageCreate event name:\"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:58:03.847377 containerd[1447]: time="2024-09-04T17:58:03.847321285Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:58:03.848687 containerd[1447]: time="2024-09-04T17:58:03.848017106Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" with image id \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\", size \"6633368\" in 2.096958537s" Sep 4 17:58:03.848687 containerd[1447]: time="2024-09-04T17:58:03.848060143Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" returns image reference \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\"" Sep 4 17:58:03.851222 containerd[1447]: time="2024-09-04T17:58:03.851176681Z" level=info msg="CreateContainer within sandbox \"e99f4c1271225ce2509c1c5b1f28ba4a2d302fd48ea851171fd8d4cfe46c7b31\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 4 17:58:03.872643 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount999306748.mount: Deactivated successfully. 
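The burst of driver-call.go errors above has a simple mechanical cause: the kubelet probes /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds before the flexvol-driver container created above has installed that binary, so the call produces no output and the JSON unmarshal fails with "unexpected end of JSON input". For orientation, a minimal sketch of the handshake a FlexVolume driver is expected to complete on `init` (illustrative, following the FlexVolume response convention; not the actual uds binary):

```go
// Minimal sketch of a FlexVolume driver's "init" handshake, the shape the
// kubelet's driver-call.go tries to unmarshal. Until the flexvol-driver
// container installs the real `uds` binary, the call returns empty output
// and unmarshalling fails with "unexpected end of JSON input".
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// DriverStatus mirrors the FlexVolume response convention (illustrative).
type DriverStatus struct {
	Status       string          `json:"status"` // "Success", "Failure", or "Not supported"
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		// Report success and declare that this driver needs no attach/detach.
		out, _ := json.Marshal(DriverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
		return
	}
	fmt.Println(`{"status":"Not supported"}`)
	os.Exit(1)
}
```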
Sep 4 17:58:03.881162 containerd[1447]: time="2024-09-04T17:58:03.881027531Z" level=info msg="CreateContainer within sandbox \"e99f4c1271225ce2509c1c5b1f28ba4a2d302fd48ea851171fd8d4cfe46c7b31\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"310d3d712a69d6980c192251acce8c19b9dec33cc80ad526078d70cec931902f\"" Sep 4 17:58:03.882347 containerd[1447]: time="2024-09-04T17:58:03.881871021Z" level=info msg="StartContainer for \"310d3d712a69d6980c192251acce8c19b9dec33cc80ad526078d70cec931902f\"" Sep 4 17:58:03.924444 systemd[1]: Started cri-containerd-310d3d712a69d6980c192251acce8c19b9dec33cc80ad526078d70cec931902f.scope - libcontainer container 310d3d712a69d6980c192251acce8c19b9dec33cc80ad526078d70cec931902f. Sep 4 17:58:03.968476 containerd[1447]: time="2024-09-04T17:58:03.968431160Z" level=info msg="StartContainer for \"310d3d712a69d6980c192251acce8c19b9dec33cc80ad526078d70cec931902f\" returns successfully" Sep 4 17:58:03.979880 kubelet[2661]: I0904 17:58:03.979803 2661 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 17:58:03.994941 systemd[1]: cri-containerd-310d3d712a69d6980c192251acce8c19b9dec33cc80ad526078d70cec931902f.scope: Deactivated successfully. Sep 4 17:58:04.009011 kubelet[2661]: I0904 17:58:04.008939 2661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7d87d885b6-v6c6z" podStartSLOduration=2.957460816 podStartE2EDuration="6.008922423s" podCreationTimestamp="2024-09-04 17:57:58 +0000 UTC" firstStartedPulling="2024-09-04 17:57:58.69880087 +0000 UTC m=+22.080711137" lastFinishedPulling="2024-09-04 17:58:01.750262477 +0000 UTC m=+25.132172744" observedRunningTime="2024-09-04 17:58:03.016328246 +0000 UTC m=+26.398238613" watchObservedRunningTime="2024-09-04 17:58:04.008922423 +0000 UTC m=+27.390832700" Sep 4 17:58:04.030778 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-310d3d712a69d6980c192251acce8c19b9dec33cc80ad526078d70cec931902f-rootfs.mount: Deactivated successfully. 
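The pod_startup_latency_tracker entry above is self-consistent and shows how the two durations relate: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally excludes the image-pull window. A quick recomputation from the logged timestamps (field semantics inferred from the field names):

```go
// Recomputing the calico-typha startup latencies from the log entry above:
// podStartE2EDuration = observedRunningTime - podCreationTimestamp, and
// podStartSLOduration excludes the image-pull window
// (lastFinishedPulling - firstStartedPulling).
package main

import (
	"fmt"
	"time"
)

func mustParse(v string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", v)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2024-09-04 17:57:58 +0000 UTC")
	firstPull := mustParse("2024-09-04 17:57:58.69880087 +0000 UTC")
	lastPull := mustParse("2024-09-04 17:58:01.750262477 +0000 UTC")
	running := mustParse("2024-09-04 17:58:04.008922423 +0000 UTC")

	e2e := running.Sub(created)          // 6.008922423s, as logged
	slo := e2e - lastPull.Sub(firstPull) // 6.008922423s - 3.051461607s = 2.957460816s
	fmt.Println(e2e, slo)
}
```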
Sep 4 17:58:04.387160 containerd[1447]: time="2024-09-04T17:58:04.386848422Z" level=info msg="shim disconnected" id=310d3d712a69d6980c192251acce8c19b9dec33cc80ad526078d70cec931902f namespace=k8s.io Sep 4 17:58:04.387160 containerd[1447]: time="2024-09-04T17:58:04.387114978Z" level=warning msg="cleaning up after shim disconnected" id=310d3d712a69d6980c192251acce8c19b9dec33cc80ad526078d70cec931902f namespace=k8s.io Sep 4 17:58:04.387160 containerd[1447]: time="2024-09-04T17:58:04.387145960Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:58:04.992466 containerd[1447]: time="2024-09-04T17:58:04.992346264Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\"" Sep 4 17:58:05.624148 kubelet[2661]: I0904 17:58:05.622819 2661 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 17:58:05.787618 kubelet[2661]: E0904 17:58:05.787551 2661 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-287hk" podUID="f0db1dfa-f33e-43bf-98b0-16a182e9f9f9" Sep 4 17:58:07.787708 kubelet[2661]: E0904 17:58:07.787655 2661 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-287hk" podUID="f0db1dfa-f33e-43bf-98b0-16a182e9f9f9" Sep 4 17:58:09.788007 kubelet[2661]: E0904 17:58:09.787949 2661 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-287hk" podUID="f0db1dfa-f33e-43bf-98b0-16a182e9f9f9" Sep 4 17:58:11.290477 containerd[1447]: time="2024-09-04T17:58:11.290208626Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:58:11.292125 containerd[1447]: time="2024-09-04T17:58:11.292056545Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.1: active requests=0, bytes read=93083736" Sep 4 17:58:11.296222 containerd[1447]: time="2024-09-04T17:58:11.296171144Z" level=info msg="ImageCreate event name:\"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:58:11.339101 containerd[1447]: time="2024-09-04T17:58:11.338712450Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:58:11.341409 containerd[1447]: time="2024-09-04T17:58:11.341359727Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.1\" with image id \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\", size \"94576137\" in 6.348921127s" Sep 4 17:58:11.341409 containerd[1447]: time="2024-09-04T17:58:11.341401119Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\" returns image reference 
\"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\"" Sep 4 17:58:11.361520 containerd[1447]: time="2024-09-04T17:58:11.361426509Z" level=info msg="CreateContainer within sandbox \"e99f4c1271225ce2509c1c5b1f28ba4a2d302fd48ea851171fd8d4cfe46c7b31\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 4 17:58:11.385647 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4176550792.mount: Deactivated successfully. Sep 4 17:58:11.394308 containerd[1447]: time="2024-09-04T17:58:11.394232075Z" level=info msg="CreateContainer within sandbox \"e99f4c1271225ce2509c1c5b1f28ba4a2d302fd48ea851171fd8d4cfe46c7b31\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c589c584d1190580020e769e9ec4e0f6a87b9bd0d8c346975f7ec19802600445\"" Sep 4 17:58:11.395630 containerd[1447]: time="2024-09-04T17:58:11.395306167Z" level=info msg="StartContainer for \"c589c584d1190580020e769e9ec4e0f6a87b9bd0d8c346975f7ec19802600445\"" Sep 4 17:58:11.506636 systemd[1]: Started cri-containerd-c589c584d1190580020e769e9ec4e0f6a87b9bd0d8c346975f7ec19802600445.scope - libcontainer container c589c584d1190580020e769e9ec4e0f6a87b9bd0d8c346975f7ec19802600445. Sep 4 17:58:11.561151 containerd[1447]: time="2024-09-04T17:58:11.561013832Z" level=info msg="StartContainer for \"c589c584d1190580020e769e9ec4e0f6a87b9bd0d8c346975f7ec19802600445\" returns successfully" Sep 4 17:58:11.787162 kubelet[2661]: E0904 17:58:11.787018 2661 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-287hk" podUID="f0db1dfa-f33e-43bf-98b0-16a182e9f9f9" Sep 4 17:58:13.054296 systemd[1]: cri-containerd-c589c584d1190580020e769e9ec4e0f6a87b9bd0d8c346975f7ec19802600445.scope: Deactivated successfully. Sep 4 17:58:13.112523 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c589c584d1190580020e769e9ec4e0f6a87b9bd0d8c346975f7ec19802600445-rootfs.mount: Deactivated successfully. 
Sep 4 17:58:13.269377 kubelet[2661]: I0904 17:58:13.269310 2661 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Sep 4 17:58:13.429988 containerd[1447]: time="2024-09-04T17:58:13.429574368Z" level=info msg="shim disconnected" id=c589c584d1190580020e769e9ec4e0f6a87b9bd0d8c346975f7ec19802600445 namespace=k8s.io Sep 4 17:58:13.429988 containerd[1447]: time="2024-09-04T17:58:13.429891886Z" level=warning msg="cleaning up after shim disconnected" id=c589c584d1190580020e769e9ec4e0f6a87b9bd0d8c346975f7ec19802600445 namespace=k8s.io Sep 4 17:58:13.432352 containerd[1447]: time="2024-09-04T17:58:13.430131230Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:58:13.457643 kubelet[2661]: I0904 17:58:13.457559 2661 topology_manager.go:215] "Topology Admit Handler" podUID="6007fafb-62cf-4f68-b42a-9d6500fa9f55" podNamespace="kube-system" podName="coredns-7db6d8ff4d-bkmjk" Sep 4 17:58:13.472688 kubelet[2661]: I0904 17:58:13.470307 2661 topology_manager.go:215] "Topology Admit Handler" podUID="d5415bb8-100d-4692-b66d-5875d35f5aef" podNamespace="calico-system" podName="calico-kube-controllers-798cd9c48-zp2dz" Sep 4 17:58:13.494522 kubelet[2661]: I0904 17:58:13.493824 2661 topology_manager.go:215] "Topology Admit Handler" podUID="57139a4b-d4db-400d-8527-bd5a87379b62" podNamespace="kube-system" podName="coredns-7db6d8ff4d-zzgpk" Sep 4 17:58:13.497482 systemd[1]: Created slice kubepods-burstable-pod6007fafb_62cf_4f68_b42a_9d6500fa9f55.slice - libcontainer container kubepods-burstable-pod6007fafb_62cf_4f68_b42a_9d6500fa9f55.slice. Sep 4 17:58:13.507967 systemd[1]: Created slice kubepods-besteffort-podd5415bb8_100d_4692_b66d_5875d35f5aef.slice - libcontainer container kubepods-besteffort-podd5415bb8_100d_4692_b66d_5875d35f5aef.slice. Sep 4 17:58:13.521899 systemd[1]: Created slice kubepods-burstable-pod57139a4b_d4db_400d_8527_bd5a87379b62.slice - libcontainer container kubepods-burstable-pod57139a4b_d4db_400d_8527_bd5a87379b62.slice. 
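The three slices created above follow the kubelet's cgroup naming convention: the pod QoS class selects the parent slice (burstable or besteffort here) and the pod UID is embedded with hyphens escaped to underscores, since "-" is a hierarchy separator in systemd unit names. A small sketch reproducing the logged names:

```go
// Sketch: reproducing the systemd slice names the kubelet created above.
// The parent comes from the pod QoS class; the pod UID's hyphens become
// underscores because "-" separates hierarchy levels in systemd unit names.
package main

import (
	"fmt"
	"strings"
)

func podSlice(qosClass, podUID string) string {
	uid := strings.ReplaceAll(podUID, "-", "_")
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, uid)
}

func main() {
	fmt.Println(podSlice("burstable", "6007fafb-62cf-4f68-b42a-9d6500fa9f55"))
	// kubepods-burstable-pod6007fafb_62cf_4f68_b42a_9d6500fa9f55.slice
	fmt.Println(podSlice("besteffort", "d5415bb8-100d-4692-b66d-5875d35f5aef"))
	// kubepods-besteffort-podd5415bb8_100d_4692_b66d_5875d35f5aef.slice
}
```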
Sep 4 17:58:13.557573 kubelet[2661]: I0904 17:58:13.557521 2661 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6007fafb-62cf-4f68-b42a-9d6500fa9f55-config-volume\") pod \"coredns-7db6d8ff4d-bkmjk\" (UID: \"6007fafb-62cf-4f68-b42a-9d6500fa9f55\") " pod="kube-system/coredns-7db6d8ff4d-bkmjk" Sep 4 17:58:13.557573 kubelet[2661]: I0904 17:58:13.557588 2661 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt9sl\" (UniqueName: \"kubernetes.io/projected/6007fafb-62cf-4f68-b42a-9d6500fa9f55-kube-api-access-kt9sl\") pod \"coredns-7db6d8ff4d-bkmjk\" (UID: \"6007fafb-62cf-4f68-b42a-9d6500fa9f55\") " pod="kube-system/coredns-7db6d8ff4d-bkmjk" Sep 4 17:58:13.660332 kubelet[2661]: I0904 17:58:13.658474 2661 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/57139a4b-d4db-400d-8527-bd5a87379b62-config-volume\") pod \"coredns-7db6d8ff4d-zzgpk\" (UID: \"57139a4b-d4db-400d-8527-bd5a87379b62\") " pod="kube-system/coredns-7db6d8ff4d-zzgpk" Sep 4 17:58:13.660332 kubelet[2661]: I0904 17:58:13.658575 2661 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rxlm\" (UniqueName: \"kubernetes.io/projected/57139a4b-d4db-400d-8527-bd5a87379b62-kube-api-access-8rxlm\") pod \"coredns-7db6d8ff4d-zzgpk\" (UID: \"57139a4b-d4db-400d-8527-bd5a87379b62\") " pod="kube-system/coredns-7db6d8ff4d-zzgpk" Sep 4 17:58:13.660332 kubelet[2661]: I0904 17:58:13.658632 2661 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d5415bb8-100d-4692-b66d-5875d35f5aef-tigera-ca-bundle\") pod \"calico-kube-controllers-798cd9c48-zp2dz\" (UID: \"d5415bb8-100d-4692-b66d-5875d35f5aef\") " pod="calico-system/calico-kube-controllers-798cd9c48-zp2dz" Sep 4 17:58:13.660332 kubelet[2661]: I0904 17:58:13.658759 2661 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kj2hn\" (UniqueName: \"kubernetes.io/projected/d5415bb8-100d-4692-b66d-5875d35f5aef-kube-api-access-kj2hn\") pod \"calico-kube-controllers-798cd9c48-zp2dz\" (UID: \"d5415bb8-100d-4692-b66d-5875d35f5aef\") " pod="calico-system/calico-kube-controllers-798cd9c48-zp2dz" Sep 4 17:58:13.807768 containerd[1447]: time="2024-09-04T17:58:13.807631597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bkmjk,Uid:6007fafb-62cf-4f68-b42a-9d6500fa9f55,Namespace:kube-system,Attempt:0,}" Sep 4 17:58:13.816553 containerd[1447]: time="2024-09-04T17:58:13.816301767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-798cd9c48-zp2dz,Uid:d5415bb8-100d-4692-b66d-5875d35f5aef,Namespace:calico-system,Attempt:0,}" Sep 4 17:58:13.828265 containerd[1447]: time="2024-09-04T17:58:13.827950805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zzgpk,Uid:57139a4b-d4db-400d-8527-bd5a87379b62,Namespace:kube-system,Attempt:0,}" Sep 4 17:58:13.831114 systemd[1]: Created slice kubepods-besteffort-podf0db1dfa_f33e_43bf_98b0_16a182e9f9f9.slice - libcontainer container kubepods-besteffort-podf0db1dfa_f33e_43bf_98b0_16a182e9f9f9.slice. 
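The reconciler entries above track two volume kinds per pod: a ConfigMap volume (config-volume) and the auto-injected projected service-account volume (kube-api-access-…). A sketch of those two volume sources using the Kubernetes core/v1 types; the ConfigMap name "coredns" is an assumption, and the projected layout shown is the usual token / kube-root-ca.crt / namespace triple:

```go
// Sketch of the two volume sources behind the reconciler entries above.
// The ConfigMap name "coredns" is assumed; the projected volume follows the
// standard auto-injected kube-api-access layout.
package main

import (
	corev1 "k8s.io/api/core/v1"
)

func volumes() []corev1.Volume {
	exp := int64(3607) // token lifetime; illustrative value
	return []corev1.Volume{
		{
			Name: "config-volume",
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: "coredns"},
				},
			},
		},
		{
			Name: "kube-api-access-kt9sl",
			VolumeSource: corev1.VolumeSource{
				Projected: &corev1.ProjectedVolumeSource{
					Sources: []corev1.VolumeProjection{
						{ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
							Path: "token", ExpirationSeconds: &exp,
						}},
						{ConfigMap: &corev1.ConfigMapProjection{
							LocalObjectReference: corev1.LocalObjectReference{Name: "kube-root-ca.crt"},
						}},
						{DownwardAPI: &corev1.DownwardAPIProjection{
							Items: []corev1.DownwardAPIVolumeFile{{
								Path:     "namespace",
								FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.namespace"},
							}},
						}},
					},
				},
			},
		},
	}
}

func main() { _ = volumes() }
```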
Sep 4 17:58:13.843325 containerd[1447]: time="2024-09-04T17:58:13.843261604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-287hk,Uid:f0db1dfa-f33e-43bf-98b0-16a182e9f9f9,Namespace:calico-system,Attempt:0,}" Sep 4 17:58:14.046477 containerd[1447]: time="2024-09-04T17:58:14.046042783Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\"" Sep 4 17:58:14.445179 containerd[1447]: time="2024-09-04T17:58:14.444997219Z" level=error msg="Failed to destroy network for sandbox \"cce266914840a1d211b368eb3853f7adc3e200e15098b220fc1d5da1284aec31\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:58:14.448263 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cce266914840a1d211b368eb3853f7adc3e200e15098b220fc1d5da1284aec31-shm.mount: Deactivated successfully. Sep 4 17:58:14.460279 containerd[1447]: time="2024-09-04T17:58:14.456990291Z" level=error msg="Failed to destroy network for sandbox \"5907cbb1d3f8d792b51ba195422e07350eeac01d469c6556ac788d6b27c91ae4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:58:14.460832 containerd[1447]: time="2024-09-04T17:58:14.460437478Z" level=error msg="encountered an error cleaning up failed sandbox \"cce266914840a1d211b368eb3853f7adc3e200e15098b220fc1d5da1284aec31\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:58:14.460832 containerd[1447]: time="2024-09-04T17:58:14.460521625Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-798cd9c48-zp2dz,Uid:d5415bb8-100d-4692-b66d-5875d35f5aef,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cce266914840a1d211b368eb3853f7adc3e200e15098b220fc1d5da1284aec31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:58:14.460832 containerd[1447]: time="2024-09-04T17:58:14.460644088Z" level=error msg="encountered an error cleaning up failed sandbox \"5907cbb1d3f8d792b51ba195422e07350eeac01d469c6556ac788d6b27c91ae4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:58:14.460832 containerd[1447]: time="2024-09-04T17:58:14.460710578Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bkmjk,Uid:6007fafb-62cf-4f68-b42a-9d6500fa9f55,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5907cbb1d3f8d792b51ba195422e07350eeac01d469c6556ac788d6b27c91ae4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:58:14.464652 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5907cbb1d3f8d792b51ba195422e07350eeac01d469c6556ac788d6b27c91ae4-shm.mount: Deactivated successfully. 
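Every sandbox failure that follows has the same root cause: the Calico CNI plugin stats /var/lib/calico/nodename, a file written only by a running calico/node container, and that node image is still being pulled above. A sketch of that gate (path and message wording taken from the logged error; the function itself is illustrative):

```go
// Sketch of the gate behind every sandbox failure below: the Calico CNI
// plugin reads /var/lib/calico/nodename, which only a running calico/node
// container writes. Until then, every CNI add/delete fails with the stat
// error seen in the log.
package main

import (
	"fmt"
	"os"
	"strings"
)

func nodename() (string, error) {
	data, err := os.ReadFile("/var/lib/calico/nodename")
	if err != nil {
		return "", fmt.Errorf("stat /var/lib/calico/nodename: %w: check that the "+
			"calico/node container is running and has mounted /var/lib/calico/", err)
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := nodename()
	if err != nil {
		fmt.Println("CNI not ready:", err)
		return
	}
	fmt.Println("node:", name)
}
```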
Sep 4 17:58:14.469856 containerd[1447]: time="2024-09-04T17:58:14.468970526Z" level=error msg="Failed to destroy network for sandbox \"8956f21a2a02624f57561bba949223b27ce2149177fc4dcf847ea01dd454ea14\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:58:14.470125 kubelet[2661]: E0904 17:58:14.470085 2661 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5907cbb1d3f8d792b51ba195422e07350eeac01d469c6556ac788d6b27c91ae4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:58:14.470502 kubelet[2661]: E0904 17:58:14.470480 2661 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5907cbb1d3f8d792b51ba195422e07350eeac01d469c6556ac788d6b27c91ae4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-bkmjk" Sep 4 17:58:14.470603 kubelet[2661]: E0904 17:58:14.470582 2661 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5907cbb1d3f8d792b51ba195422e07350eeac01d469c6556ac788d6b27c91ae4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-bkmjk" Sep 4 17:58:14.470754 kubelet[2661]: E0904 17:58:14.470108 2661 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cce266914840a1d211b368eb3853f7adc3e200e15098b220fc1d5da1284aec31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:58:14.470891 kubelet[2661]: E0904 17:58:14.470872 2661 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cce266914840a1d211b368eb3853f7adc3e200e15098b220fc1d5da1284aec31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-798cd9c48-zp2dz" Sep 4 17:58:14.471143 kubelet[2661]: E0904 17:58:14.471123 2661 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cce266914840a1d211b368eb3853f7adc3e200e15098b220fc1d5da1284aec31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-798cd9c48-zp2dz" Sep 4 17:58:14.471332 kubelet[2661]: E0904 17:58:14.471295 2661 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-798cd9c48-zp2dz_calico-system(d5415bb8-100d-4692-b66d-5875d35f5aef)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-798cd9c48-zp2dz_calico-system(d5415bb8-100d-4692-b66d-5875d35f5aef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cce266914840a1d211b368eb3853f7adc3e200e15098b220fc1d5da1284aec31\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-798cd9c48-zp2dz" podUID="d5415bb8-100d-4692-b66d-5875d35f5aef" Sep 4 17:58:14.471506 containerd[1447]: time="2024-09-04T17:58:14.471389721Z" level=error msg="encountered an error cleaning up failed sandbox \"8956f21a2a02624f57561bba949223b27ce2149177fc4dcf847ea01dd454ea14\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:58:14.472708 kubelet[2661]: E0904 17:58:14.470730 2661 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-bkmjk_kube-system(6007fafb-62cf-4f68-b42a-9d6500fa9f55)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-bkmjk_kube-system(6007fafb-62cf-4f68-b42a-9d6500fa9f55)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5907cbb1d3f8d792b51ba195422e07350eeac01d469c6556ac788d6b27c91ae4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-bkmjk" podUID="6007fafb-62cf-4f68-b42a-9d6500fa9f55" Sep 4 17:58:14.473219 containerd[1447]: time="2024-09-04T17:58:14.473162568Z" level=error msg="Failed to destroy network for sandbox \"ec6a9f72910d939ef57927dffa061b45449a7cc44cdd2f9a195d7acd6a58fe26\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:58:14.473772 containerd[1447]: time="2024-09-04T17:58:14.473748076Z" level=error msg="encountered an error cleaning up failed sandbox \"ec6a9f72910d939ef57927dffa061b45449a7cc44cdd2f9a195d7acd6a58fe26\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:58:14.473924 containerd[1447]: time="2024-09-04T17:58:14.473899536Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zzgpk,Uid:57139a4b-d4db-400d-8527-bd5a87379b62,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8956f21a2a02624f57561bba949223b27ce2149177fc4dcf847ea01dd454ea14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:58:14.474105 containerd[1447]: time="2024-09-04T17:58:14.473988732Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-287hk,Uid:f0db1dfa-f33e-43bf-98b0-16a182e9f9f9,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"ec6a9f72910d939ef57927dffa061b45449a7cc44cdd2f9a195d7acd6a58fe26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:58:14.474419 kubelet[2661]: E0904 17:58:14.474391 2661 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec6a9f72910d939ef57927dffa061b45449a7cc44cdd2f9a195d7acd6a58fe26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:58:14.474736 kubelet[2661]: E0904 17:58:14.474717 2661 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec6a9f72910d939ef57927dffa061b45449a7cc44cdd2f9a195d7acd6a58fe26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-287hk" Sep 4 17:58:14.474990 kubelet[2661]: E0904 17:58:14.474970 2661 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec6a9f72910d939ef57927dffa061b45449a7cc44cdd2f9a195d7acd6a58fe26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-287hk" Sep 4 17:58:14.475123 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8956f21a2a02624f57561bba949223b27ce2149177fc4dcf847ea01dd454ea14-shm.mount: Deactivated successfully. 
Sep 4 17:58:14.475416 kubelet[2661]: E0904 17:58:14.475089 2661 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-287hk_calico-system(f0db1dfa-f33e-43bf-98b0-16a182e9f9f9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-287hk_calico-system(f0db1dfa-f33e-43bf-98b0-16a182e9f9f9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ec6a9f72910d939ef57927dffa061b45449a7cc44cdd2f9a195d7acd6a58fe26\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-287hk" podUID="f0db1dfa-f33e-43bf-98b0-16a182e9f9f9" Sep 4 17:58:14.475416 kubelet[2661]: E0904 17:58:14.474643 2661 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8956f21a2a02624f57561bba949223b27ce2149177fc4dcf847ea01dd454ea14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:58:14.475949 kubelet[2661]: E0904 17:58:14.475618 2661 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8956f21a2a02624f57561bba949223b27ce2149177fc4dcf847ea01dd454ea14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-zzgpk" Sep 4 17:58:14.476150 kubelet[2661]: E0904 17:58:14.476030 2661 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8956f21a2a02624f57561bba949223b27ce2149177fc4dcf847ea01dd454ea14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-zzgpk" Sep 4 17:58:14.476356 kubelet[2661]: E0904 17:58:14.476115 2661 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-zzgpk_kube-system(57139a4b-d4db-400d-8527-bd5a87379b62)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-zzgpk_kube-system(57139a4b-d4db-400d-8527-bd5a87379b62)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8956f21a2a02624f57561bba949223b27ce2149177fc4dcf847ea01dd454ea14\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-zzgpk" podUID="57139a4b-d4db-400d-8527-bd5a87379b62" Sep 4 17:58:14.482570 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ec6a9f72910d939ef57927dffa061b45449a7cc44cdd2f9a195d7acd6a58fe26-shm.mount: Deactivated successfully. 
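After a CreatePodSandbox failure the pod worker only records the error; the retry comes from the kubelet's periodic sync, which is the roughly two-second cadence visible in the recurring csi-node-driver-287hk messages throughout this section. An illustrative retry loop under that assumption (not kubelet source):

```go
// Illustrative retry loop, not kubelet code: after "Error syncing pod,
// skipping", the pod is simply retried on the next periodic sync (the ~2s
// cadence visible in the repeated csi-node-driver-287hk messages).
package main

import (
	"errors"
	"fmt"
	"time"
)

var errNetNotReady = errors.New("network is not ready: cni plugin not initialized")

func syncPod(networkReady bool) error {
	if !networkReady {
		return errNetNotReady
	}
	return nil
}

func main() {
	becomesReady := time.Now().Add(10 * time.Second) // stand-in for CNI coming up
	for tick := time.NewTicker(2 * time.Second); ; {
		if err := syncPod(time.Now().After(becomesReady)); err != nil {
			fmt.Println("Error syncing pod, skipping:", err)
			<-tick.C
			continue
		}
		fmt.Println("pod synced")
		return
	}
}
```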
Sep 4 17:58:15.052347 kubelet[2661]: I0904 17:58:15.048478 2661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec6a9f72910d939ef57927dffa061b45449a7cc44cdd2f9a195d7acd6a58fe26" Sep 4 17:58:15.055738 containerd[1447]: time="2024-09-04T17:58:15.055596840Z" level=info msg="StopPodSandbox for \"ec6a9f72910d939ef57927dffa061b45449a7cc44cdd2f9a195d7acd6a58fe26\"" Sep 4 17:58:15.064098 containerd[1447]: time="2024-09-04T17:58:15.063962388Z" level=info msg="Ensure that sandbox ec6a9f72910d939ef57927dffa061b45449a7cc44cdd2f9a195d7acd6a58fe26 in task-service has been cleanup successfully" Sep 4 17:58:15.069584 kubelet[2661]: I0904 17:58:15.069131 2661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8956f21a2a02624f57561bba949223b27ce2149177fc4dcf847ea01dd454ea14" Sep 4 17:58:15.072290 containerd[1447]: time="2024-09-04T17:58:15.071129718Z" level=info msg="StopPodSandbox for \"8956f21a2a02624f57561bba949223b27ce2149177fc4dcf847ea01dd454ea14\"" Sep 4 17:58:15.075034 containerd[1447]: time="2024-09-04T17:58:15.073599598Z" level=info msg="Ensure that sandbox 8956f21a2a02624f57561bba949223b27ce2149177fc4dcf847ea01dd454ea14 in task-service has been cleanup successfully" Sep 4 17:58:15.077882 kubelet[2661]: I0904 17:58:15.076839 2661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cce266914840a1d211b368eb3853f7adc3e200e15098b220fc1d5da1284aec31" Sep 4 17:58:15.080667 containerd[1447]: time="2024-09-04T17:58:15.080592894Z" level=info msg="StopPodSandbox for \"cce266914840a1d211b368eb3853f7adc3e200e15098b220fc1d5da1284aec31\"" Sep 4 17:58:15.083322 containerd[1447]: time="2024-09-04T17:58:15.080937255Z" level=info msg="Ensure that sandbox cce266914840a1d211b368eb3853f7adc3e200e15098b220fc1d5da1284aec31 in task-service has been cleanup successfully" Sep 4 17:58:15.083675 kubelet[2661]: I0904 17:58:15.083585 2661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5907cbb1d3f8d792b51ba195422e07350eeac01d469c6556ac788d6b27c91ae4" Sep 4 17:58:15.085097 containerd[1447]: time="2024-09-04T17:58:15.084975002Z" level=info msg="StopPodSandbox for \"5907cbb1d3f8d792b51ba195422e07350eeac01d469c6556ac788d6b27c91ae4\"" Sep 4 17:58:15.090309 containerd[1447]: time="2024-09-04T17:58:15.089046276Z" level=info msg="Ensure that sandbox 5907cbb1d3f8d792b51ba195422e07350eeac01d469c6556ac788d6b27c91ae4 in task-service has been cleanup successfully" Sep 4 17:58:15.174545 containerd[1447]: time="2024-09-04T17:58:15.174467522Z" level=error msg="StopPodSandbox for \"ec6a9f72910d939ef57927dffa061b45449a7cc44cdd2f9a195d7acd6a58fe26\" failed" error="failed to destroy network for sandbox \"ec6a9f72910d939ef57927dffa061b45449a7cc44cdd2f9a195d7acd6a58fe26\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:58:15.174974 kubelet[2661]: E0904 17:58:15.174919 2661 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ec6a9f72910d939ef57927dffa061b45449a7cc44cdd2f9a195d7acd6a58fe26\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ec6a9f72910d939ef57927dffa061b45449a7cc44cdd2f9a195d7acd6a58fe26" Sep 4 17:58:15.175387 kubelet[2661]: 
E0904 17:58:15.175096 2661 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ec6a9f72910d939ef57927dffa061b45449a7cc44cdd2f9a195d7acd6a58fe26"} Sep 4 17:58:15.175387 kubelet[2661]: E0904 17:58:15.175308 2661 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f0db1dfa-f33e-43bf-98b0-16a182e9f9f9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ec6a9f72910d939ef57927dffa061b45449a7cc44cdd2f9a195d7acd6a58fe26\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 17:58:15.175387 kubelet[2661]: E0904 17:58:15.175345 2661 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f0db1dfa-f33e-43bf-98b0-16a182e9f9f9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ec6a9f72910d939ef57927dffa061b45449a7cc44cdd2f9a195d7acd6a58fe26\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-287hk" podUID="f0db1dfa-f33e-43bf-98b0-16a182e9f9f9" Sep 4 17:58:15.192462 containerd[1447]: time="2024-09-04T17:58:15.192390071Z" level=error msg="StopPodSandbox for \"cce266914840a1d211b368eb3853f7adc3e200e15098b220fc1d5da1284aec31\" failed" error="failed to destroy network for sandbox \"cce266914840a1d211b368eb3853f7adc3e200e15098b220fc1d5da1284aec31\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:58:15.192958 kubelet[2661]: E0904 17:58:15.192833 2661 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cce266914840a1d211b368eb3853f7adc3e200e15098b220fc1d5da1284aec31\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cce266914840a1d211b368eb3853f7adc3e200e15098b220fc1d5da1284aec31" Sep 4 17:58:15.192958 kubelet[2661]: E0904 17:58:15.192894 2661 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cce266914840a1d211b368eb3853f7adc3e200e15098b220fc1d5da1284aec31"} Sep 4 17:58:15.192958 kubelet[2661]: E0904 17:58:15.192938 2661 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d5415bb8-100d-4692-b66d-5875d35f5aef\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cce266914840a1d211b368eb3853f7adc3e200e15098b220fc1d5da1284aec31\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 17:58:15.193212 kubelet[2661]: E0904 17:58:15.192970 2661 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d5415bb8-100d-4692-b66d-5875d35f5aef\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"cce266914840a1d211b368eb3853f7adc3e200e15098b220fc1d5da1284aec31\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-798cd9c48-zp2dz" podUID="d5415bb8-100d-4692-b66d-5875d35f5aef" Sep 4 17:58:15.198124 containerd[1447]: time="2024-09-04T17:58:15.197946178Z" level=error msg="StopPodSandbox for \"8956f21a2a02624f57561bba949223b27ce2149177fc4dcf847ea01dd454ea14\" failed" error="failed to destroy network for sandbox \"8956f21a2a02624f57561bba949223b27ce2149177fc4dcf847ea01dd454ea14\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:58:15.198413 kubelet[2661]: E0904 17:58:15.198326 2661 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8956f21a2a02624f57561bba949223b27ce2149177fc4dcf847ea01dd454ea14\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8956f21a2a02624f57561bba949223b27ce2149177fc4dcf847ea01dd454ea14" Sep 4 17:58:15.198507 kubelet[2661]: E0904 17:58:15.198413 2661 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8956f21a2a02624f57561bba949223b27ce2149177fc4dcf847ea01dd454ea14"} Sep 4 17:58:15.198507 kubelet[2661]: E0904 17:58:15.198482 2661 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"57139a4b-d4db-400d-8527-bd5a87379b62\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8956f21a2a02624f57561bba949223b27ce2149177fc4dcf847ea01dd454ea14\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 17:58:15.198627 kubelet[2661]: E0904 17:58:15.198516 2661 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"57139a4b-d4db-400d-8527-bd5a87379b62\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8956f21a2a02624f57561bba949223b27ce2149177fc4dcf847ea01dd454ea14\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-zzgpk" podUID="57139a4b-d4db-400d-8527-bd5a87379b62" Sep 4 17:58:15.200254 containerd[1447]: time="2024-09-04T17:58:15.200143579Z" level=error msg="StopPodSandbox for \"5907cbb1d3f8d792b51ba195422e07350eeac01d469c6556ac788d6b27c91ae4\" failed" error="failed to destroy network for sandbox \"5907cbb1d3f8d792b51ba195422e07350eeac01d469c6556ac788d6b27c91ae4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:58:15.200664 kubelet[2661]: E0904 17:58:15.200508 2661 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"5907cbb1d3f8d792b51ba195422e07350eeac01d469c6556ac788d6b27c91ae4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5907cbb1d3f8d792b51ba195422e07350eeac01d469c6556ac788d6b27c91ae4" Sep 4 17:58:15.200664 kubelet[2661]: E0904 17:58:15.200563 2661 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5907cbb1d3f8d792b51ba195422e07350eeac01d469c6556ac788d6b27c91ae4"} Sep 4 17:58:15.200664 kubelet[2661]: E0904 17:58:15.200602 2661 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6007fafb-62cf-4f68-b42a-9d6500fa9f55\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5907cbb1d3f8d792b51ba195422e07350eeac01d469c6556ac788d6b27c91ae4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 17:58:15.200664 kubelet[2661]: E0904 17:58:15.200630 2661 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6007fafb-62cf-4f68-b42a-9d6500fa9f55\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5907cbb1d3f8d792b51ba195422e07350eeac01d469c6556ac788d6b27c91ae4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-bkmjk" podUID="6007fafb-62cf-4f68-b42a-9d6500fa9f55" Sep 4 17:58:22.135265 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2464973505.mount: Deactivated successfully. 
Sep 4 17:58:22.369102 containerd[1447]: time="2024-09-04T17:58:22.347754433Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:58:22.370811 containerd[1447]: time="2024-09-04T17:58:22.357802473Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.1: active requests=0, bytes read=117873564" Sep 4 17:58:22.372073 containerd[1447]: time="2024-09-04T17:58:22.371938630Z" level=info msg="ImageCreate event name:\"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:58:22.375378 containerd[1447]: time="2024-09-04T17:58:22.375276114Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:58:22.377092 containerd[1447]: time="2024-09-04T17:58:22.376945181Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.1\" with image id \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\", size \"117873426\" in 8.330812882s" Sep 4 17:58:22.377092 containerd[1447]: time="2024-09-04T17:58:22.377025770Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\" returns image reference \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\"" Sep 4 17:58:22.435826 containerd[1447]: time="2024-09-04T17:58:22.435609463Z" level=info msg="CreateContainer within sandbox \"e99f4c1271225ce2509c1c5b1f28ba4a2d302fd48ea851171fd8d4cfe46c7b31\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 4 17:58:22.508600 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount324548331.mount: Deactivated successfully. Sep 4 17:58:22.518818 containerd[1447]: time="2024-09-04T17:58:22.518740928Z" level=info msg="CreateContainer within sandbox \"e99f4c1271225ce2509c1c5b1f28ba4a2d302fd48ea851171fd8d4cfe46c7b31\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e47aa9161af6330cc25da1dda42bbb82423f77bee0c5077b5f62b98e41185375\"" Sep 4 17:58:22.521302 containerd[1447]: time="2024-09-04T17:58:22.520413552Z" level=info msg="StartContainer for \"e47aa9161af6330cc25da1dda42bbb82423f77bee0c5077b5f62b98e41185375\"" Sep 4 17:58:22.576658 systemd[1]: Started cri-containerd-e47aa9161af6330cc25da1dda42bbb82423f77bee0c5077b5f62b98e41185375.scope - libcontainer container e47aa9161af6330cc25da1dda42bbb82423f77bee0c5077b5f62b98e41185375. Sep 4 17:58:22.647152 containerd[1447]: time="2024-09-04T17:58:22.647088347Z" level=info msg="StartContainer for \"e47aa9161af6330cc25da1dda42bbb82423f77bee0c5077b5f62b98e41185375\" returns successfully" Sep 4 17:58:22.865157 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 4 17:58:22.865393 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Sep 4 17:58:23.315162 systemd[1]: run-containerd-runc-k8s.io-e47aa9161af6330cc25da1dda42bbb82423f77bee0c5077b5f62b98e41185375-runc.y0WETB.mount: Deactivated successfully. Sep 4 17:58:24.179373 systemd[1]: run-containerd-runc-k8s.io-e47aa9161af6330cc25da1dda42bbb82423f77bee0c5077b5f62b98e41185375-runc.9h1N2k.mount: Deactivated successfully.
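The Pulled image, CreateContainer, and StartContainer records above are containerd acting on CRI requests from the kubelet. Roughly the same sequence can be driven directly through containerd's 1.x Go client; this sketch skips the pod-sandbox wiring and CRI annotations, and the container and snapshot names are illustrative:

```go
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	// Kubernetes-managed containers live in containerd's "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// "PullImage ... returns image reference" in the log.
	image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/node:v3.28.1",
		containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// "CreateContainer within sandbox ..." (the real call also attaches the
	// container to the pod sandbox; omitted here).
	container, err := client.NewContainer(ctx, "calico-node-example",
		containerd.WithNewSnapshot("calico-node-example-snap", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)))
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// "StartContainer ... returns successfully": create the task, which is
	// roughly when a runc shim emits its "loading plugin" lines, then start.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
	log.Println("started", container.ID())
}
```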
Sep 4 17:58:25.161286 kernel: bpftool[3825]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 4 17:58:25.490797 systemd-networkd[1366]: vxlan.calico: Link UP Sep 4 17:58:25.490812 systemd-networkd[1366]: vxlan.calico: Gained carrier Sep 4 17:58:26.825657 systemd-networkd[1366]: vxlan.calico: Gained IPv6LL Sep 4 17:58:27.789576 containerd[1447]: time="2024-09-04T17:58:27.788693328Z" level=info msg="StopPodSandbox for \"5907cbb1d3f8d792b51ba195422e07350eeac01d469c6556ac788d6b27c91ae4\"" Sep 4 17:58:28.003774 kubelet[2661]: I0904 17:58:28.003607 2661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-mwzgk" podStartSLOduration=6.347519718 podStartE2EDuration="30.00355755s" podCreationTimestamp="2024-09-04 17:57:58 +0000 UTC" firstStartedPulling="2024-09-04 17:57:58.723123512 +0000 UTC m=+22.105033789" lastFinishedPulling="2024-09-04 17:58:22.379161303 +0000 UTC m=+45.761071621" observedRunningTime="2024-09-04 17:58:23.176588406 +0000 UTC m=+46.558498673" watchObservedRunningTime="2024-09-04 17:58:28.00355755 +0000 UTC m=+51.385467897" Sep 4 17:58:28.378564 containerd[1447]: 2024-09-04 17:58:27.953 [INFO][3906] k8s.go 608: Cleaning up netns ContainerID="5907cbb1d3f8d792b51ba195422e07350eeac01d469c6556ac788d6b27c91ae4" Sep 4 17:58:28.378564 containerd[1447]: 2024-09-04 17:58:27.961 [INFO][3906] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="5907cbb1d3f8d792b51ba195422e07350eeac01d469c6556ac788d6b27c91ae4" iface="eth0" netns="/var/run/netns/cni-6dfb8db0-87ca-cb1d-6c2a-6dc3ad313d5b" Sep 4 17:58:28.378564 containerd[1447]: 2024-09-04 17:58:27.962 [INFO][3906] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="5907cbb1d3f8d792b51ba195422e07350eeac01d469c6556ac788d6b27c91ae4" iface="eth0" netns="/var/run/netns/cni-6dfb8db0-87ca-cb1d-6c2a-6dc3ad313d5b" Sep 4 17:58:28.378564 containerd[1447]: 2024-09-04 17:58:27.969 [INFO][3906] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="5907cbb1d3f8d792b51ba195422e07350eeac01d469c6556ac788d6b27c91ae4" iface="eth0" netns="/var/run/netns/cni-6dfb8db0-87ca-cb1d-6c2a-6dc3ad313d5b" Sep 4 17:58:28.378564 containerd[1447]: 2024-09-04 17:58:27.969 [INFO][3906] k8s.go 615: Releasing IP address(es) ContainerID="5907cbb1d3f8d792b51ba195422e07350eeac01d469c6556ac788d6b27c91ae4" Sep 4 17:58:28.378564 containerd[1447]: 2024-09-04 17:58:27.969 [INFO][3906] utils.go 188: Calico CNI releasing IP address ContainerID="5907cbb1d3f8d792b51ba195422e07350eeac01d469c6556ac788d6b27c91ae4" Sep 4 17:58:28.378564 containerd[1447]: 2024-09-04 17:58:28.343 [INFO][3913] ipam_plugin.go 417: Releasing address using handleID ContainerID="5907cbb1d3f8d792b51ba195422e07350eeac01d469c6556ac788d6b27c91ae4" HandleID="k8s-pod-network.5907cbb1d3f8d792b51ba195422e07350eeac01d469c6556ac788d6b27c91ae4" Workload="ci--4054--1--0--2--9cde805234.novalocal-k8s-coredns--7db6d8ff4d--bkmjk-eth0" Sep 4 17:58:28.378564 containerd[1447]: 2024-09-04 17:58:28.345 [INFO][3913] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:58:28.378564 containerd[1447]: 2024-09-04 17:58:28.348 [INFO][3913] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:58:28.378564 containerd[1447]: 2024-09-04 17:58:28.370 [WARNING][3913] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5907cbb1d3f8d792b51ba195422e07350eeac01d469c6556ac788d6b27c91ae4" HandleID="k8s-pod-network.5907cbb1d3f8d792b51ba195422e07350eeac01d469c6556ac788d6b27c91ae4" Workload="ci--4054--1--0--2--9cde805234.novalocal-k8s-coredns--7db6d8ff4d--bkmjk-eth0" Sep 4 17:58:28.378564 containerd[1447]: 2024-09-04 17:58:28.370 [INFO][3913] ipam_plugin.go 445: Releasing address using workloadID ContainerID="5907cbb1d3f8d792b51ba195422e07350eeac01d469c6556ac788d6b27c91ae4" HandleID="k8s-pod-network.5907cbb1d3f8d792b51ba195422e07350eeac01d469c6556ac788d6b27c91ae4" Workload="ci--4054--1--0--2--9cde805234.novalocal-k8s-coredns--7db6d8ff4d--bkmjk-eth0" Sep 4 17:58:28.378564 containerd[1447]: 2024-09-04 17:58:28.372 [INFO][3913] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:58:28.378564 containerd[1447]: 2024-09-04 17:58:28.376 [INFO][3906] k8s.go 621: Teardown processing complete. ContainerID="5907cbb1d3f8d792b51ba195422e07350eeac01d469c6556ac788d6b27c91ae4" Sep 4 17:58:28.384193 containerd[1447]: time="2024-09-04T17:58:28.378768447Z" level=info msg="TearDown network for sandbox \"5907cbb1d3f8d792b51ba195422e07350eeac01d469c6556ac788d6b27c91ae4\" successfully" Sep 4 17:58:28.384193 containerd[1447]: time="2024-09-04T17:58:28.378801531Z" level=info msg="StopPodSandbox for \"5907cbb1d3f8d792b51ba195422e07350eeac01d469c6556ac788d6b27c91ae4\" returns successfully" Sep 4 17:58:28.386949 systemd[1]: run-netns-cni\x2d6dfb8db0\x2d87ca\x2dcb1d\x2d6c2a\x2d6dc3ad313d5b.mount: Deactivated successfully. Sep 4 17:58:28.389413 containerd[1447]: time="2024-09-04T17:58:28.388338033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bkmjk,Uid:6007fafb-62cf-4f68-b42a-9d6500fa9f55,Namespace:kube-system,Attempt:1,}" Sep 4 17:58:28.564992 systemd-networkd[1366]: cali5f25b8c6abe: Link UP Sep 4 17:58:28.565182 systemd-networkd[1366]: cali5f25b8c6abe: Gained carrier Sep 4 17:58:28.591393 containerd[1447]: 2024-09-04 17:58:28.460 [INFO][3920] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4054--1--0--2--9cde805234.novalocal-k8s-coredns--7db6d8ff4d--bkmjk-eth0 coredns-7db6d8ff4d- kube-system 6007fafb-62cf-4f68-b42a-9d6500fa9f55 700 0 2024-09-04 17:57:51 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4054-1-0-2-9cde805234.novalocal coredns-7db6d8ff4d-bkmjk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5f25b8c6abe [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="860d595e60f6bb6d40bfefce0b13ba2be4e6c4637f0ed2a0c9a0cafb08d54fa4" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bkmjk" WorkloadEndpoint="ci--4054--1--0--2--9cde805234.novalocal-k8s-coredns--7db6d8ff4d--bkmjk-" Sep 4 17:58:28.591393 containerd[1447]: 2024-09-04 17:58:28.460 [INFO][3920] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="860d595e60f6bb6d40bfefce0b13ba2be4e6c4637f0ed2a0c9a0cafb08d54fa4" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bkmjk" WorkloadEndpoint="ci--4054--1--0--2--9cde805234.novalocal-k8s-coredns--7db6d8ff4d--bkmjk-eth0" Sep 4 17:58:28.591393 containerd[1447]: 2024-09-04 17:58:28.506 [INFO][3932] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="860d595e60f6bb6d40bfefce0b13ba2be4e6c4637f0ed2a0c9a0cafb08d54fa4" HandleID="k8s-pod-network.860d595e60f6bb6d40bfefce0b13ba2be4e6c4637f0ed2a0c9a0cafb08d54fa4" 
Workload="ci--4054--1--0--2--9cde805234.novalocal-k8s-coredns--7db6d8ff4d--bkmjk-eth0" Sep 4 17:58:28.591393 containerd[1447]: 2024-09-04 17:58:28.521 [INFO][3932] ipam_plugin.go 270: Auto assigning IP ContainerID="860d595e60f6bb6d40bfefce0b13ba2be4e6c4637f0ed2a0c9a0cafb08d54fa4" HandleID="k8s-pod-network.860d595e60f6bb6d40bfefce0b13ba2be4e6c4637f0ed2a0c9a0cafb08d54fa4" Workload="ci--4054--1--0--2--9cde805234.novalocal-k8s-coredns--7db6d8ff4d--bkmjk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00030a350), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4054-1-0-2-9cde805234.novalocal", "pod":"coredns-7db6d8ff4d-bkmjk", "timestamp":"2024-09-04 17:58:28.506847277 +0000 UTC"}, Hostname:"ci-4054-1-0-2-9cde805234.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:58:28.591393 containerd[1447]: 2024-09-04 17:58:28.521 [INFO][3932] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:58:28.591393 containerd[1447]: 2024-09-04 17:58:28.521 [INFO][3932] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:58:28.591393 containerd[1447]: 2024-09-04 17:58:28.521 [INFO][3932] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4054-1-0-2-9cde805234.novalocal' Sep 4 17:58:28.591393 containerd[1447]: 2024-09-04 17:58:28.524 [INFO][3932] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.860d595e60f6bb6d40bfefce0b13ba2be4e6c4637f0ed2a0c9a0cafb08d54fa4" host="ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:58:28.591393 containerd[1447]: 2024-09-04 17:58:28.536 [INFO][3932] ipam.go 372: Looking up existing affinities for host host="ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:58:28.591393 containerd[1447]: 2024-09-04 17:58:28.542 [INFO][3932] ipam.go 489: Trying affinity for 192.168.50.192/26 host="ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:58:28.591393 containerd[1447]: 2024-09-04 17:58:28.544 [INFO][3932] ipam.go 155: Attempting to load block cidr=192.168.50.192/26 host="ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:58:28.591393 containerd[1447]: 2024-09-04 17:58:28.546 [INFO][3932] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.50.192/26 host="ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:58:28.591393 containerd[1447]: 2024-09-04 17:58:28.546 [INFO][3932] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.50.192/26 handle="k8s-pod-network.860d595e60f6bb6d40bfefce0b13ba2be4e6c4637f0ed2a0c9a0cafb08d54fa4" host="ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:58:28.591393 containerd[1447]: 2024-09-04 17:58:28.548 [INFO][3932] ipam.go 1685: Creating new handle: k8s-pod-network.860d595e60f6bb6d40bfefce0b13ba2be4e6c4637f0ed2a0c9a0cafb08d54fa4 Sep 4 17:58:28.591393 containerd[1447]: 2024-09-04 17:58:28.552 [INFO][3932] ipam.go 1203: Writing block in order to claim IPs block=192.168.50.192/26 handle="k8s-pod-network.860d595e60f6bb6d40bfefce0b13ba2be4e6c4637f0ed2a0c9a0cafb08d54fa4" host="ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:58:28.591393 containerd[1447]: 2024-09-04 17:58:28.557 [INFO][3932] ipam.go 1216: Successfully claimed IPs: [192.168.50.193/26] block=192.168.50.192/26 handle="k8s-pod-network.860d595e60f6bb6d40bfefce0b13ba2be4e6c4637f0ed2a0c9a0cafb08d54fa4" host="ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:58:28.591393 containerd[1447]: 2024-09-04 17:58:28.557 [INFO][3932] ipam.go 847: 
Auto-assigned 1 out of 1 IPv4s: [192.168.50.193/26] handle="k8s-pod-network.860d595e60f6bb6d40bfefce0b13ba2be4e6c4637f0ed2a0c9a0cafb08d54fa4" host="ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:58:28.591393 containerd[1447]: 2024-09-04 17:58:28.557 [INFO][3932] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:58:28.591393 containerd[1447]: 2024-09-04 17:58:28.557 [INFO][3932] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.50.193/26] IPv6=[] ContainerID="860d595e60f6bb6d40bfefce0b13ba2be4e6c4637f0ed2a0c9a0cafb08d54fa4" HandleID="k8s-pod-network.860d595e60f6bb6d40bfefce0b13ba2be4e6c4637f0ed2a0c9a0cafb08d54fa4" Workload="ci--4054--1--0--2--9cde805234.novalocal-k8s-coredns--7db6d8ff4d--bkmjk-eth0" Sep 4 17:58:28.594935 containerd[1447]: 2024-09-04 17:58:28.561 [INFO][3920] k8s.go 386: Populated endpoint ContainerID="860d595e60f6bb6d40bfefce0b13ba2be4e6c4637f0ed2a0c9a0cafb08d54fa4" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bkmjk" WorkloadEndpoint="ci--4054--1--0--2--9cde805234.novalocal-k8s-coredns--7db6d8ff4d--bkmjk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054--1--0--2--9cde805234.novalocal-k8s-coredns--7db6d8ff4d--bkmjk-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"6007fafb-62cf-4f68-b42a-9d6500fa9f55", ResourceVersion:"700", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 57, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054-1-0-2-9cde805234.novalocal", ContainerID:"", Pod:"coredns-7db6d8ff4d-bkmjk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5f25b8c6abe", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:58:28.594935 containerd[1447]: 2024-09-04 17:58:28.561 [INFO][3920] k8s.go 387: Calico CNI using IPs: [192.168.50.193/32] ContainerID="860d595e60f6bb6d40bfefce0b13ba2be4e6c4637f0ed2a0c9a0cafb08d54fa4" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bkmjk" WorkloadEndpoint="ci--4054--1--0--2--9cde805234.novalocal-k8s-coredns--7db6d8ff4d--bkmjk-eth0" Sep 4 17:58:28.594935 containerd[1447]: 2024-09-04 17:58:28.561 [INFO][3920] dataplane_linux.go 68: Setting the host side veth name to cali5f25b8c6abe ContainerID="860d595e60f6bb6d40bfefce0b13ba2be4e6c4637f0ed2a0c9a0cafb08d54fa4" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bkmjk" 
WorkloadEndpoint="ci--4054--1--0--2--9cde805234.novalocal-k8s-coredns--7db6d8ff4d--bkmjk-eth0" Sep 4 17:58:28.594935 containerd[1447]: 2024-09-04 17:58:28.563 [INFO][3920] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="860d595e60f6bb6d40bfefce0b13ba2be4e6c4637f0ed2a0c9a0cafb08d54fa4" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bkmjk" WorkloadEndpoint="ci--4054--1--0--2--9cde805234.novalocal-k8s-coredns--7db6d8ff4d--bkmjk-eth0" Sep 4 17:58:28.594935 containerd[1447]: 2024-09-04 17:58:28.563 [INFO][3920] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="860d595e60f6bb6d40bfefce0b13ba2be4e6c4637f0ed2a0c9a0cafb08d54fa4" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bkmjk" WorkloadEndpoint="ci--4054--1--0--2--9cde805234.novalocal-k8s-coredns--7db6d8ff4d--bkmjk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054--1--0--2--9cde805234.novalocal-k8s-coredns--7db6d8ff4d--bkmjk-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"6007fafb-62cf-4f68-b42a-9d6500fa9f55", ResourceVersion:"700", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 57, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054-1-0-2-9cde805234.novalocal", ContainerID:"860d595e60f6bb6d40bfefce0b13ba2be4e6c4637f0ed2a0c9a0cafb08d54fa4", Pod:"coredns-7db6d8ff4d-bkmjk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5f25b8c6abe", MAC:"de:f7:bc:16:29:bd", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:58:28.594935 containerd[1447]: 2024-09-04 17:58:28.585 [INFO][3920] k8s.go 500: Wrote updated endpoint to datastore ContainerID="860d595e60f6bb6d40bfefce0b13ba2be4e6c4637f0ed2a0c9a0cafb08d54fa4" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bkmjk" WorkloadEndpoint="ci--4054--1--0--2--9cde805234.novalocal-k8s-coredns--7db6d8ff4d--bkmjk-eth0" Sep 4 17:58:28.624793 containerd[1447]: time="2024-09-04T17:58:28.624689084Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:58:28.625759 containerd[1447]: time="2024-09-04T17:58:28.625726775Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:58:28.625913 containerd[1447]: time="2024-09-04T17:58:28.625888812Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:58:28.626225 containerd[1447]: time="2024-09-04T17:58:28.626198888Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:58:28.678652 systemd[1]: Started cri-containerd-860d595e60f6bb6d40bfefce0b13ba2be4e6c4637f0ed2a0c9a0cafb08d54fa4.scope - libcontainer container 860d595e60f6bb6d40bfefce0b13ba2be4e6c4637f0ed2a0c9a0cafb08d54fa4. Sep 4 17:58:28.754234 containerd[1447]: time="2024-09-04T17:58:28.754177622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bkmjk,Uid:6007fafb-62cf-4f68-b42a-9d6500fa9f55,Namespace:kube-system,Attempt:1,} returns sandbox id \"860d595e60f6bb6d40bfefce0b13ba2be4e6c4637f0ed2a0c9a0cafb08d54fa4\"" Sep 4 17:58:28.803706 containerd[1447]: time="2024-09-04T17:58:28.802213988Z" level=info msg="CreateContainer within sandbox \"860d595e60f6bb6d40bfefce0b13ba2be4e6c4637f0ed2a0c9a0cafb08d54fa4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 17:58:28.837759 containerd[1447]: time="2024-09-04T17:58:28.837718021Z" level=info msg="CreateContainer within sandbox \"860d595e60f6bb6d40bfefce0b13ba2be4e6c4637f0ed2a0c9a0cafb08d54fa4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f9b06acaaa1865957d7950813a831706e0a7dda5b0db42cec514a4db5054fab0\"" Sep 4 17:58:28.839271 containerd[1447]: time="2024-09-04T17:58:28.838405867Z" level=info msg="StartContainer for \"f9b06acaaa1865957d7950813a831706e0a7dda5b0db42cec514a4db5054fab0\"" Sep 4 17:58:28.898459 systemd[1]: Started cri-containerd-f9b06acaaa1865957d7950813a831706e0a7dda5b0db42cec514a4db5054fab0.scope - libcontainer container f9b06acaaa1865957d7950813a831706e0a7dda5b0db42cec514a4db5054fab0. Sep 4 17:58:28.960359 containerd[1447]: time="2024-09-04T17:58:28.960183466Z" level=info msg="StartContainer for \"f9b06acaaa1865957d7950813a831706e0a7dda5b0db42cec514a4db5054fab0\" returns successfully" Sep 4 17:58:29.177147 kubelet[2661]: I0904 17:58:29.177071 2661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-bkmjk" podStartSLOduration=38.177049031 podStartE2EDuration="38.177049031s" podCreationTimestamp="2024-09-04 17:57:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:58:29.175648772 +0000 UTC m=+52.557559039" watchObservedRunningTime="2024-09-04 17:58:29.177049031 +0000 UTC m=+52.558959308" Sep 4 17:58:29.790273 containerd[1447]: time="2024-09-04T17:58:29.789567099Z" level=info msg="StopPodSandbox for \"cce266914840a1d211b368eb3853f7adc3e200e15098b220fc1d5da1284aec31\"" Sep 4 17:58:29.790601 containerd[1447]: time="2024-09-04T17:58:29.790566143Z" level=info msg="StopPodSandbox for \"ec6a9f72910d939ef57927dffa061b45449a7cc44cdd2f9a195d7acd6a58fe26\"" Sep 4 17:58:29.964067 containerd[1447]: 2024-09-04 17:58:29.897 [INFO][4053] k8s.go 608: Cleaning up netns ContainerID="cce266914840a1d211b368eb3853f7adc3e200e15098b220fc1d5da1284aec31" Sep 4 17:58:29.964067 containerd[1447]: 2024-09-04 17:58:29.897 [INFO][4053] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="cce266914840a1d211b368eb3853f7adc3e200e15098b220fc1d5da1284aec31" iface="eth0" netns="/var/run/netns/cni-eba68171-b37c-e35a-819a-25faa11f9a65" Sep 4 17:58:29.964067 containerd[1447]: 2024-09-04 17:58:29.898 [INFO][4053] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="cce266914840a1d211b368eb3853f7adc3e200e15098b220fc1d5da1284aec31" iface="eth0" netns="/var/run/netns/cni-eba68171-b37c-e35a-819a-25faa11f9a65" Sep 4 17:58:29.964067 containerd[1447]: 2024-09-04 17:58:29.898 [INFO][4053] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="cce266914840a1d211b368eb3853f7adc3e200e15098b220fc1d5da1284aec31" iface="eth0" netns="/var/run/netns/cni-eba68171-b37c-e35a-819a-25faa11f9a65" Sep 4 17:58:29.964067 containerd[1447]: 2024-09-04 17:58:29.898 [INFO][4053] k8s.go 615: Releasing IP address(es) ContainerID="cce266914840a1d211b368eb3853f7adc3e200e15098b220fc1d5da1284aec31" Sep 4 17:58:29.964067 containerd[1447]: 2024-09-04 17:58:29.898 [INFO][4053] utils.go 188: Calico CNI releasing IP address ContainerID="cce266914840a1d211b368eb3853f7adc3e200e15098b220fc1d5da1284aec31" Sep 4 17:58:29.964067 containerd[1447]: 2024-09-04 17:58:29.946 [INFO][4072] ipam_plugin.go 417: Releasing address using handleID ContainerID="cce266914840a1d211b368eb3853f7adc3e200e15098b220fc1d5da1284aec31" HandleID="k8s-pod-network.cce266914840a1d211b368eb3853f7adc3e200e15098b220fc1d5da1284aec31" Workload="ci--4054--1--0--2--9cde805234.novalocal-k8s-calico--kube--controllers--798cd9c48--zp2dz-eth0" Sep 4 17:58:29.964067 containerd[1447]: 2024-09-04 17:58:29.946 [INFO][4072] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:58:29.964067 containerd[1447]: 2024-09-04 17:58:29.946 [INFO][4072] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:58:29.964067 containerd[1447]: 2024-09-04 17:58:29.957 [WARNING][4072] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="cce266914840a1d211b368eb3853f7adc3e200e15098b220fc1d5da1284aec31" HandleID="k8s-pod-network.cce266914840a1d211b368eb3853f7adc3e200e15098b220fc1d5da1284aec31" Workload="ci--4054--1--0--2--9cde805234.novalocal-k8s-calico--kube--controllers--798cd9c48--zp2dz-eth0" Sep 4 17:58:29.964067 containerd[1447]: 2024-09-04 17:58:29.957 [INFO][4072] ipam_plugin.go 445: Releasing address using workloadID ContainerID="cce266914840a1d211b368eb3853f7adc3e200e15098b220fc1d5da1284aec31" HandleID="k8s-pod-network.cce266914840a1d211b368eb3853f7adc3e200e15098b220fc1d5da1284aec31" Workload="ci--4054--1--0--2--9cde805234.novalocal-k8s-calico--kube--controllers--798cd9c48--zp2dz-eth0" Sep 4 17:58:29.964067 containerd[1447]: 2024-09-04 17:58:29.959 [INFO][4072] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:58:29.964067 containerd[1447]: 2024-09-04 17:58:29.961 [INFO][4053] k8s.go 621: Teardown processing complete. ContainerID="cce266914840a1d211b368eb3853f7adc3e200e15098b220fc1d5da1284aec31" Sep 4 17:58:29.965485 containerd[1447]: time="2024-09-04T17:58:29.965350337Z" level=info msg="TearDown network for sandbox \"cce266914840a1d211b368eb3853f7adc3e200e15098b220fc1d5da1284aec31\" successfully" Sep 4 17:58:29.965485 containerd[1447]: time="2024-09-04T17:58:29.965379453Z" level=info msg="StopPodSandbox for \"cce266914840a1d211b368eb3853f7adc3e200e15098b220fc1d5da1284aec31\" returns successfully" Sep 4 17:58:29.968376 systemd[1]: run-netns-cni\x2deba68171\x2db37c\x2de35a\x2d819a\x2d25faa11f9a65.mount: Deactivated successfully. 
Sep 4 17:58:29.973947 containerd[1447]: time="2024-09-04T17:58:29.973531527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-798cd9c48-zp2dz,Uid:d5415bb8-100d-4692-b66d-5875d35f5aef,Namespace:calico-system,Attempt:1,}" Sep 4 17:58:29.990809 containerd[1447]: 2024-09-04 17:58:29.891 [INFO][4061] k8s.go 608: Cleaning up netns ContainerID="ec6a9f72910d939ef57927dffa061b45449a7cc44cdd2f9a195d7acd6a58fe26" Sep 4 17:58:29.990809 containerd[1447]: 2024-09-04 17:58:29.893 [INFO][4061] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="ec6a9f72910d939ef57927dffa061b45449a7cc44cdd2f9a195d7acd6a58fe26" iface="eth0" netns="/var/run/netns/cni-1c96037b-2c65-9ce5-c72c-48a224298323" Sep 4 17:58:29.990809 containerd[1447]: 2024-09-04 17:58:29.893 [INFO][4061] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="ec6a9f72910d939ef57927dffa061b45449a7cc44cdd2f9a195d7acd6a58fe26" iface="eth0" netns="/var/run/netns/cni-1c96037b-2c65-9ce5-c72c-48a224298323" Sep 4 17:58:29.990809 containerd[1447]: 2024-09-04 17:58:29.893 [INFO][4061] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="ec6a9f72910d939ef57927dffa061b45449a7cc44cdd2f9a195d7acd6a58fe26" iface="eth0" netns="/var/run/netns/cni-1c96037b-2c65-9ce5-c72c-48a224298323" Sep 4 17:58:29.990809 containerd[1447]: 2024-09-04 17:58:29.893 [INFO][4061] k8s.go 615: Releasing IP address(es) ContainerID="ec6a9f72910d939ef57927dffa061b45449a7cc44cdd2f9a195d7acd6a58fe26" Sep 4 17:58:29.990809 containerd[1447]: 2024-09-04 17:58:29.893 [INFO][4061] utils.go 188: Calico CNI releasing IP address ContainerID="ec6a9f72910d939ef57927dffa061b45449a7cc44cdd2f9a195d7acd6a58fe26" Sep 4 17:58:29.990809 containerd[1447]: 2024-09-04 17:58:29.947 [INFO][4071] ipam_plugin.go 417: Releasing address using handleID ContainerID="ec6a9f72910d939ef57927dffa061b45449a7cc44cdd2f9a195d7acd6a58fe26" HandleID="k8s-pod-network.ec6a9f72910d939ef57927dffa061b45449a7cc44cdd2f9a195d7acd6a58fe26" Workload="ci--4054--1--0--2--9cde805234.novalocal-k8s-csi--node--driver--287hk-eth0" Sep 4 17:58:29.990809 containerd[1447]: 2024-09-04 17:58:29.949 [INFO][4071] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:58:29.990809 containerd[1447]: 2024-09-04 17:58:29.959 [INFO][4071] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:58:29.990809 containerd[1447]: 2024-09-04 17:58:29.978 [WARNING][4071] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="ec6a9f72910d939ef57927dffa061b45449a7cc44cdd2f9a195d7acd6a58fe26" HandleID="k8s-pod-network.ec6a9f72910d939ef57927dffa061b45449a7cc44cdd2f9a195d7acd6a58fe26" Workload="ci--4054--1--0--2--9cde805234.novalocal-k8s-csi--node--driver--287hk-eth0" Sep 4 17:58:29.990809 containerd[1447]: 2024-09-04 17:58:29.979 [INFO][4071] ipam_plugin.go 445: Releasing address using workloadID ContainerID="ec6a9f72910d939ef57927dffa061b45449a7cc44cdd2f9a195d7acd6a58fe26" HandleID="k8s-pod-network.ec6a9f72910d939ef57927dffa061b45449a7cc44cdd2f9a195d7acd6a58fe26" Workload="ci--4054--1--0--2--9cde805234.novalocal-k8s-csi--node--driver--287hk-eth0" Sep 4 17:58:29.990809 containerd[1447]: 2024-09-04 17:58:29.982 [INFO][4071] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:58:29.990809 containerd[1447]: 2024-09-04 17:58:29.985 [INFO][4061] k8s.go 621: Teardown processing complete. 
ContainerID="ec6a9f72910d939ef57927dffa061b45449a7cc44cdd2f9a195d7acd6a58fe26" Sep 4 17:58:29.991971 containerd[1447]: time="2024-09-04T17:58:29.991777133Z" level=info msg="TearDown network for sandbox \"ec6a9f72910d939ef57927dffa061b45449a7cc44cdd2f9a195d7acd6a58fe26\" successfully" Sep 4 17:58:29.991971 containerd[1447]: time="2024-09-04T17:58:29.991835317Z" level=info msg="StopPodSandbox for \"ec6a9f72910d939ef57927dffa061b45449a7cc44cdd2f9a195d7acd6a58fe26\" returns successfully" Sep 4 17:58:29.993312 containerd[1447]: time="2024-09-04T17:58:29.992568632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-287hk,Uid:f0db1dfa-f33e-43bf-98b0-16a182e9f9f9,Namespace:calico-system,Attempt:1,}" Sep 4 17:58:29.997217 systemd[1]: run-netns-cni\x2d1c96037b\x2d2c65\x2d9ce5\x2dc72c\x2d48a224298323.mount: Deactivated successfully. Sep 4 17:58:30.217691 systemd-networkd[1366]: cali5f25b8c6abe: Gained IPv6LL Sep 4 17:58:30.349825 systemd-networkd[1366]: cali5aa53c460eb: Link UP Sep 4 17:58:30.352776 systemd-networkd[1366]: cali5aa53c460eb: Gained carrier Sep 4 17:58:30.424562 containerd[1447]: 2024-09-04 17:58:30.085 [INFO][4083] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4054--1--0--2--9cde805234.novalocal-k8s-calico--kube--controllers--798cd9c48--zp2dz-eth0 calico-kube-controllers-798cd9c48- calico-system d5415bb8-100d-4692-b66d-5875d35f5aef 718 0 2024-09-04 17:57:58 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:798cd9c48 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4054-1-0-2-9cde805234.novalocal calico-kube-controllers-798cd9c48-zp2dz eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali5aa53c460eb [] []}} ContainerID="3660227b468104fd4a9fe2a67ff27f67c8e6a487b4d38f657caa4d321f14444c" Namespace="calico-system" Pod="calico-kube-controllers-798cd9c48-zp2dz" WorkloadEndpoint="ci--4054--1--0--2--9cde805234.novalocal-k8s-calico--kube--controllers--798cd9c48--zp2dz-" Sep 4 17:58:30.424562 containerd[1447]: 2024-09-04 17:58:30.086 [INFO][4083] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3660227b468104fd4a9fe2a67ff27f67c8e6a487b4d38f657caa4d321f14444c" Namespace="calico-system" Pod="calico-kube-controllers-798cd9c48-zp2dz" WorkloadEndpoint="ci--4054--1--0--2--9cde805234.novalocal-k8s-calico--kube--controllers--798cd9c48--zp2dz-eth0" Sep 4 17:58:30.424562 containerd[1447]: 2024-09-04 17:58:30.153 [INFO][4104] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3660227b468104fd4a9fe2a67ff27f67c8e6a487b4d38f657caa4d321f14444c" HandleID="k8s-pod-network.3660227b468104fd4a9fe2a67ff27f67c8e6a487b4d38f657caa4d321f14444c" Workload="ci--4054--1--0--2--9cde805234.novalocal-k8s-calico--kube--controllers--798cd9c48--zp2dz-eth0" Sep 4 17:58:30.424562 containerd[1447]: 2024-09-04 17:58:30.172 [INFO][4104] ipam_plugin.go 270: Auto assigning IP ContainerID="3660227b468104fd4a9fe2a67ff27f67c8e6a487b4d38f657caa4d321f14444c" HandleID="k8s-pod-network.3660227b468104fd4a9fe2a67ff27f67c8e6a487b4d38f657caa4d321f14444c" Workload="ci--4054--1--0--2--9cde805234.novalocal-k8s-calico--kube--controllers--798cd9c48--zp2dz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ee050), Attrs:map[string]string{"namespace":"calico-system", 
"node":"ci-4054-1-0-2-9cde805234.novalocal", "pod":"calico-kube-controllers-798cd9c48-zp2dz", "timestamp":"2024-09-04 17:58:30.153219121 +0000 UTC"}, Hostname:"ci-4054-1-0-2-9cde805234.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:58:30.424562 containerd[1447]: 2024-09-04 17:58:30.172 [INFO][4104] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:58:30.424562 containerd[1447]: 2024-09-04 17:58:30.172 [INFO][4104] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:58:30.424562 containerd[1447]: 2024-09-04 17:58:30.172 [INFO][4104] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4054-1-0-2-9cde805234.novalocal' Sep 4 17:58:30.424562 containerd[1447]: 2024-09-04 17:58:30.176 [INFO][4104] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3660227b468104fd4a9fe2a67ff27f67c8e6a487b4d38f657caa4d321f14444c" host="ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:58:30.424562 containerd[1447]: 2024-09-04 17:58:30.194 [INFO][4104] ipam.go 372: Looking up existing affinities for host host="ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:58:30.424562 containerd[1447]: 2024-09-04 17:58:30.205 [INFO][4104] ipam.go 489: Trying affinity for 192.168.50.192/26 host="ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:58:30.424562 containerd[1447]: 2024-09-04 17:58:30.211 [INFO][4104] ipam.go 155: Attempting to load block cidr=192.168.50.192/26 host="ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:58:30.424562 containerd[1447]: 2024-09-04 17:58:30.215 [INFO][4104] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.50.192/26 host="ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:58:30.424562 containerd[1447]: 2024-09-04 17:58:30.215 [INFO][4104] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.50.192/26 handle="k8s-pod-network.3660227b468104fd4a9fe2a67ff27f67c8e6a487b4d38f657caa4d321f14444c" host="ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:58:30.424562 containerd[1447]: 2024-09-04 17:58:30.220 [INFO][4104] ipam.go 1685: Creating new handle: k8s-pod-network.3660227b468104fd4a9fe2a67ff27f67c8e6a487b4d38f657caa4d321f14444c Sep 4 17:58:30.424562 containerd[1447]: 2024-09-04 17:58:30.227 [INFO][4104] ipam.go 1203: Writing block in order to claim IPs block=192.168.50.192/26 handle="k8s-pod-network.3660227b468104fd4a9fe2a67ff27f67c8e6a487b4d38f657caa4d321f14444c" host="ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:58:30.424562 containerd[1447]: 2024-09-04 17:58:30.332 [INFO][4104] ipam.go 1216: Successfully claimed IPs: [192.168.50.194/26] block=192.168.50.192/26 handle="k8s-pod-network.3660227b468104fd4a9fe2a67ff27f67c8e6a487b4d38f657caa4d321f14444c" host="ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:58:30.424562 containerd[1447]: 2024-09-04 17:58:30.333 [INFO][4104] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.50.194/26] handle="k8s-pod-network.3660227b468104fd4a9fe2a67ff27f67c8e6a487b4d38f657caa4d321f14444c" host="ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:58:30.424562 containerd[1447]: 2024-09-04 17:58:30.333 [INFO][4104] ipam_plugin.go 379: Released host-wide IPAM lock. 
Sep 4 17:58:30.424562 containerd[1447]: 2024-09-04 17:58:30.333 [INFO][4104] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.50.194/26] IPv6=[] ContainerID="3660227b468104fd4a9fe2a67ff27f67c8e6a487b4d38f657caa4d321f14444c" HandleID="k8s-pod-network.3660227b468104fd4a9fe2a67ff27f67c8e6a487b4d38f657caa4d321f14444c" Workload="ci--4054--1--0--2--9cde805234.novalocal-k8s-calico--kube--controllers--798cd9c48--zp2dz-eth0" Sep 4 17:58:30.427732 containerd[1447]: 2024-09-04 17:58:30.340 [INFO][4083] k8s.go 386: Populated endpoint ContainerID="3660227b468104fd4a9fe2a67ff27f67c8e6a487b4d38f657caa4d321f14444c" Namespace="calico-system" Pod="calico-kube-controllers-798cd9c48-zp2dz" WorkloadEndpoint="ci--4054--1--0--2--9cde805234.novalocal-k8s-calico--kube--controllers--798cd9c48--zp2dz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054--1--0--2--9cde805234.novalocal-k8s-calico--kube--controllers--798cd9c48--zp2dz-eth0", GenerateName:"calico-kube-controllers-798cd9c48-", Namespace:"calico-system", SelfLink:"", UID:"d5415bb8-100d-4692-b66d-5875d35f5aef", ResourceVersion:"718", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 57, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"798cd9c48", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054-1-0-2-9cde805234.novalocal", ContainerID:"", Pod:"calico-kube-controllers-798cd9c48-zp2dz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.50.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5aa53c460eb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:58:30.427732 containerd[1447]: 2024-09-04 17:58:30.341 [INFO][4083] k8s.go 387: Calico CNI using IPs: [192.168.50.194/32] ContainerID="3660227b468104fd4a9fe2a67ff27f67c8e6a487b4d38f657caa4d321f14444c" Namespace="calico-system" Pod="calico-kube-controllers-798cd9c48-zp2dz" WorkloadEndpoint="ci--4054--1--0--2--9cde805234.novalocal-k8s-calico--kube--controllers--798cd9c48--zp2dz-eth0" Sep 4 17:58:30.427732 containerd[1447]: 2024-09-04 17:58:30.341 [INFO][4083] dataplane_linux.go 68: Setting the host side veth name to cali5aa53c460eb ContainerID="3660227b468104fd4a9fe2a67ff27f67c8e6a487b4d38f657caa4d321f14444c" Namespace="calico-system" Pod="calico-kube-controllers-798cd9c48-zp2dz" WorkloadEndpoint="ci--4054--1--0--2--9cde805234.novalocal-k8s-calico--kube--controllers--798cd9c48--zp2dz-eth0" Sep 4 17:58:30.427732 containerd[1447]: 2024-09-04 17:58:30.351 [INFO][4083] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="3660227b468104fd4a9fe2a67ff27f67c8e6a487b4d38f657caa4d321f14444c" Namespace="calico-system" Pod="calico-kube-controllers-798cd9c48-zp2dz" WorkloadEndpoint="ci--4054--1--0--2--9cde805234.novalocal-k8s-calico--kube--controllers--798cd9c48--zp2dz-eth0" Sep 4 
17:58:30.427732 containerd[1447]: 2024-09-04 17:58:30.352 [INFO][4083] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3660227b468104fd4a9fe2a67ff27f67c8e6a487b4d38f657caa4d321f14444c" Namespace="calico-system" Pod="calico-kube-controllers-798cd9c48-zp2dz" WorkloadEndpoint="ci--4054--1--0--2--9cde805234.novalocal-k8s-calico--kube--controllers--798cd9c48--zp2dz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054--1--0--2--9cde805234.novalocal-k8s-calico--kube--controllers--798cd9c48--zp2dz-eth0", GenerateName:"calico-kube-controllers-798cd9c48-", Namespace:"calico-system", SelfLink:"", UID:"d5415bb8-100d-4692-b66d-5875d35f5aef", ResourceVersion:"718", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 57, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"798cd9c48", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054-1-0-2-9cde805234.novalocal", ContainerID:"3660227b468104fd4a9fe2a67ff27f67c8e6a487b4d38f657caa4d321f14444c", Pod:"calico-kube-controllers-798cd9c48-zp2dz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.50.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5aa53c460eb", MAC:"92:76:7e:d7:b1:0e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:58:30.427732 containerd[1447]: 2024-09-04 17:58:30.407 [INFO][4083] k8s.go 500: Wrote updated endpoint to datastore ContainerID="3660227b468104fd4a9fe2a67ff27f67c8e6a487b4d38f657caa4d321f14444c" Namespace="calico-system" Pod="calico-kube-controllers-798cd9c48-zp2dz" WorkloadEndpoint="ci--4054--1--0--2--9cde805234.novalocal-k8s-calico--kube--controllers--798cd9c48--zp2dz-eth0" Sep 4 17:58:30.642446 containerd[1447]: time="2024-09-04T17:58:30.641015150Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:58:30.642446 containerd[1447]: time="2024-09-04T17:58:30.641178099Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:58:30.642446 containerd[1447]: time="2024-09-04T17:58:30.641316619Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:58:30.642446 containerd[1447]: time="2024-09-04T17:58:30.641554194Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:58:30.693490 systemd[1]: Started cri-containerd-3660227b468104fd4a9fe2a67ff27f67c8e6a487b4d38f657caa4d321f14444c.scope - libcontainer container 3660227b468104fd4a9fe2a67ff27f67c8e6a487b4d38f657caa4d321f14444c. 
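Behind each of the k8s.go 608 to 621 teardown traces in this log, containerd invokes the Calico CNI plugin with DEL against the sandbox's network namespace. A rough sketch of that invocation via the libcni package; the conflist path is the conventional Calico name (an assumption, it is not shown in the log), and the IDs are copied from the 5907... trace above:

```go
package main

import (
	"context"
	"log"

	"github.com/containernetworking/cni/libcni"
)

func main() {
	// CNI plugin binaries and network config as Calico typically installs them.
	cni := libcni.NewCNIConfig([]string{"/opt/cni/bin"}, nil)
	netconf, err := libcni.ConfListFromFile("/etc/cni/net.d/10-calico.conflist")
	if err != nil {
		log.Fatal(err)
	}
	rt := &libcni.RuntimeConf{
		// Sandbox ID and netns path as they appear in the teardown trace.
		ContainerID: "5907cbb1d3f8d792b51ba195422e07350eeac01d469c6556ac788d6b27c91ae4",
		NetNS:       "/var/run/netns/cni-6dfb8db0-87ca-cb1d-6c2a-6dc3ad313d5b",
		IfName:      "eth0",
	}
	// DEL must be idempotent per the CNI spec, which is why the IPAM plugin
	// above logs "Asked to release address but it doesn't exist. Ignoring"
	// instead of returning an error.
	if err := cni.DelNetworkList(context.Background(), netconf, rt); err != nil {
		log.Fatal(err)
	}
	log.Println("network released for", rt.ContainerID)
}
```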
Sep 4 17:58:30.738981 containerd[1447]: time="2024-09-04T17:58:30.738920727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-798cd9c48-zp2dz,Uid:d5415bb8-100d-4692-b66d-5875d35f5aef,Namespace:calico-system,Attempt:1,} returns sandbox id \"3660227b468104fd4a9fe2a67ff27f67c8e6a487b4d38f657caa4d321f14444c\"" Sep 4 17:58:30.766397 containerd[1447]: time="2024-09-04T17:58:30.741720511Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\"" Sep 4 17:58:30.789283 containerd[1447]: time="2024-09-04T17:58:30.788642053Z" level=info msg="StopPodSandbox for \"8956f21a2a02624f57561bba949223b27ce2149177fc4dcf847ea01dd454ea14\"" Sep 4 17:58:30.904585 systemd-networkd[1366]: cali81dd78d8d2b: Link UP Sep 4 17:58:30.908955 systemd-networkd[1366]: cali81dd78d8d2b: Gained carrier Sep 4 17:58:30.945073 containerd[1447]: 2024-09-04 17:58:30.129 [INFO][4096] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4054--1--0--2--9cde805234.novalocal-k8s-csi--node--driver--287hk-eth0 csi-node-driver- calico-system f0db1dfa-f33e-43bf-98b0-16a182e9f9f9 717 0 2024-09-04 17:57:58 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65cb9bb8f4 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ci-4054-1-0-2-9cde805234.novalocal csi-node-driver-287hk eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali81dd78d8d2b [] []}} ContainerID="9180b8a03057dbdc80f9efa8756144d5ca7e2125680c05456d463d3a780ffa3c" Namespace="calico-system" Pod="csi-node-driver-287hk" WorkloadEndpoint="ci--4054--1--0--2--9cde805234.novalocal-k8s-csi--node--driver--287hk-" Sep 4 17:58:30.945073 containerd[1447]: 2024-09-04 17:58:30.129 [INFO][4096] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9180b8a03057dbdc80f9efa8756144d5ca7e2125680c05456d463d3a780ffa3c" Namespace="calico-system" Pod="csi-node-driver-287hk" WorkloadEndpoint="ci--4054--1--0--2--9cde805234.novalocal-k8s-csi--node--driver--287hk-eth0" Sep 4 17:58:30.945073 containerd[1447]: 2024-09-04 17:58:30.190 [INFO][4111] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9180b8a03057dbdc80f9efa8756144d5ca7e2125680c05456d463d3a780ffa3c" HandleID="k8s-pod-network.9180b8a03057dbdc80f9efa8756144d5ca7e2125680c05456d463d3a780ffa3c" Workload="ci--4054--1--0--2--9cde805234.novalocal-k8s-csi--node--driver--287hk-eth0" Sep 4 17:58:30.945073 containerd[1447]: 2024-09-04 17:58:30.212 [INFO][4111] ipam_plugin.go 270: Auto assigning IP ContainerID="9180b8a03057dbdc80f9efa8756144d5ca7e2125680c05456d463d3a780ffa3c" HandleID="k8s-pod-network.9180b8a03057dbdc80f9efa8756144d5ca7e2125680c05456d463d3a780ffa3c" Workload="ci--4054--1--0--2--9cde805234.novalocal-k8s-csi--node--driver--287hk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000378200), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4054-1-0-2-9cde805234.novalocal", "pod":"csi-node-driver-287hk", "timestamp":"2024-09-04 17:58:30.190896091 +0000 UTC"}, Hostname:"ci-4054-1-0-2-9cde805234.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:58:30.945073 containerd[1447]: 2024-09-04 17:58:30.212 [INFO][4111] 
ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:58:30.945073 containerd[1447]: 2024-09-04 17:58:30.334 [INFO][4111] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:58:30.945073 containerd[1447]: 2024-09-04 17:58:30.334 [INFO][4111] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4054-1-0-2-9cde805234.novalocal' Sep 4 17:58:30.945073 containerd[1447]: 2024-09-04 17:58:30.592 [INFO][4111] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9180b8a03057dbdc80f9efa8756144d5ca7e2125680c05456d463d3a780ffa3c" host="ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:58:30.945073 containerd[1447]: 2024-09-04 17:58:30.804 [INFO][4111] ipam.go 372: Looking up existing affinities for host host="ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:58:30.945073 containerd[1447]: 2024-09-04 17:58:30.839 [INFO][4111] ipam.go 489: Trying affinity for 192.168.50.192/26 host="ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:58:30.945073 containerd[1447]: 2024-09-04 17:58:30.850 [INFO][4111] ipam.go 155: Attempting to load block cidr=192.168.50.192/26 host="ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:58:30.945073 containerd[1447]: 2024-09-04 17:58:30.857 [INFO][4111] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.50.192/26 host="ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:58:30.945073 containerd[1447]: 2024-09-04 17:58:30.858 [INFO][4111] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.50.192/26 handle="k8s-pod-network.9180b8a03057dbdc80f9efa8756144d5ca7e2125680c05456d463d3a780ffa3c" host="ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:58:30.945073 containerd[1447]: 2024-09-04 17:58:30.867 [INFO][4111] ipam.go 1685: Creating new handle: k8s-pod-network.9180b8a03057dbdc80f9efa8756144d5ca7e2125680c05456d463d3a780ffa3c Sep 4 17:58:30.945073 containerd[1447]: 2024-09-04 17:58:30.875 [INFO][4111] ipam.go 1203: Writing block in order to claim IPs block=192.168.50.192/26 handle="k8s-pod-network.9180b8a03057dbdc80f9efa8756144d5ca7e2125680c05456d463d3a780ffa3c" host="ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:58:30.945073 containerd[1447]: 2024-09-04 17:58:30.888 [INFO][4111] ipam.go 1216: Successfully claimed IPs: [192.168.50.195/26] block=192.168.50.192/26 handle="k8s-pod-network.9180b8a03057dbdc80f9efa8756144d5ca7e2125680c05456d463d3a780ffa3c" host="ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:58:30.945073 containerd[1447]: 2024-09-04 17:58:30.888 [INFO][4111] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.50.195/26] handle="k8s-pod-network.9180b8a03057dbdc80f9efa8756144d5ca7e2125680c05456d463d3a780ffa3c" host="ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:58:30.945073 containerd[1447]: 2024-09-04 17:58:30.888 [INFO][4111] ipam_plugin.go 379: Released host-wide IPAM lock. 
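One decoding note for the WorkloadEndpoint dumps in this log: ports are printed as Go hex literals. For the coredns endpoint shown earlier:

```go
package main

import "fmt"

func main() {
	// Ports in the endpoint dumps are Go hex literals; in decimal:
	fmt.Println(0x35)   // 53, the dns and dns-tcp ports
	fmt.Println(0x23c1) // 9153, coredns's Prometheus metrics port
}
```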
Sep 4 17:58:30.945073 containerd[1447]: 2024-09-04 17:58:30.888 [INFO][4111] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.50.195/26] IPv6=[] ContainerID="9180b8a03057dbdc80f9efa8756144d5ca7e2125680c05456d463d3a780ffa3c" HandleID="k8s-pod-network.9180b8a03057dbdc80f9efa8756144d5ca7e2125680c05456d463d3a780ffa3c" Workload="ci--4054--1--0--2--9cde805234.novalocal-k8s-csi--node--driver--287hk-eth0" Sep 4 17:58:30.946591 containerd[1447]: 2024-09-04 17:58:30.895 [INFO][4096] k8s.go 386: Populated endpoint ContainerID="9180b8a03057dbdc80f9efa8756144d5ca7e2125680c05456d463d3a780ffa3c" Namespace="calico-system" Pod="csi-node-driver-287hk" WorkloadEndpoint="ci--4054--1--0--2--9cde805234.novalocal-k8s-csi--node--driver--287hk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054--1--0--2--9cde805234.novalocal-k8s-csi--node--driver--287hk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f0db1dfa-f33e-43bf-98b0-16a182e9f9f9", ResourceVersion:"717", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 57, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65cb9bb8f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054-1-0-2-9cde805234.novalocal", ContainerID:"", Pod:"csi-node-driver-287hk", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.50.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali81dd78d8d2b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:58:30.946591 containerd[1447]: 2024-09-04 17:58:30.895 [INFO][4096] k8s.go 387: Calico CNI using IPs: [192.168.50.195/32] ContainerID="9180b8a03057dbdc80f9efa8756144d5ca7e2125680c05456d463d3a780ffa3c" Namespace="calico-system" Pod="csi-node-driver-287hk" WorkloadEndpoint="ci--4054--1--0--2--9cde805234.novalocal-k8s-csi--node--driver--287hk-eth0" Sep 4 17:58:30.946591 containerd[1447]: 2024-09-04 17:58:30.895 [INFO][4096] dataplane_linux.go 68: Setting the host side veth name to cali81dd78d8d2b ContainerID="9180b8a03057dbdc80f9efa8756144d5ca7e2125680c05456d463d3a780ffa3c" Namespace="calico-system" Pod="csi-node-driver-287hk" WorkloadEndpoint="ci--4054--1--0--2--9cde805234.novalocal-k8s-csi--node--driver--287hk-eth0" Sep 4 17:58:30.946591 containerd[1447]: 2024-09-04 17:58:30.910 [INFO][4096] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="9180b8a03057dbdc80f9efa8756144d5ca7e2125680c05456d463d3a780ffa3c" Namespace="calico-system" Pod="csi-node-driver-287hk" WorkloadEndpoint="ci--4054--1--0--2--9cde805234.novalocal-k8s-csi--node--driver--287hk-eth0" Sep 4 17:58:30.946591 containerd[1447]: 2024-09-04 17:58:30.913 [INFO][4096] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9180b8a03057dbdc80f9efa8756144d5ca7e2125680c05456d463d3a780ffa3c" 
Namespace="calico-system" Pod="csi-node-driver-287hk" WorkloadEndpoint="ci--4054--1--0--2--9cde805234.novalocal-k8s-csi--node--driver--287hk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054--1--0--2--9cde805234.novalocal-k8s-csi--node--driver--287hk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f0db1dfa-f33e-43bf-98b0-16a182e9f9f9", ResourceVersion:"717", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 57, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65cb9bb8f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054-1-0-2-9cde805234.novalocal", ContainerID:"9180b8a03057dbdc80f9efa8756144d5ca7e2125680c05456d463d3a780ffa3c", Pod:"csi-node-driver-287hk", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.50.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali81dd78d8d2b", MAC:"8e:ef:12:0b:83:62", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:58:30.946591 containerd[1447]: 2024-09-04 17:58:30.937 [INFO][4096] k8s.go 500: Wrote updated endpoint to datastore ContainerID="9180b8a03057dbdc80f9efa8756144d5ca7e2125680c05456d463d3a780ffa3c" Namespace="calico-system" Pod="csi-node-driver-287hk" WorkloadEndpoint="ci--4054--1--0--2--9cde805234.novalocal-k8s-csi--node--driver--287hk-eth0" Sep 4 17:58:31.003771 containerd[1447]: time="2024-09-04T17:58:31.002632074Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:58:31.003771 containerd[1447]: time="2024-09-04T17:58:31.002761096Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:58:31.003771 containerd[1447]: time="2024-09-04T17:58:31.002785554Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:58:31.003771 containerd[1447]: time="2024-09-04T17:58:31.002971758Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:58:31.025869 containerd[1447]: 2024-09-04 17:58:30.927 [INFO][4188] k8s.go 608: Cleaning up netns ContainerID="8956f21a2a02624f57561bba949223b27ce2149177fc4dcf847ea01dd454ea14" Sep 4 17:58:31.025869 containerd[1447]: 2024-09-04 17:58:30.928 [INFO][4188] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="8956f21a2a02624f57561bba949223b27ce2149177fc4dcf847ea01dd454ea14" iface="eth0" netns="/var/run/netns/cni-1ff164e5-1368-759d-95d6-92a5c5fda3f3" Sep 4 17:58:31.025869 containerd[1447]: 2024-09-04 17:58:30.929 [INFO][4188] dataplane_linux.go 541: Entered netns, deleting veth. 
ContainerID="8956f21a2a02624f57561bba949223b27ce2149177fc4dcf847ea01dd454ea14" iface="eth0" netns="/var/run/netns/cni-1ff164e5-1368-759d-95d6-92a5c5fda3f3" Sep 4 17:58:31.025869 containerd[1447]: 2024-09-04 17:58:30.929 [INFO][4188] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="8956f21a2a02624f57561bba949223b27ce2149177fc4dcf847ea01dd454ea14" iface="eth0" netns="/var/run/netns/cni-1ff164e5-1368-759d-95d6-92a5c5fda3f3" Sep 4 17:58:31.025869 containerd[1447]: 2024-09-04 17:58:30.929 [INFO][4188] k8s.go 615: Releasing IP address(es) ContainerID="8956f21a2a02624f57561bba949223b27ce2149177fc4dcf847ea01dd454ea14" Sep 4 17:58:31.025869 containerd[1447]: 2024-09-04 17:58:30.929 [INFO][4188] utils.go 188: Calico CNI releasing IP address ContainerID="8956f21a2a02624f57561bba949223b27ce2149177fc4dcf847ea01dd454ea14" Sep 4 17:58:31.025869 containerd[1447]: 2024-09-04 17:58:30.985 [INFO][4204] ipam_plugin.go 417: Releasing address using handleID ContainerID="8956f21a2a02624f57561bba949223b27ce2149177fc4dcf847ea01dd454ea14" HandleID="k8s-pod-network.8956f21a2a02624f57561bba949223b27ce2149177fc4dcf847ea01dd454ea14" Workload="ci--4054--1--0--2--9cde805234.novalocal-k8s-coredns--7db6d8ff4d--zzgpk-eth0" Sep 4 17:58:31.025869 containerd[1447]: 2024-09-04 17:58:30.985 [INFO][4204] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:58:31.025869 containerd[1447]: 2024-09-04 17:58:30.985 [INFO][4204] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:58:31.025869 containerd[1447]: 2024-09-04 17:58:31.007 [WARNING][4204] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="8956f21a2a02624f57561bba949223b27ce2149177fc4dcf847ea01dd454ea14" HandleID="k8s-pod-network.8956f21a2a02624f57561bba949223b27ce2149177fc4dcf847ea01dd454ea14" Workload="ci--4054--1--0--2--9cde805234.novalocal-k8s-coredns--7db6d8ff4d--zzgpk-eth0" Sep 4 17:58:31.025869 containerd[1447]: 2024-09-04 17:58:31.007 [INFO][4204] ipam_plugin.go 445: Releasing address using workloadID ContainerID="8956f21a2a02624f57561bba949223b27ce2149177fc4dcf847ea01dd454ea14" HandleID="k8s-pod-network.8956f21a2a02624f57561bba949223b27ce2149177fc4dcf847ea01dd454ea14" Workload="ci--4054--1--0--2--9cde805234.novalocal-k8s-coredns--7db6d8ff4d--zzgpk-eth0" Sep 4 17:58:31.025869 containerd[1447]: 2024-09-04 17:58:31.013 [INFO][4204] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:58:31.025869 containerd[1447]: 2024-09-04 17:58:31.022 [INFO][4188] k8s.go 621: Teardown processing complete. ContainerID="8956f21a2a02624f57561bba949223b27ce2149177fc4dcf847ea01dd454ea14" Sep 4 17:58:31.030612 containerd[1447]: time="2024-09-04T17:58:31.030499283Z" level=info msg="TearDown network for sandbox \"8956f21a2a02624f57561bba949223b27ce2149177fc4dcf847ea01dd454ea14\" successfully" Sep 4 17:58:31.030726 containerd[1447]: time="2024-09-04T17:58:31.030709094Z" level=info msg="StopPodSandbox for \"8956f21a2a02624f57561bba949223b27ce2149177fc4dcf847ea01dd454ea14\" returns successfully" Sep 4 17:58:31.033823 containerd[1447]: time="2024-09-04T17:58:31.033783111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zzgpk,Uid:57139a4b-d4db-400d-8527-bd5a87379b62,Namespace:kube-system,Attempt:1,}" Sep 4 17:58:31.034198 systemd[1]: run-netns-cni\x2d1ff164e5\x2d1368\x2d759d\x2d95d6\x2d92a5c5fda3f3.mount: Deactivated successfully. 
Sep 4 17:58:31.050535 systemd[1]: Started cri-containerd-9180b8a03057dbdc80f9efa8756144d5ca7e2125680c05456d463d3a780ffa3c.scope - libcontainer container 9180b8a03057dbdc80f9efa8756144d5ca7e2125680c05456d463d3a780ffa3c. Sep 4 17:58:31.133920 containerd[1447]: time="2024-09-04T17:58:31.133865434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-287hk,Uid:f0db1dfa-f33e-43bf-98b0-16a182e9f9f9,Namespace:calico-system,Attempt:1,} returns sandbox id \"9180b8a03057dbdc80f9efa8756144d5ca7e2125680c05456d463d3a780ffa3c\"" Sep 4 17:58:31.299446 systemd-networkd[1366]: califa290331bc1: Link UP Sep 4 17:58:31.300996 systemd-networkd[1366]: califa290331bc1: Gained carrier Sep 4 17:58:31.327535 containerd[1447]: 2024-09-04 17:58:31.154 [INFO][4258] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4054--1--0--2--9cde805234.novalocal-k8s-coredns--7db6d8ff4d--zzgpk-eth0 coredns-7db6d8ff4d- kube-system 57139a4b-d4db-400d-8527-bd5a87379b62 733 0 2024-09-04 17:57:51 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4054-1-0-2-9cde805234.novalocal coredns-7db6d8ff4d-zzgpk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califa290331bc1 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="bc4a4df0e5d2094e9c19b695f1bf232efd49e1899ca834ee641e7a4b0182803a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zzgpk" WorkloadEndpoint="ci--4054--1--0--2--9cde805234.novalocal-k8s-coredns--7db6d8ff4d--zzgpk-" Sep 4 17:58:31.327535 containerd[1447]: 2024-09-04 17:58:31.154 [INFO][4258] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="bc4a4df0e5d2094e9c19b695f1bf232efd49e1899ca834ee641e7a4b0182803a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zzgpk" WorkloadEndpoint="ci--4054--1--0--2--9cde805234.novalocal-k8s-coredns--7db6d8ff4d--zzgpk-eth0" Sep 4 17:58:31.327535 containerd[1447]: 2024-09-04 17:58:31.214 [INFO][4276] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bc4a4df0e5d2094e9c19b695f1bf232efd49e1899ca834ee641e7a4b0182803a" HandleID="k8s-pod-network.bc4a4df0e5d2094e9c19b695f1bf232efd49e1899ca834ee641e7a4b0182803a" Workload="ci--4054--1--0--2--9cde805234.novalocal-k8s-coredns--7db6d8ff4d--zzgpk-eth0" Sep 4 17:58:31.327535 containerd[1447]: 2024-09-04 17:58:31.240 [INFO][4276] ipam_plugin.go 270: Auto assigning IP ContainerID="bc4a4df0e5d2094e9c19b695f1bf232efd49e1899ca834ee641e7a4b0182803a" HandleID="k8s-pod-network.bc4a4df0e5d2094e9c19b695f1bf232efd49e1899ca834ee641e7a4b0182803a" Workload="ci--4054--1--0--2--9cde805234.novalocal-k8s-coredns--7db6d8ff4d--zzgpk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318c80), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4054-1-0-2-9cde805234.novalocal", "pod":"coredns-7db6d8ff4d-zzgpk", "timestamp":"2024-09-04 17:58:31.214408342 +0000 UTC"}, Hostname:"ci-4054-1-0-2-9cde805234.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:58:31.327535 containerd[1447]: 2024-09-04 17:58:31.240 [INFO][4276] ipam_plugin.go 358: About to acquire host-wide IPAM lock. 
Sep 4 17:58:31.327535 containerd[1447]: 2024-09-04 17:58:31.240 [INFO][4276] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:58:31.327535 containerd[1447]: 2024-09-04 17:58:31.240 [INFO][4276] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4054-1-0-2-9cde805234.novalocal' Sep 4 17:58:31.327535 containerd[1447]: 2024-09-04 17:58:31.243 [INFO][4276] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.bc4a4df0e5d2094e9c19b695f1bf232efd49e1899ca834ee641e7a4b0182803a" host="ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:58:31.327535 containerd[1447]: 2024-09-04 17:58:31.248 [INFO][4276] ipam.go 372: Looking up existing affinities for host host="ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:58:31.327535 containerd[1447]: 2024-09-04 17:58:31.254 [INFO][4276] ipam.go 489: Trying affinity for 192.168.50.192/26 host="ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:58:31.327535 containerd[1447]: 2024-09-04 17:58:31.256 [INFO][4276] ipam.go 155: Attempting to load block cidr=192.168.50.192/26 host="ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:58:31.327535 containerd[1447]: 2024-09-04 17:58:31.260 [INFO][4276] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.50.192/26 host="ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:58:31.327535 containerd[1447]: 2024-09-04 17:58:31.262 [INFO][4276] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.50.192/26 handle="k8s-pod-network.bc4a4df0e5d2094e9c19b695f1bf232efd49e1899ca834ee641e7a4b0182803a" host="ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:58:31.327535 containerd[1447]: 2024-09-04 17:58:31.271 [INFO][4276] ipam.go 1685: Creating new handle: k8s-pod-network.bc4a4df0e5d2094e9c19b695f1bf232efd49e1899ca834ee641e7a4b0182803a Sep 4 17:58:31.327535 containerd[1447]: 2024-09-04 17:58:31.276 [INFO][4276] ipam.go 1203: Writing block in order to claim IPs block=192.168.50.192/26 handle="k8s-pod-network.bc4a4df0e5d2094e9c19b695f1bf232efd49e1899ca834ee641e7a4b0182803a" host="ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:58:31.327535 containerd[1447]: 2024-09-04 17:58:31.286 [INFO][4276] ipam.go 1216: Successfully claimed IPs: [192.168.50.196/26] block=192.168.50.192/26 handle="k8s-pod-network.bc4a4df0e5d2094e9c19b695f1bf232efd49e1899ca834ee641e7a4b0182803a" host="ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:58:31.327535 containerd[1447]: 2024-09-04 17:58:31.286 [INFO][4276] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.50.196/26] handle="k8s-pod-network.bc4a4df0e5d2094e9c19b695f1bf232efd49e1899ca834ee641e7a4b0182803a" host="ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:58:31.327535 containerd[1447]: 2024-09-04 17:58:31.286 [INFO][4276] ipam_plugin.go 379: Released host-wide IPAM lock. 
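[Annotation] The IPAM walk just above confirms this host's affinity for the 192.168.50.192/26 block, loads it, and claims the next free address — 192.168.50.196, consistent with .192–.195 already being handed out earlier in this log. The sketch below shows that "first free ordinal in an affine /26" step, under the assumption that usage is tracked as one bit per ordinal; Calico's real block representation differs.

// Sketch of claiming the next free address from an affine /26 block.
// With ordinals 0-3 (.192-.195) already taken, as this log implies,
// the next claim yields 192.168.50.196 — the IP assigned above.
package main

import (
	"fmt"
	"net/netip"
)

type block struct {
	base netip.Addr // first address of the /26, e.g. 192.168.50.192
	used uint64     // one bit per ordinal 0..63 (illustrative bitmap)
}

// claim returns the first unallocated address in the block.
func (b *block) claim() (netip.Addr, bool) {
	for ord := 0; ord < 64; ord++ {
		if b.used&(1<<ord) == 0 {
			b.used |= 1 << ord
			addr := b.base
			for i := 0; i < ord; i++ {
				addr = addr.Next()
			}
			return addr, true
		}
	}
	return netip.Addr{}, false // block exhausted on this host
}

func main() {
	b := &block{base: netip.MustParseAddr("192.168.50.192")}
	b.used = 0b1111 // .192-.195 taken, matching the state the trace implies
	ip, _ := b.claim()
	fmt.Println(ip) // 192.168.50.196
}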
Sep 4 17:58:31.327535 containerd[1447]: 2024-09-04 17:58:31.286 [INFO][4276] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.50.196/26] IPv6=[] ContainerID="bc4a4df0e5d2094e9c19b695f1bf232efd49e1899ca834ee641e7a4b0182803a" HandleID="k8s-pod-network.bc4a4df0e5d2094e9c19b695f1bf232efd49e1899ca834ee641e7a4b0182803a" Workload="ci--4054--1--0--2--9cde805234.novalocal-k8s-coredns--7db6d8ff4d--zzgpk-eth0" Sep 4 17:58:31.328280 containerd[1447]: 2024-09-04 17:58:31.290 [INFO][4258] k8s.go 386: Populated endpoint ContainerID="bc4a4df0e5d2094e9c19b695f1bf232efd49e1899ca834ee641e7a4b0182803a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zzgpk" WorkloadEndpoint="ci--4054--1--0--2--9cde805234.novalocal-k8s-coredns--7db6d8ff4d--zzgpk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054--1--0--2--9cde805234.novalocal-k8s-coredns--7db6d8ff4d--zzgpk-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"57139a4b-d4db-400d-8527-bd5a87379b62", ResourceVersion:"733", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 57, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054-1-0-2-9cde805234.novalocal", ContainerID:"", Pod:"coredns-7db6d8ff4d-zzgpk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califa290331bc1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:58:31.328280 containerd[1447]: 2024-09-04 17:58:31.290 [INFO][4258] k8s.go 387: Calico CNI using IPs: [192.168.50.196/32] ContainerID="bc4a4df0e5d2094e9c19b695f1bf232efd49e1899ca834ee641e7a4b0182803a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zzgpk" WorkloadEndpoint="ci--4054--1--0--2--9cde805234.novalocal-k8s-coredns--7db6d8ff4d--zzgpk-eth0" Sep 4 17:58:31.328280 containerd[1447]: 2024-09-04 17:58:31.290 [INFO][4258] dataplane_linux.go 68: Setting the host side veth name to califa290331bc1 ContainerID="bc4a4df0e5d2094e9c19b695f1bf232efd49e1899ca834ee641e7a4b0182803a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zzgpk" WorkloadEndpoint="ci--4054--1--0--2--9cde805234.novalocal-k8s-coredns--7db6d8ff4d--zzgpk-eth0" Sep 4 17:58:31.328280 containerd[1447]: 2024-09-04 17:58:31.299 [INFO][4258] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="bc4a4df0e5d2094e9c19b695f1bf232efd49e1899ca834ee641e7a4b0182803a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zzgpk" 
WorkloadEndpoint="ci--4054--1--0--2--9cde805234.novalocal-k8s-coredns--7db6d8ff4d--zzgpk-eth0" Sep 4 17:58:31.328280 containerd[1447]: 2024-09-04 17:58:31.301 [INFO][4258] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="bc4a4df0e5d2094e9c19b695f1bf232efd49e1899ca834ee641e7a4b0182803a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zzgpk" WorkloadEndpoint="ci--4054--1--0--2--9cde805234.novalocal-k8s-coredns--7db6d8ff4d--zzgpk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054--1--0--2--9cde805234.novalocal-k8s-coredns--7db6d8ff4d--zzgpk-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"57139a4b-d4db-400d-8527-bd5a87379b62", ResourceVersion:"733", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 57, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054-1-0-2-9cde805234.novalocal", ContainerID:"bc4a4df0e5d2094e9c19b695f1bf232efd49e1899ca834ee641e7a4b0182803a", Pod:"coredns-7db6d8ff4d-zzgpk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califa290331bc1", MAC:"06:ca:49:02:fa:11", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:58:31.328280 containerd[1447]: 2024-09-04 17:58:31.323 [INFO][4258] k8s.go 500: Wrote updated endpoint to datastore ContainerID="bc4a4df0e5d2094e9c19b695f1bf232efd49e1899ca834ee641e7a4b0182803a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zzgpk" WorkloadEndpoint="ci--4054--1--0--2--9cde805234.novalocal-k8s-coredns--7db6d8ff4d--zzgpk-eth0" Sep 4 17:58:31.373997 containerd[1447]: time="2024-09-04T17:58:31.373623955Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:58:31.373997 containerd[1447]: time="2024-09-04T17:58:31.373703691Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:58:31.373997 containerd[1447]: time="2024-09-04T17:58:31.373723169Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:58:31.373997 containerd[1447]: time="2024-09-04T17:58:31.373816852Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:58:31.419876 systemd[1]: Started cri-containerd-bc4a4df0e5d2094e9c19b695f1bf232efd49e1899ca834ee641e7a4b0182803a.scope - libcontainer container bc4a4df0e5d2094e9c19b695f1bf232efd49e1899ca834ee641e7a4b0182803a. Sep 4 17:58:31.499411 containerd[1447]: time="2024-09-04T17:58:31.499364611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zzgpk,Uid:57139a4b-d4db-400d-8527-bd5a87379b62,Namespace:kube-system,Attempt:1,} returns sandbox id \"bc4a4df0e5d2094e9c19b695f1bf232efd49e1899ca834ee641e7a4b0182803a\"" Sep 4 17:58:31.504740 containerd[1447]: time="2024-09-04T17:58:31.504680058Z" level=info msg="CreateContainer within sandbox \"bc4a4df0e5d2094e9c19b695f1bf232efd49e1899ca834ee641e7a4b0182803a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 17:58:31.529848 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount302018250.mount: Deactivated successfully. Sep 4 17:58:31.543615 containerd[1447]: time="2024-09-04T17:58:31.543105170Z" level=info msg="CreateContainer within sandbox \"bc4a4df0e5d2094e9c19b695f1bf232efd49e1899ca834ee641e7a4b0182803a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"96c9a0a28b06bc85b5a413750b33704198f365b038a11f623c3352e31c039b72\"" Sep 4 17:58:31.545367 containerd[1447]: time="2024-09-04T17:58:31.543957234Z" level=info msg="StartContainer for \"96c9a0a28b06bc85b5a413750b33704198f365b038a11f623c3352e31c039b72\"" Sep 4 17:58:31.580490 systemd[1]: Started cri-containerd-96c9a0a28b06bc85b5a413750b33704198f365b038a11f623c3352e31c039b72.scope - libcontainer container 96c9a0a28b06bc85b5a413750b33704198f365b038a11f623c3352e31c039b72. Sep 4 17:58:31.630130 containerd[1447]: time="2024-09-04T17:58:31.630062203Z" level=info msg="StartContainer for \"96c9a0a28b06bc85b5a413750b33704198f365b038a11f623c3352e31c039b72\" returns successfully" Sep 4 17:58:31.817934 systemd-networkd[1366]: cali5aa53c460eb: Gained IPv6LL Sep 4 17:58:32.457455 systemd-networkd[1366]: califa290331bc1: Gained IPv6LL Sep 4 17:58:32.777508 systemd-networkd[1366]: cali81dd78d8d2b: Gained IPv6LL Sep 4 17:58:33.217617 kubelet[2661]: I0904 17:58:33.217357 2661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-zzgpk" podStartSLOduration=42.217333361 podStartE2EDuration="42.217333361s" podCreationTimestamp="2024-09-04 17:57:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:58:32.192760408 +0000 UTC m=+55.574670725" watchObservedRunningTime="2024-09-04 17:58:33.217333361 +0000 UTC m=+56.599243628" Sep 4 17:58:34.063968 containerd[1447]: time="2024-09-04T17:58:34.063881187Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:58:34.068274 containerd[1447]: time="2024-09-04T17:58:34.067416018Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.1: active requests=0, bytes read=33507125" Sep 4 17:58:34.068628 containerd[1447]: time="2024-09-04T17:58:34.068592735Z" level=info msg="ImageCreate event name:\"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:58:34.073623 containerd[1447]: time="2024-09-04T17:58:34.073580672Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:58:34.077440 containerd[1447]: time="2024-09-04T17:58:34.077402853Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" with image id \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\", size \"34999494\" in 3.33556812s" Sep 4 17:58:34.077522 containerd[1447]: time="2024-09-04T17:58:34.077452480Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" returns image reference \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\"" Sep 4 17:58:34.078625 containerd[1447]: time="2024-09-04T17:58:34.078597235Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\"" Sep 4 17:58:34.140293 containerd[1447]: time="2024-09-04T17:58:34.139289344Z" level=info msg="CreateContainer within sandbox \"3660227b468104fd4a9fe2a67ff27f67c8e6a487b4d38f657caa4d321f14444c\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 4 17:58:34.202984 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2094514693.mount: Deactivated successfully. Sep 4 17:58:34.230804 containerd[1447]: time="2024-09-04T17:58:34.230660362Z" level=info msg="CreateContainer within sandbox \"3660227b468104fd4a9fe2a67ff27f67c8e6a487b4d38f657caa4d321f14444c\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"6675665c9257cb85b92f5a4c2344e7d8623f2c4a8d17c12253e004a713109850\"" Sep 4 17:58:34.232723 containerd[1447]: time="2024-09-04T17:58:34.232683381Z" level=info msg="StartContainer for \"6675665c9257cb85b92f5a4c2344e7d8623f2c4a8d17c12253e004a713109850\"" Sep 4 17:58:34.284189 systemd[1]: Started cri-containerd-6675665c9257cb85b92f5a4c2344e7d8623f2c4a8d17c12253e004a713109850.scope - libcontainer container 6675665c9257cb85b92f5a4c2344e7d8623f2c4a8d17c12253e004a713109850. 
Sep 4 17:58:34.360364 containerd[1447]: time="2024-09-04T17:58:34.359729743Z" level=info msg="StartContainer for \"6675665c9257cb85b92f5a4c2344e7d8623f2c4a8d17c12253e004a713109850\" returns successfully" Sep 4 17:58:35.366056 kubelet[2661]: I0904 17:58:35.365555 2661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-798cd9c48-zp2dz" podStartSLOduration=34.027931009 podStartE2EDuration="37.36553196s" podCreationTimestamp="2024-09-04 17:57:58 +0000 UTC" firstStartedPulling="2024-09-04 17:58:30.740629007 +0000 UTC m=+54.122539274" lastFinishedPulling="2024-09-04 17:58:34.078229958 +0000 UTC m=+57.460140225" observedRunningTime="2024-09-04 17:58:35.243019986 +0000 UTC m=+58.624930283" watchObservedRunningTime="2024-09-04 17:58:35.36553196 +0000 UTC m=+58.747442228" Sep 4 17:58:36.642975 containerd[1447]: time="2024-09-04T17:58:36.637970687Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:58:36.642975 containerd[1447]: time="2024-09-04T17:58:36.640380674Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.1: active requests=0, bytes read=7642081" Sep 4 17:58:36.645080 containerd[1447]: time="2024-09-04T17:58:36.643990845Z" level=info msg="ImageCreate event name:\"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:58:36.645180 containerd[1447]: time="2024-09-04T17:58:36.645140458Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:58:36.645847 containerd[1447]: time="2024-09-04T17:58:36.645790426Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.1\" with image id \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\", size \"9134482\" in 2.567161s" Sep 4 17:58:36.645902 containerd[1447]: time="2024-09-04T17:58:36.645847138Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\" returns image reference \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\"" Sep 4 17:58:36.649426 containerd[1447]: time="2024-09-04T17:58:36.649390008Z" level=info msg="CreateContainer within sandbox \"9180b8a03057dbdc80f9efa8756144d5ca7e2125680c05456d463d3a780ffa3c\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 4 17:58:36.700609 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount342744734.mount: Deactivated successfully. Sep 4 17:58:36.718009 containerd[1447]: time="2024-09-04T17:58:36.717933255Z" level=info msg="CreateContainer within sandbox \"9180b8a03057dbdc80f9efa8756144d5ca7e2125680c05456d463d3a780ffa3c\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"be5ee7ab0f630c6cf92959727a2175175a10caa567186a654a9c7504cece29f4\"" Sep 4 17:58:36.719973 containerd[1447]: time="2024-09-04T17:58:36.718792378Z" level=info msg="StartContainer for \"be5ee7ab0f630c6cf92959727a2175175a10caa567186a654a9c7504cece29f4\"" Sep 4 17:58:36.835504 systemd[1]: run-containerd-runc-k8s.io-be5ee7ab0f630c6cf92959727a2175175a10caa567186a654a9c7504cece29f4-runc.P14qxa.mount: Deactivated successfully. 
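[Annotation] The pod_startup_latency_tracker line above for calico-kube-controllers reports both podStartE2EDuration (37.36553196s) and podStartSLOduration (34.027931009s); the gap between them is exactly the image-pull window bounded by firstStartedPulling and lastFinishedPulling, i.e. kubelet's startup SLI excludes time spent pulling images. The snippet below re-derives the SLO figure from the timestamps in that log entry.

// Verifying the kubelet startup-latency arithmetic above: SLO duration
// equals end-to-end duration minus the image-pull window. All values
// are copied from the pod_startup_latency_tracker entry in this log.
package main

import (
	"fmt"
	"time"
)

func main() {
	layout := "2006-01-02 15:04:05.999999999 -0700 MST"
	first, _ := time.Parse(layout, "2024-09-04 17:58:30.740629007 +0000 UTC")
	last, _ := time.Parse(layout, "2024-09-04 17:58:34.078229958 +0000 UTC")
	pull := last.Sub(first) // 3.337600951s spent pulling

	e2e := 37.36553196 // podStartE2EDuration, in seconds
	slo := e2e - pull.Seconds()
	fmt.Printf("pull window:  %v\n", pull)
	fmt.Printf("SLO duration: %.9f s\n", slo) // 34.027931009, matching the log
}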
Sep 4 17:58:36.852396 systemd[1]: Started cri-containerd-be5ee7ab0f630c6cf92959727a2175175a10caa567186a654a9c7504cece29f4.scope - libcontainer container be5ee7ab0f630c6cf92959727a2175175a10caa567186a654a9c7504cece29f4. Sep 4 17:58:36.868142 containerd[1447]: time="2024-09-04T17:58:36.868094133Z" level=info msg="StopPodSandbox for \"8956f21a2a02624f57561bba949223b27ce2149177fc4dcf847ea01dd454ea14\"" Sep 4 17:58:36.907271 containerd[1447]: time="2024-09-04T17:58:36.907123147Z" level=info msg="StartContainer for \"be5ee7ab0f630c6cf92959727a2175175a10caa567186a654a9c7504cece29f4\" returns successfully" Sep 4 17:58:36.912902 containerd[1447]: time="2024-09-04T17:58:36.910612672Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\"" Sep 4 17:58:37.012790 containerd[1447]: 2024-09-04 17:58:36.949 [WARNING][4502] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8956f21a2a02624f57561bba949223b27ce2149177fc4dcf847ea01dd454ea14" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054--1--0--2--9cde805234.novalocal-k8s-coredns--7db6d8ff4d--zzgpk-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"57139a4b-d4db-400d-8527-bd5a87379b62", ResourceVersion:"748", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 57, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054-1-0-2-9cde805234.novalocal", ContainerID:"bc4a4df0e5d2094e9c19b695f1bf232efd49e1899ca834ee641e7a4b0182803a", Pod:"coredns-7db6d8ff4d-zzgpk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califa290331bc1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:58:37.012790 containerd[1447]: 2024-09-04 17:58:36.950 [INFO][4502] k8s.go 608: Cleaning up netns ContainerID="8956f21a2a02624f57561bba949223b27ce2149177fc4dcf847ea01dd454ea14" Sep 4 17:58:37.012790 containerd[1447]: 2024-09-04 17:58:36.950 [INFO][4502] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="8956f21a2a02624f57561bba949223b27ce2149177fc4dcf847ea01dd454ea14" iface="eth0" netns="" Sep 4 17:58:37.012790 containerd[1447]: 2024-09-04 17:58:36.950 [INFO][4502] k8s.go 615: Releasing IP address(es) ContainerID="8956f21a2a02624f57561bba949223b27ce2149177fc4dcf847ea01dd454ea14" Sep 4 17:58:37.012790 containerd[1447]: 2024-09-04 17:58:36.950 [INFO][4502] utils.go 188: Calico CNI releasing IP address ContainerID="8956f21a2a02624f57561bba949223b27ce2149177fc4dcf847ea01dd454ea14" Sep 4 17:58:37.012790 containerd[1447]: 2024-09-04 17:58:36.995 [INFO][4513] ipam_plugin.go 417: Releasing address using handleID ContainerID="8956f21a2a02624f57561bba949223b27ce2149177fc4dcf847ea01dd454ea14" HandleID="k8s-pod-network.8956f21a2a02624f57561bba949223b27ce2149177fc4dcf847ea01dd454ea14" Workload="ci--4054--1--0--2--9cde805234.novalocal-k8s-coredns--7db6d8ff4d--zzgpk-eth0" Sep 4 17:58:37.012790 containerd[1447]: 2024-09-04 17:58:36.995 [INFO][4513] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:58:37.012790 containerd[1447]: 2024-09-04 17:58:36.995 [INFO][4513] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:58:37.012790 containerd[1447]: 2024-09-04 17:58:37.004 [WARNING][4513] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="8956f21a2a02624f57561bba949223b27ce2149177fc4dcf847ea01dd454ea14" HandleID="k8s-pod-network.8956f21a2a02624f57561bba949223b27ce2149177fc4dcf847ea01dd454ea14" Workload="ci--4054--1--0--2--9cde805234.novalocal-k8s-coredns--7db6d8ff4d--zzgpk-eth0" Sep 4 17:58:37.012790 containerd[1447]: 2024-09-04 17:58:37.004 [INFO][4513] ipam_plugin.go 445: Releasing address using workloadID ContainerID="8956f21a2a02624f57561bba949223b27ce2149177fc4dcf847ea01dd454ea14" HandleID="k8s-pod-network.8956f21a2a02624f57561bba949223b27ce2149177fc4dcf847ea01dd454ea14" Workload="ci--4054--1--0--2--9cde805234.novalocal-k8s-coredns--7db6d8ff4d--zzgpk-eth0" Sep 4 17:58:37.012790 containerd[1447]: 2024-09-04 17:58:37.008 [INFO][4513] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:58:37.012790 containerd[1447]: 2024-09-04 17:58:37.009 [INFO][4502] k8s.go 621: Teardown processing complete. ContainerID="8956f21a2a02624f57561bba949223b27ce2149177fc4dcf847ea01dd454ea14" Sep 4 17:58:37.014770 containerd[1447]: time="2024-09-04T17:58:37.012816314Z" level=info msg="TearDown network for sandbox \"8956f21a2a02624f57561bba949223b27ce2149177fc4dcf847ea01dd454ea14\" successfully" Sep 4 17:58:37.014770 containerd[1447]: time="2024-09-04T17:58:37.012855615Z" level=info msg="StopPodSandbox for \"8956f21a2a02624f57561bba949223b27ce2149177fc4dcf847ea01dd454ea14\" returns successfully" Sep 4 17:58:37.030045 containerd[1447]: time="2024-09-04T17:58:37.029993731Z" level=info msg="RemovePodSandbox for \"8956f21a2a02624f57561bba949223b27ce2149177fc4dcf847ea01dd454ea14\"" Sep 4 17:58:37.035565 containerd[1447]: time="2024-09-04T17:58:37.035522438Z" level=info msg="Forcibly stopping sandbox \"8956f21a2a02624f57561bba949223b27ce2149177fc4dcf847ea01dd454ea14\"" Sep 4 17:58:37.149306 containerd[1447]: 2024-09-04 17:58:37.095 [WARNING][4531] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8956f21a2a02624f57561bba949223b27ce2149177fc4dcf847ea01dd454ea14" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054--1--0--2--9cde805234.novalocal-k8s-coredns--7db6d8ff4d--zzgpk-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"57139a4b-d4db-400d-8527-bd5a87379b62", ResourceVersion:"748", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 57, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054-1-0-2-9cde805234.novalocal", ContainerID:"bc4a4df0e5d2094e9c19b695f1bf232efd49e1899ca834ee641e7a4b0182803a", Pod:"coredns-7db6d8ff4d-zzgpk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califa290331bc1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:58:37.149306 containerd[1447]: 2024-09-04 17:58:37.095 [INFO][4531] k8s.go 608: Cleaning up netns ContainerID="8956f21a2a02624f57561bba949223b27ce2149177fc4dcf847ea01dd454ea14" Sep 4 17:58:37.149306 containerd[1447]: 2024-09-04 17:58:37.095 [INFO][4531] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="8956f21a2a02624f57561bba949223b27ce2149177fc4dcf847ea01dd454ea14" iface="eth0" netns="" Sep 4 17:58:37.149306 containerd[1447]: 2024-09-04 17:58:37.095 [INFO][4531] k8s.go 615: Releasing IP address(es) ContainerID="8956f21a2a02624f57561bba949223b27ce2149177fc4dcf847ea01dd454ea14" Sep 4 17:58:37.149306 containerd[1447]: 2024-09-04 17:58:37.095 [INFO][4531] utils.go 188: Calico CNI releasing IP address ContainerID="8956f21a2a02624f57561bba949223b27ce2149177fc4dcf847ea01dd454ea14" Sep 4 17:58:37.149306 containerd[1447]: 2024-09-04 17:58:37.134 [INFO][4537] ipam_plugin.go 417: Releasing address using handleID ContainerID="8956f21a2a02624f57561bba949223b27ce2149177fc4dcf847ea01dd454ea14" HandleID="k8s-pod-network.8956f21a2a02624f57561bba949223b27ce2149177fc4dcf847ea01dd454ea14" Workload="ci--4054--1--0--2--9cde805234.novalocal-k8s-coredns--7db6d8ff4d--zzgpk-eth0" Sep 4 17:58:37.149306 containerd[1447]: 2024-09-04 17:58:37.134 [INFO][4537] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:58:37.149306 containerd[1447]: 2024-09-04 17:58:37.134 [INFO][4537] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:58:37.149306 containerd[1447]: 2024-09-04 17:58:37.143 [WARNING][4537] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="8956f21a2a02624f57561bba949223b27ce2149177fc4dcf847ea01dd454ea14" HandleID="k8s-pod-network.8956f21a2a02624f57561bba949223b27ce2149177fc4dcf847ea01dd454ea14" Workload="ci--4054--1--0--2--9cde805234.novalocal-k8s-coredns--7db6d8ff4d--zzgpk-eth0" Sep 4 17:58:37.149306 containerd[1447]: 2024-09-04 17:58:37.143 [INFO][4537] ipam_plugin.go 445: Releasing address using workloadID ContainerID="8956f21a2a02624f57561bba949223b27ce2149177fc4dcf847ea01dd454ea14" HandleID="k8s-pod-network.8956f21a2a02624f57561bba949223b27ce2149177fc4dcf847ea01dd454ea14" Workload="ci--4054--1--0--2--9cde805234.novalocal-k8s-coredns--7db6d8ff4d--zzgpk-eth0" Sep 4 17:58:37.149306 containerd[1447]: 2024-09-04 17:58:37.145 [INFO][4537] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:58:37.149306 containerd[1447]: 2024-09-04 17:58:37.147 [INFO][4531] k8s.go 621: Teardown processing complete. ContainerID="8956f21a2a02624f57561bba949223b27ce2149177fc4dcf847ea01dd454ea14" Sep 4 17:58:37.149913 containerd[1447]: time="2024-09-04T17:58:37.149359633Z" level=info msg="TearDown network for sandbox \"8956f21a2a02624f57561bba949223b27ce2149177fc4dcf847ea01dd454ea14\" successfully" Sep 4 17:58:37.186776 containerd[1447]: time="2024-09-04T17:58:37.186639290Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8956f21a2a02624f57561bba949223b27ce2149177fc4dcf847ea01dd454ea14\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 4 17:58:37.186776 containerd[1447]: time="2024-09-04T17:58:37.186752293Z" level=info msg="RemovePodSandbox \"8956f21a2a02624f57561bba949223b27ce2149177fc4dcf847ea01dd454ea14\" returns successfully" Sep 4 17:58:37.188231 containerd[1447]: time="2024-09-04T17:58:37.187999106Z" level=info msg="StopPodSandbox for \"cce266914840a1d211b368eb3853f7adc3e200e15098b220fc1d5da1284aec31\"" Sep 4 17:58:37.315251 containerd[1447]: 2024-09-04 17:58:37.261 [WARNING][4555] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cce266914840a1d211b368eb3853f7adc3e200e15098b220fc1d5da1284aec31" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054--1--0--2--9cde805234.novalocal-k8s-calico--kube--controllers--798cd9c48--zp2dz-eth0", GenerateName:"calico-kube-controllers-798cd9c48-", Namespace:"calico-system", SelfLink:"", UID:"d5415bb8-100d-4692-b66d-5875d35f5aef", ResourceVersion:"770", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 57, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"798cd9c48", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054-1-0-2-9cde805234.novalocal", ContainerID:"3660227b468104fd4a9fe2a67ff27f67c8e6a487b4d38f657caa4d321f14444c", Pod:"calico-kube-controllers-798cd9c48-zp2dz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.50.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5aa53c460eb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:58:37.315251 containerd[1447]: 2024-09-04 17:58:37.261 [INFO][4555] k8s.go 608: Cleaning up netns ContainerID="cce266914840a1d211b368eb3853f7adc3e200e15098b220fc1d5da1284aec31" Sep 4 17:58:37.315251 containerd[1447]: 2024-09-04 17:58:37.261 [INFO][4555] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="cce266914840a1d211b368eb3853f7adc3e200e15098b220fc1d5da1284aec31" iface="eth0" netns="" Sep 4 17:58:37.315251 containerd[1447]: 2024-09-04 17:58:37.261 [INFO][4555] k8s.go 615: Releasing IP address(es) ContainerID="cce266914840a1d211b368eb3853f7adc3e200e15098b220fc1d5da1284aec31" Sep 4 17:58:37.315251 containerd[1447]: 2024-09-04 17:58:37.261 [INFO][4555] utils.go 188: Calico CNI releasing IP address ContainerID="cce266914840a1d211b368eb3853f7adc3e200e15098b220fc1d5da1284aec31" Sep 4 17:58:37.315251 containerd[1447]: 2024-09-04 17:58:37.298 [INFO][4561] ipam_plugin.go 417: Releasing address using handleID ContainerID="cce266914840a1d211b368eb3853f7adc3e200e15098b220fc1d5da1284aec31" HandleID="k8s-pod-network.cce266914840a1d211b368eb3853f7adc3e200e15098b220fc1d5da1284aec31" Workload="ci--4054--1--0--2--9cde805234.novalocal-k8s-calico--kube--controllers--798cd9c48--zp2dz-eth0" Sep 4 17:58:37.315251 containerd[1447]: 2024-09-04 17:58:37.298 [INFO][4561] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:58:37.315251 containerd[1447]: 2024-09-04 17:58:37.298 [INFO][4561] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:58:37.315251 containerd[1447]: 2024-09-04 17:58:37.308 [WARNING][4561] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cce266914840a1d211b368eb3853f7adc3e200e15098b220fc1d5da1284aec31" HandleID="k8s-pod-network.cce266914840a1d211b368eb3853f7adc3e200e15098b220fc1d5da1284aec31" Workload="ci--4054--1--0--2--9cde805234.novalocal-k8s-calico--kube--controllers--798cd9c48--zp2dz-eth0" Sep 4 17:58:37.315251 containerd[1447]: 2024-09-04 17:58:37.308 [INFO][4561] ipam_plugin.go 445: Releasing address using workloadID ContainerID="cce266914840a1d211b368eb3853f7adc3e200e15098b220fc1d5da1284aec31" HandleID="k8s-pod-network.cce266914840a1d211b368eb3853f7adc3e200e15098b220fc1d5da1284aec31" Workload="ci--4054--1--0--2--9cde805234.novalocal-k8s-calico--kube--controllers--798cd9c48--zp2dz-eth0" Sep 4 17:58:37.315251 containerd[1447]: 2024-09-04 17:58:37.310 [INFO][4561] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:58:37.315251 containerd[1447]: 2024-09-04 17:58:37.312 [INFO][4555] k8s.go 621: Teardown processing complete. ContainerID="cce266914840a1d211b368eb3853f7adc3e200e15098b220fc1d5da1284aec31" Sep 4 17:58:37.317159 containerd[1447]: time="2024-09-04T17:58:37.315373708Z" level=info msg="TearDown network for sandbox \"cce266914840a1d211b368eb3853f7adc3e200e15098b220fc1d5da1284aec31\" successfully" Sep 4 17:58:37.317159 containerd[1447]: time="2024-09-04T17:58:37.315685710Z" level=info msg="StopPodSandbox for \"cce266914840a1d211b368eb3853f7adc3e200e15098b220fc1d5da1284aec31\" returns successfully" Sep 4 17:58:37.317159 containerd[1447]: time="2024-09-04T17:58:37.316363279Z" level=info msg="RemovePodSandbox for \"cce266914840a1d211b368eb3853f7adc3e200e15098b220fc1d5da1284aec31\"" Sep 4 17:58:37.317159 containerd[1447]: time="2024-09-04T17:58:37.316402710Z" level=info msg="Forcibly stopping sandbox \"cce266914840a1d211b368eb3853f7adc3e200e15098b220fc1d5da1284aec31\"" Sep 4 17:58:37.468333 containerd[1447]: 2024-09-04 17:58:37.398 [WARNING][4579] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cce266914840a1d211b368eb3853f7adc3e200e15098b220fc1d5da1284aec31" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054--1--0--2--9cde805234.novalocal-k8s-calico--kube--controllers--798cd9c48--zp2dz-eth0", GenerateName:"calico-kube-controllers-798cd9c48-", Namespace:"calico-system", SelfLink:"", UID:"d5415bb8-100d-4692-b66d-5875d35f5aef", ResourceVersion:"770", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 57, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"798cd9c48", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054-1-0-2-9cde805234.novalocal", ContainerID:"3660227b468104fd4a9fe2a67ff27f67c8e6a487b4d38f657caa4d321f14444c", Pod:"calico-kube-controllers-798cd9c48-zp2dz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.50.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5aa53c460eb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:58:37.468333 containerd[1447]: 2024-09-04 17:58:37.399 [INFO][4579] k8s.go 608: Cleaning up netns ContainerID="cce266914840a1d211b368eb3853f7adc3e200e15098b220fc1d5da1284aec31" Sep 4 17:58:37.468333 containerd[1447]: 2024-09-04 17:58:37.399 [INFO][4579] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="cce266914840a1d211b368eb3853f7adc3e200e15098b220fc1d5da1284aec31" iface="eth0" netns="" Sep 4 17:58:37.468333 containerd[1447]: 2024-09-04 17:58:37.399 [INFO][4579] k8s.go 615: Releasing IP address(es) ContainerID="cce266914840a1d211b368eb3853f7adc3e200e15098b220fc1d5da1284aec31" Sep 4 17:58:37.468333 containerd[1447]: 2024-09-04 17:58:37.399 [INFO][4579] utils.go 188: Calico CNI releasing IP address ContainerID="cce266914840a1d211b368eb3853f7adc3e200e15098b220fc1d5da1284aec31" Sep 4 17:58:37.468333 containerd[1447]: 2024-09-04 17:58:37.451 [INFO][4588] ipam_plugin.go 417: Releasing address using handleID ContainerID="cce266914840a1d211b368eb3853f7adc3e200e15098b220fc1d5da1284aec31" HandleID="k8s-pod-network.cce266914840a1d211b368eb3853f7adc3e200e15098b220fc1d5da1284aec31" Workload="ci--4054--1--0--2--9cde805234.novalocal-k8s-calico--kube--controllers--798cd9c48--zp2dz-eth0" Sep 4 17:58:37.468333 containerd[1447]: 2024-09-04 17:58:37.451 [INFO][4588] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:58:37.468333 containerd[1447]: 2024-09-04 17:58:37.451 [INFO][4588] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:58:37.468333 containerd[1447]: 2024-09-04 17:58:37.460 [WARNING][4588] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cce266914840a1d211b368eb3853f7adc3e200e15098b220fc1d5da1284aec31" HandleID="k8s-pod-network.cce266914840a1d211b368eb3853f7adc3e200e15098b220fc1d5da1284aec31" Workload="ci--4054--1--0--2--9cde805234.novalocal-k8s-calico--kube--controllers--798cd9c48--zp2dz-eth0" Sep 4 17:58:37.468333 containerd[1447]: 2024-09-04 17:58:37.461 [INFO][4588] ipam_plugin.go 445: Releasing address using workloadID ContainerID="cce266914840a1d211b368eb3853f7adc3e200e15098b220fc1d5da1284aec31" HandleID="k8s-pod-network.cce266914840a1d211b368eb3853f7adc3e200e15098b220fc1d5da1284aec31" Workload="ci--4054--1--0--2--9cde805234.novalocal-k8s-calico--kube--controllers--798cd9c48--zp2dz-eth0" Sep 4 17:58:37.468333 containerd[1447]: 2024-09-04 17:58:37.462 [INFO][4588] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:58:37.468333 containerd[1447]: 2024-09-04 17:58:37.464 [INFO][4579] k8s.go 621: Teardown processing complete. ContainerID="cce266914840a1d211b368eb3853f7adc3e200e15098b220fc1d5da1284aec31" Sep 4 17:58:37.468333 containerd[1447]: time="2024-09-04T17:58:37.467861336Z" level=info msg="TearDown network for sandbox \"cce266914840a1d211b368eb3853f7adc3e200e15098b220fc1d5da1284aec31\" successfully" Sep 4 17:58:37.472346 containerd[1447]: time="2024-09-04T17:58:37.472306184Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cce266914840a1d211b368eb3853f7adc3e200e15098b220fc1d5da1284aec31\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 4 17:58:37.472403 containerd[1447]: time="2024-09-04T17:58:37.472373695Z" level=info msg="RemovePodSandbox \"cce266914840a1d211b368eb3853f7adc3e200e15098b220fc1d5da1284aec31\" returns successfully" Sep 4 17:58:37.473262 containerd[1447]: time="2024-09-04T17:58:37.473148318Z" level=info msg="StopPodSandbox for \"5907cbb1d3f8d792b51ba195422e07350eeac01d469c6556ac788d6b27c91ae4\"" Sep 4 17:58:37.555932 containerd[1447]: 2024-09-04 17:58:37.516 [WARNING][4607] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5907cbb1d3f8d792b51ba195422e07350eeac01d469c6556ac788d6b27c91ae4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054--1--0--2--9cde805234.novalocal-k8s-coredns--7db6d8ff4d--bkmjk-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"6007fafb-62cf-4f68-b42a-9d6500fa9f55", ResourceVersion:"723", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 57, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054-1-0-2-9cde805234.novalocal", ContainerID:"860d595e60f6bb6d40bfefce0b13ba2be4e6c4637f0ed2a0c9a0cafb08d54fa4", Pod:"coredns-7db6d8ff4d-bkmjk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5f25b8c6abe", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:58:37.555932 containerd[1447]: 2024-09-04 17:58:37.517 [INFO][4607] k8s.go 608: Cleaning up netns ContainerID="5907cbb1d3f8d792b51ba195422e07350eeac01d469c6556ac788d6b27c91ae4" Sep 4 17:58:37.555932 containerd[1447]: 2024-09-04 17:58:37.517 [INFO][4607] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="5907cbb1d3f8d792b51ba195422e07350eeac01d469c6556ac788d6b27c91ae4" iface="eth0" netns="" Sep 4 17:58:37.555932 containerd[1447]: 2024-09-04 17:58:37.517 [INFO][4607] k8s.go 615: Releasing IP address(es) ContainerID="5907cbb1d3f8d792b51ba195422e07350eeac01d469c6556ac788d6b27c91ae4" Sep 4 17:58:37.555932 containerd[1447]: 2024-09-04 17:58:37.517 [INFO][4607] utils.go 188: Calico CNI releasing IP address ContainerID="5907cbb1d3f8d792b51ba195422e07350eeac01d469c6556ac788d6b27c91ae4" Sep 4 17:58:37.555932 containerd[1447]: 2024-09-04 17:58:37.542 [INFO][4613] ipam_plugin.go 417: Releasing address using handleID ContainerID="5907cbb1d3f8d792b51ba195422e07350eeac01d469c6556ac788d6b27c91ae4" HandleID="k8s-pod-network.5907cbb1d3f8d792b51ba195422e07350eeac01d469c6556ac788d6b27c91ae4" Workload="ci--4054--1--0--2--9cde805234.novalocal-k8s-coredns--7db6d8ff4d--bkmjk-eth0" Sep 4 17:58:37.555932 containerd[1447]: 2024-09-04 17:58:37.542 [INFO][4613] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:58:37.555932 containerd[1447]: 2024-09-04 17:58:37.542 [INFO][4613] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:58:37.555932 containerd[1447]: 2024-09-04 17:58:37.549 [WARNING][4613] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="5907cbb1d3f8d792b51ba195422e07350eeac01d469c6556ac788d6b27c91ae4" HandleID="k8s-pod-network.5907cbb1d3f8d792b51ba195422e07350eeac01d469c6556ac788d6b27c91ae4" Workload="ci--4054--1--0--2--9cde805234.novalocal-k8s-coredns--7db6d8ff4d--bkmjk-eth0" Sep 4 17:58:37.555932 containerd[1447]: 2024-09-04 17:58:37.549 [INFO][4613] ipam_plugin.go 445: Releasing address using workloadID ContainerID="5907cbb1d3f8d792b51ba195422e07350eeac01d469c6556ac788d6b27c91ae4" HandleID="k8s-pod-network.5907cbb1d3f8d792b51ba195422e07350eeac01d469c6556ac788d6b27c91ae4" Workload="ci--4054--1--0--2--9cde805234.novalocal-k8s-coredns--7db6d8ff4d--bkmjk-eth0" Sep 4 17:58:37.555932 containerd[1447]: 2024-09-04 17:58:37.552 [INFO][4613] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:58:37.555932 containerd[1447]: 2024-09-04 17:58:37.554 [INFO][4607] k8s.go 621: Teardown processing complete. ContainerID="5907cbb1d3f8d792b51ba195422e07350eeac01d469c6556ac788d6b27c91ae4" Sep 4 17:58:37.557267 containerd[1447]: time="2024-09-04T17:58:37.555932381Z" level=info msg="TearDown network for sandbox \"5907cbb1d3f8d792b51ba195422e07350eeac01d469c6556ac788d6b27c91ae4\" successfully" Sep 4 17:58:37.557267 containerd[1447]: time="2024-09-04T17:58:37.555961362Z" level=info msg="StopPodSandbox for \"5907cbb1d3f8d792b51ba195422e07350eeac01d469c6556ac788d6b27c91ae4\" returns successfully" Sep 4 17:58:37.557267 containerd[1447]: time="2024-09-04T17:58:37.556722522Z" level=info msg="RemovePodSandbox for \"5907cbb1d3f8d792b51ba195422e07350eeac01d469c6556ac788d6b27c91ae4\"" Sep 4 17:58:37.557267 containerd[1447]: time="2024-09-04T17:58:37.556756744Z" level=info msg="Forcibly stopping sandbox \"5907cbb1d3f8d792b51ba195422e07350eeac01d469c6556ac788d6b27c91ae4\"" Sep 4 17:58:37.633761 containerd[1447]: 2024-09-04 17:58:37.597 [WARNING][4631] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5907cbb1d3f8d792b51ba195422e07350eeac01d469c6556ac788d6b27c91ae4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054--1--0--2--9cde805234.novalocal-k8s-coredns--7db6d8ff4d--bkmjk-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"6007fafb-62cf-4f68-b42a-9d6500fa9f55", ResourceVersion:"723", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 57, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054-1-0-2-9cde805234.novalocal", ContainerID:"860d595e60f6bb6d40bfefce0b13ba2be4e6c4637f0ed2a0c9a0cafb08d54fa4", Pod:"coredns-7db6d8ff4d-bkmjk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5f25b8c6abe", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:58:37.633761 containerd[1447]: 2024-09-04 17:58:37.597 [INFO][4631] k8s.go 608: Cleaning up netns ContainerID="5907cbb1d3f8d792b51ba195422e07350eeac01d469c6556ac788d6b27c91ae4" Sep 4 17:58:37.633761 containerd[1447]: 2024-09-04 17:58:37.597 [INFO][4631] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="5907cbb1d3f8d792b51ba195422e07350eeac01d469c6556ac788d6b27c91ae4" iface="eth0" netns="" Sep 4 17:58:37.633761 containerd[1447]: 2024-09-04 17:58:37.597 [INFO][4631] k8s.go 615: Releasing IP address(es) ContainerID="5907cbb1d3f8d792b51ba195422e07350eeac01d469c6556ac788d6b27c91ae4" Sep 4 17:58:37.633761 containerd[1447]: 2024-09-04 17:58:37.597 [INFO][4631] utils.go 188: Calico CNI releasing IP address ContainerID="5907cbb1d3f8d792b51ba195422e07350eeac01d469c6556ac788d6b27c91ae4" Sep 4 17:58:37.633761 containerd[1447]: 2024-09-04 17:58:37.619 [INFO][4638] ipam_plugin.go 417: Releasing address using handleID ContainerID="5907cbb1d3f8d792b51ba195422e07350eeac01d469c6556ac788d6b27c91ae4" HandleID="k8s-pod-network.5907cbb1d3f8d792b51ba195422e07350eeac01d469c6556ac788d6b27c91ae4" Workload="ci--4054--1--0--2--9cde805234.novalocal-k8s-coredns--7db6d8ff4d--bkmjk-eth0" Sep 4 17:58:37.633761 containerd[1447]: 2024-09-04 17:58:37.619 [INFO][4638] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:58:37.633761 containerd[1447]: 2024-09-04 17:58:37.619 [INFO][4638] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:58:37.633761 containerd[1447]: 2024-09-04 17:58:37.626 [WARNING][4638] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="5907cbb1d3f8d792b51ba195422e07350eeac01d469c6556ac788d6b27c91ae4" HandleID="k8s-pod-network.5907cbb1d3f8d792b51ba195422e07350eeac01d469c6556ac788d6b27c91ae4" Workload="ci--4054--1--0--2--9cde805234.novalocal-k8s-coredns--7db6d8ff4d--bkmjk-eth0" Sep 4 17:58:37.633761 containerd[1447]: 2024-09-04 17:58:37.628 [INFO][4638] ipam_plugin.go 445: Releasing address using workloadID ContainerID="5907cbb1d3f8d792b51ba195422e07350eeac01d469c6556ac788d6b27c91ae4" HandleID="k8s-pod-network.5907cbb1d3f8d792b51ba195422e07350eeac01d469c6556ac788d6b27c91ae4" Workload="ci--4054--1--0--2--9cde805234.novalocal-k8s-coredns--7db6d8ff4d--bkmjk-eth0" Sep 4 17:58:37.633761 containerd[1447]: 2024-09-04 17:58:37.630 [INFO][4638] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:58:37.633761 containerd[1447]: 2024-09-04 17:58:37.632 [INFO][4631] k8s.go 621: Teardown processing complete. ContainerID="5907cbb1d3f8d792b51ba195422e07350eeac01d469c6556ac788d6b27c91ae4" Sep 4 17:58:37.634954 containerd[1447]: time="2024-09-04T17:58:37.633813019Z" level=info msg="TearDown network for sandbox \"5907cbb1d3f8d792b51ba195422e07350eeac01d469c6556ac788d6b27c91ae4\" successfully" Sep 4 17:58:37.637451 containerd[1447]: time="2024-09-04T17:58:37.637419238Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5907cbb1d3f8d792b51ba195422e07350eeac01d469c6556ac788d6b27c91ae4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 4 17:58:37.637526 containerd[1447]: time="2024-09-04T17:58:37.637483684Z" level=info msg="RemovePodSandbox \"5907cbb1d3f8d792b51ba195422e07350eeac01d469c6556ac788d6b27c91ae4\" returns successfully" Sep 4 17:58:37.638036 containerd[1447]: time="2024-09-04T17:58:37.638006675Z" level=info msg="StopPodSandbox for \"ec6a9f72910d939ef57927dffa061b45449a7cc44cdd2f9a195d7acd6a58fe26\"" Sep 4 17:58:37.719363 containerd[1447]: 2024-09-04 17:58:37.681 [WARNING][4656] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ec6a9f72910d939ef57927dffa061b45449a7cc44cdd2f9a195d7acd6a58fe26" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054--1--0--2--9cde805234.novalocal-k8s-csi--node--driver--287hk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f0db1dfa-f33e-43bf-98b0-16a182e9f9f9", ResourceVersion:"734", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 57, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65cb9bb8f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054-1-0-2-9cde805234.novalocal", ContainerID:"9180b8a03057dbdc80f9efa8756144d5ca7e2125680c05456d463d3a780ffa3c", Pod:"csi-node-driver-287hk", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.50.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali81dd78d8d2b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:58:37.719363 containerd[1447]: 2024-09-04 17:58:37.681 [INFO][4656] k8s.go 608: Cleaning up netns ContainerID="ec6a9f72910d939ef57927dffa061b45449a7cc44cdd2f9a195d7acd6a58fe26" Sep 4 17:58:37.719363 containerd[1447]: 2024-09-04 17:58:37.681 [INFO][4656] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="ec6a9f72910d939ef57927dffa061b45449a7cc44cdd2f9a195d7acd6a58fe26" iface="eth0" netns="" Sep 4 17:58:37.719363 containerd[1447]: 2024-09-04 17:58:37.681 [INFO][4656] k8s.go 615: Releasing IP address(es) ContainerID="ec6a9f72910d939ef57927dffa061b45449a7cc44cdd2f9a195d7acd6a58fe26" Sep 4 17:58:37.719363 containerd[1447]: 2024-09-04 17:58:37.681 [INFO][4656] utils.go 188: Calico CNI releasing IP address ContainerID="ec6a9f72910d939ef57927dffa061b45449a7cc44cdd2f9a195d7acd6a58fe26" Sep 4 17:58:37.719363 containerd[1447]: 2024-09-04 17:58:37.705 [INFO][4663] ipam_plugin.go 417: Releasing address using handleID ContainerID="ec6a9f72910d939ef57927dffa061b45449a7cc44cdd2f9a195d7acd6a58fe26" HandleID="k8s-pod-network.ec6a9f72910d939ef57927dffa061b45449a7cc44cdd2f9a195d7acd6a58fe26" Workload="ci--4054--1--0--2--9cde805234.novalocal-k8s-csi--node--driver--287hk-eth0" Sep 4 17:58:37.719363 containerd[1447]: 2024-09-04 17:58:37.705 [INFO][4663] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:58:37.719363 containerd[1447]: 2024-09-04 17:58:37.706 [INFO][4663] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:58:37.719363 containerd[1447]: 2024-09-04 17:58:37.713 [WARNING][4663] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ec6a9f72910d939ef57927dffa061b45449a7cc44cdd2f9a195d7acd6a58fe26" HandleID="k8s-pod-network.ec6a9f72910d939ef57927dffa061b45449a7cc44cdd2f9a195d7acd6a58fe26" Workload="ci--4054--1--0--2--9cde805234.novalocal-k8s-csi--node--driver--287hk-eth0" Sep 4 17:58:37.719363 containerd[1447]: 2024-09-04 17:58:37.713 [INFO][4663] ipam_plugin.go 445: Releasing address using workloadID ContainerID="ec6a9f72910d939ef57927dffa061b45449a7cc44cdd2f9a195d7acd6a58fe26" HandleID="k8s-pod-network.ec6a9f72910d939ef57927dffa061b45449a7cc44cdd2f9a195d7acd6a58fe26" Workload="ci--4054--1--0--2--9cde805234.novalocal-k8s-csi--node--driver--287hk-eth0" Sep 4 17:58:37.719363 containerd[1447]: 2024-09-04 17:58:37.715 [INFO][4663] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:58:37.719363 containerd[1447]: 2024-09-04 17:58:37.717 [INFO][4656] k8s.go 621: Teardown processing complete. ContainerID="ec6a9f72910d939ef57927dffa061b45449a7cc44cdd2f9a195d7acd6a58fe26" Sep 4 17:58:37.719363 containerd[1447]: time="2024-09-04T17:58:37.719114139Z" level=info msg="TearDown network for sandbox \"ec6a9f72910d939ef57927dffa061b45449a7cc44cdd2f9a195d7acd6a58fe26\" successfully" Sep 4 17:58:37.719363 containerd[1447]: time="2024-09-04T17:58:37.719139826Z" level=info msg="StopPodSandbox for \"ec6a9f72910d939ef57927dffa061b45449a7cc44cdd2f9a195d7acd6a58fe26\" returns successfully" Sep 4 17:58:37.720733 containerd[1447]: time="2024-09-04T17:58:37.720122263Z" level=info msg="RemovePodSandbox for \"ec6a9f72910d939ef57927dffa061b45449a7cc44cdd2f9a195d7acd6a58fe26\"" Sep 4 17:58:37.720733 containerd[1447]: time="2024-09-04T17:58:37.720216643Z" level=info msg="Forcibly stopping sandbox \"ec6a9f72910d939ef57927dffa061b45449a7cc44cdd2f9a195d7acd6a58fe26\"" Sep 4 17:58:37.817293 containerd[1447]: 2024-09-04 17:58:37.778 [WARNING][4681] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ec6a9f72910d939ef57927dffa061b45449a7cc44cdd2f9a195d7acd6a58fe26" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054--1--0--2--9cde805234.novalocal-k8s-csi--node--driver--287hk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f0db1dfa-f33e-43bf-98b0-16a182e9f9f9", ResourceVersion:"734", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 57, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65cb9bb8f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054-1-0-2-9cde805234.novalocal", ContainerID:"9180b8a03057dbdc80f9efa8756144d5ca7e2125680c05456d463d3a780ffa3c", Pod:"csi-node-driver-287hk", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.50.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali81dd78d8d2b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:58:37.817293 containerd[1447]: 2024-09-04 17:58:37.778 [INFO][4681] k8s.go 608: Cleaning up netns ContainerID="ec6a9f72910d939ef57927dffa061b45449a7cc44cdd2f9a195d7acd6a58fe26" Sep 4 17:58:37.817293 containerd[1447]: 2024-09-04 17:58:37.778 [INFO][4681] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="ec6a9f72910d939ef57927dffa061b45449a7cc44cdd2f9a195d7acd6a58fe26" iface="eth0" netns="" Sep 4 17:58:37.817293 containerd[1447]: 2024-09-04 17:58:37.779 [INFO][4681] k8s.go 615: Releasing IP address(es) ContainerID="ec6a9f72910d939ef57927dffa061b45449a7cc44cdd2f9a195d7acd6a58fe26" Sep 4 17:58:37.817293 containerd[1447]: 2024-09-04 17:58:37.779 [INFO][4681] utils.go 188: Calico CNI releasing IP address ContainerID="ec6a9f72910d939ef57927dffa061b45449a7cc44cdd2f9a195d7acd6a58fe26" Sep 4 17:58:37.817293 containerd[1447]: 2024-09-04 17:58:37.801 [INFO][4688] ipam_plugin.go 417: Releasing address using handleID ContainerID="ec6a9f72910d939ef57927dffa061b45449a7cc44cdd2f9a195d7acd6a58fe26" HandleID="k8s-pod-network.ec6a9f72910d939ef57927dffa061b45449a7cc44cdd2f9a195d7acd6a58fe26" Workload="ci--4054--1--0--2--9cde805234.novalocal-k8s-csi--node--driver--287hk-eth0" Sep 4 17:58:37.817293 containerd[1447]: 2024-09-04 17:58:37.801 [INFO][4688] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:58:37.817293 containerd[1447]: 2024-09-04 17:58:37.801 [INFO][4688] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:58:37.817293 containerd[1447]: 2024-09-04 17:58:37.809 [WARNING][4688] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ec6a9f72910d939ef57927dffa061b45449a7cc44cdd2f9a195d7acd6a58fe26" HandleID="k8s-pod-network.ec6a9f72910d939ef57927dffa061b45449a7cc44cdd2f9a195d7acd6a58fe26" Workload="ci--4054--1--0--2--9cde805234.novalocal-k8s-csi--node--driver--287hk-eth0" Sep 4 17:58:37.817293 containerd[1447]: 2024-09-04 17:58:37.811 [INFO][4688] ipam_plugin.go 445: Releasing address using workloadID ContainerID="ec6a9f72910d939ef57927dffa061b45449a7cc44cdd2f9a195d7acd6a58fe26" HandleID="k8s-pod-network.ec6a9f72910d939ef57927dffa061b45449a7cc44cdd2f9a195d7acd6a58fe26" Workload="ci--4054--1--0--2--9cde805234.novalocal-k8s-csi--node--driver--287hk-eth0" Sep 4 17:58:37.817293 containerd[1447]: 2024-09-04 17:58:37.813 [INFO][4688] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:58:37.817293 containerd[1447]: 2024-09-04 17:58:37.815 [INFO][4681] k8s.go 621: Teardown processing complete. ContainerID="ec6a9f72910d939ef57927dffa061b45449a7cc44cdd2f9a195d7acd6a58fe26" Sep 4 17:58:37.818449 containerd[1447]: time="2024-09-04T17:58:37.817287400Z" level=info msg="TearDown network for sandbox \"ec6a9f72910d939ef57927dffa061b45449a7cc44cdd2f9a195d7acd6a58fe26\" successfully" Sep 4 17:58:37.829632 containerd[1447]: time="2024-09-04T17:58:37.829580719Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ec6a9f72910d939ef57927dffa061b45449a7cc44cdd2f9a195d7acd6a58fe26\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 4 17:58:37.829701 containerd[1447]: time="2024-09-04T17:58:37.829653701Z" level=info msg="RemovePodSandbox \"ec6a9f72910d939ef57927dffa061b45449a7cc44cdd2f9a195d7acd6a58fe26\" returns successfully" Sep 4 17:58:39.162234 containerd[1447]: time="2024-09-04T17:58:39.162022986Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:58:39.164885 containerd[1447]: time="2024-09-04T17:58:39.164682902Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1: active requests=0, bytes read=12907822" Sep 4 17:58:39.167382 containerd[1447]: time="2024-09-04T17:58:39.165979225Z" level=info msg="ImageCreate event name:\"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:58:39.173552 containerd[1447]: time="2024-09-04T17:58:39.173450847Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:58:39.175526 containerd[1447]: time="2024-09-04T17:58:39.175429002Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" with image id \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\", size \"14400175\" in 2.264753817s" Sep 4 17:58:39.175526 containerd[1447]: time="2024-09-04T17:58:39.175512823Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" returns image reference \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\"" Sep 4 17:58:39.184669 containerd[1447]: time="2024-09-04T17:58:39.184577333Z" level=info 
msg="CreateContainer within sandbox \"9180b8a03057dbdc80f9efa8756144d5ca7e2125680c05456d463d3a780ffa3c\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 4 17:58:39.214979 containerd[1447]: time="2024-09-04T17:58:39.214804576Z" level=info msg="CreateContainer within sandbox \"9180b8a03057dbdc80f9efa8756144d5ca7e2125680c05456d463d3a780ffa3c\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"240579d1dac4771b9ce3c65d197aceafdfe41f5d072e079ca529764e7ae3c5ef\"" Sep 4 17:58:39.224459 containerd[1447]: time="2024-09-04T17:58:39.219843703Z" level=info msg="StartContainer for \"240579d1dac4771b9ce3c65d197aceafdfe41f5d072e079ca529764e7ae3c5ef\"" Sep 4 17:58:39.225954 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1812301373.mount: Deactivated successfully. Sep 4 17:58:39.292402 systemd[1]: Started cri-containerd-240579d1dac4771b9ce3c65d197aceafdfe41f5d072e079ca529764e7ae3c5ef.scope - libcontainer container 240579d1dac4771b9ce3c65d197aceafdfe41f5d072e079ca529764e7ae3c5ef. Sep 4 17:58:39.336758 containerd[1447]: time="2024-09-04T17:58:39.336709578Z" level=info msg="StartContainer for \"240579d1dac4771b9ce3c65d197aceafdfe41f5d072e079ca529764e7ae3c5ef\" returns successfully" Sep 4 17:58:40.272334 kubelet[2661]: I0904 17:58:40.272177 2661 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 4 17:58:40.272334 kubelet[2661]: I0904 17:58:40.272322 2661 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 4 17:58:40.355166 kubelet[2661]: I0904 17:58:40.354297 2661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-287hk" podStartSLOduration=34.312810214 podStartE2EDuration="42.354219359s" podCreationTimestamp="2024-09-04 17:57:58 +0000 UTC" firstStartedPulling="2024-09-04 17:58:31.13755782 +0000 UTC m=+54.519468097" lastFinishedPulling="2024-09-04 17:58:39.178966925 +0000 UTC m=+62.560877242" observedRunningTime="2024-09-04 17:58:40.350958594 +0000 UTC m=+63.732868921" watchObservedRunningTime="2024-09-04 17:58:40.354219359 +0000 UTC m=+63.736129676" Sep 4 17:58:43.049563 kubelet[2661]: I0904 17:58:43.049275 2661 topology_manager.go:215] "Topology Admit Handler" podUID="b59dfbd6-0816-4128-bcb4-be6c9e65b7c1" podNamespace="calico-apiserver" podName="calico-apiserver-5f9bb56749-vgwnp" Sep 4 17:58:43.053295 kubelet[2661]: I0904 17:58:43.053006 2661 topology_manager.go:215] "Topology Admit Handler" podUID="ee096629-3a51-446a-a0dd-d92cc47d8927" podNamespace="calico-apiserver" podName="calico-apiserver-5f9bb56749-ffwj4" Sep 4 17:58:43.091864 systemd[1]: Created slice kubepods-besteffort-podb59dfbd6_0816_4128_bcb4_be6c9e65b7c1.slice - libcontainer container kubepods-besteffort-podb59dfbd6_0816_4128_bcb4_be6c9e65b7c1.slice. Sep 4 17:58:43.109627 systemd[1]: Created slice kubepods-besteffort-podee096629_3a51_446a_a0dd_d92cc47d8927.slice - libcontainer container kubepods-besteffort-podee096629_3a51_446a_a0dd_d92cc47d8927.slice. 
Sep 4 17:58:43.123394 kubelet[2661]: I0904 17:58:43.123362 2661 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b59dfbd6-0816-4128-bcb4-be6c9e65b7c1-calico-apiserver-certs\") pod \"calico-apiserver-5f9bb56749-vgwnp\" (UID: \"b59dfbd6-0816-4128-bcb4-be6c9e65b7c1\") " pod="calico-apiserver/calico-apiserver-5f9bb56749-vgwnp" Sep 4 17:58:43.123627 kubelet[2661]: I0904 17:58:43.123611 2661 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6j7fw\" (UniqueName: \"kubernetes.io/projected/b59dfbd6-0816-4128-bcb4-be6c9e65b7c1-kube-api-access-6j7fw\") pod \"calico-apiserver-5f9bb56749-vgwnp\" (UID: \"b59dfbd6-0816-4128-bcb4-be6c9e65b7c1\") " pod="calico-apiserver/calico-apiserver-5f9bb56749-vgwnp" Sep 4 17:58:43.123794 kubelet[2661]: I0904 17:58:43.123761 2661 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tv5qp\" (UniqueName: \"kubernetes.io/projected/ee096629-3a51-446a-a0dd-d92cc47d8927-kube-api-access-tv5qp\") pod \"calico-apiserver-5f9bb56749-ffwj4\" (UID: \"ee096629-3a51-446a-a0dd-d92cc47d8927\") " pod="calico-apiserver/calico-apiserver-5f9bb56749-ffwj4" Sep 4 17:58:43.123926 kubelet[2661]: I0904 17:58:43.123909 2661 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ee096629-3a51-446a-a0dd-d92cc47d8927-calico-apiserver-certs\") pod \"calico-apiserver-5f9bb56749-ffwj4\" (UID: \"ee096629-3a51-446a-a0dd-d92cc47d8927\") " pod="calico-apiserver/calico-apiserver-5f9bb56749-ffwj4" Sep 4 17:58:43.404516 containerd[1447]: time="2024-09-04T17:58:43.404364743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f9bb56749-vgwnp,Uid:b59dfbd6-0816-4128-bcb4-be6c9e65b7c1,Namespace:calico-apiserver,Attempt:0,}" Sep 4 17:58:43.421248 containerd[1447]: time="2024-09-04T17:58:43.420682652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f9bb56749-ffwj4,Uid:ee096629-3a51-446a-a0dd-d92cc47d8927,Namespace:calico-apiserver,Attempt:0,}" Sep 4 17:58:43.654876 systemd-networkd[1366]: cali82fa4cbba84: Link UP Sep 4 17:58:43.656194 systemd-networkd[1366]: cali82fa4cbba84: Gained carrier Sep 4 17:58:43.681198 containerd[1447]: 2024-09-04 17:58:43.514 [INFO][4768] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4054--1--0--2--9cde805234.novalocal-k8s-calico--apiserver--5f9bb56749--vgwnp-eth0 calico-apiserver-5f9bb56749- calico-apiserver b59dfbd6-0816-4128-bcb4-be6c9e65b7c1 848 0 2024-09-04 17:58:42 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5f9bb56749 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4054-1-0-2-9cde805234.novalocal calico-apiserver-5f9bb56749-vgwnp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali82fa4cbba84 [] []}} ContainerID="8a24d11a18309439eb4e3f49265a48e1bc5aafa4f4b4335301995c612922b1ee" Namespace="calico-apiserver" Pod="calico-apiserver-5f9bb56749-vgwnp" WorkloadEndpoint="ci--4054--1--0--2--9cde805234.novalocal-k8s-calico--apiserver--5f9bb56749--vgwnp-" Sep 4 17:58:43.681198 containerd[1447]: 2024-09-04 17:58:43.514 [INFO][4768] k8s.go 77: 
Extracted identifiers for CmdAddK8s ContainerID="8a24d11a18309439eb4e3f49265a48e1bc5aafa4f4b4335301995c612922b1ee" Namespace="calico-apiserver" Pod="calico-apiserver-5f9bb56749-vgwnp" WorkloadEndpoint="ci--4054--1--0--2--9cde805234.novalocal-k8s-calico--apiserver--5f9bb56749--vgwnp-eth0" Sep 4 17:58:43.681198 containerd[1447]: 2024-09-04 17:58:43.578 [INFO][4792] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8a24d11a18309439eb4e3f49265a48e1bc5aafa4f4b4335301995c612922b1ee" HandleID="k8s-pod-network.8a24d11a18309439eb4e3f49265a48e1bc5aafa4f4b4335301995c612922b1ee" Workload="ci--4054--1--0--2--9cde805234.novalocal-k8s-calico--apiserver--5f9bb56749--vgwnp-eth0" Sep 4 17:58:43.681198 containerd[1447]: 2024-09-04 17:58:43.600 [INFO][4792] ipam_plugin.go 270: Auto assigning IP ContainerID="8a24d11a18309439eb4e3f49265a48e1bc5aafa4f4b4335301995c612922b1ee" HandleID="k8s-pod-network.8a24d11a18309439eb4e3f49265a48e1bc5aafa4f4b4335301995c612922b1ee" Workload="ci--4054--1--0--2--9cde805234.novalocal-k8s-calico--apiserver--5f9bb56749--vgwnp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318a60), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4054-1-0-2-9cde805234.novalocal", "pod":"calico-apiserver-5f9bb56749-vgwnp", "timestamp":"2024-09-04 17:58:43.578210391 +0000 UTC"}, Hostname:"ci-4054-1-0-2-9cde805234.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:58:43.681198 containerd[1447]: 2024-09-04 17:58:43.600 [INFO][4792] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:58:43.681198 containerd[1447]: 2024-09-04 17:58:43.600 [INFO][4792] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:58:43.681198 containerd[1447]: 2024-09-04 17:58:43.600 [INFO][4792] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4054-1-0-2-9cde805234.novalocal' Sep 4 17:58:43.681198 containerd[1447]: 2024-09-04 17:58:43.604 [INFO][4792] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8a24d11a18309439eb4e3f49265a48e1bc5aafa4f4b4335301995c612922b1ee" host="ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:58:43.681198 containerd[1447]: 2024-09-04 17:58:43.613 [INFO][4792] ipam.go 372: Looking up existing affinities for host host="ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:58:43.681198 containerd[1447]: 2024-09-04 17:58:43.618 [INFO][4792] ipam.go 489: Trying affinity for 192.168.50.192/26 host="ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:58:43.681198 containerd[1447]: 2024-09-04 17:58:43.620 [INFO][4792] ipam.go 155: Attempting to load block cidr=192.168.50.192/26 host="ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:58:43.681198 containerd[1447]: 2024-09-04 17:58:43.623 [INFO][4792] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.50.192/26 host="ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:58:43.681198 containerd[1447]: 2024-09-04 17:58:43.623 [INFO][4792] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.50.192/26 handle="k8s-pod-network.8a24d11a18309439eb4e3f49265a48e1bc5aafa4f4b4335301995c612922b1ee" host="ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:58:43.681198 containerd[1447]: 2024-09-04 17:58:43.625 [INFO][4792] ipam.go 1685: Creating new handle: k8s-pod-network.8a24d11a18309439eb4e3f49265a48e1bc5aafa4f4b4335301995c612922b1ee Sep 4 17:58:43.681198 containerd[1447]: 2024-09-04 17:58:43.633 [INFO][4792] ipam.go 1203: Writing block in order to claim IPs block=192.168.50.192/26 handle="k8s-pod-network.8a24d11a18309439eb4e3f49265a48e1bc5aafa4f4b4335301995c612922b1ee" host="ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:58:43.681198 containerd[1447]: 2024-09-04 17:58:43.643 [INFO][4792] ipam.go 1216: Successfully claimed IPs: [192.168.50.197/26] block=192.168.50.192/26 handle="k8s-pod-network.8a24d11a18309439eb4e3f49265a48e1bc5aafa4f4b4335301995c612922b1ee" host="ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:58:43.681198 containerd[1447]: 2024-09-04 17:58:43.643 [INFO][4792] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.50.197/26] handle="k8s-pod-network.8a24d11a18309439eb4e3f49265a48e1bc5aafa4f4b4335301995c612922b1ee" host="ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:58:43.681198 containerd[1447]: 2024-09-04 17:58:43.643 [INFO][4792] ipam_plugin.go 379: Released host-wide IPAM lock. 
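The IPAM records above walk the full assignment path for calico-apiserver-5f9bb56749-vgwnp: look up the host's affinities, try the affine block 192.168.50.192/26, load it, write it back to claim an address, and end with 192.168.50.197/26 claimed. Every pod address seen in this section (.193, .195, .197, and .198 below) should therefore sit inside that one /26, which is quick to confirm with the standard library:

```go
// Checking that the addresses assigned in these records fall inside the
// node's affine IPAM block 192.168.50.192/26 (which covers .192-.255).
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.50.192/26")
	for _, s := range []string{
		"192.168.50.193", // coredns-7db6d8ff4d-bkmjk
		"192.168.50.195", // csi-node-driver-287hk
		"192.168.50.197", // calico-apiserver-5f9bb56749-vgwnp
		"192.168.50.198", // calico-apiserver-5f9bb56749-ffwj4
	} {
		fmt.Println(s, block.Contains(netip.MustParseAddr(s))) // all true
	}
}
```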
Sep 4 17:58:43.681198 containerd[1447]: 2024-09-04 17:58:43.643 [INFO][4792] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.50.197/26] IPv6=[] ContainerID="8a24d11a18309439eb4e3f49265a48e1bc5aafa4f4b4335301995c612922b1ee" HandleID="k8s-pod-network.8a24d11a18309439eb4e3f49265a48e1bc5aafa4f4b4335301995c612922b1ee" Workload="ci--4054--1--0--2--9cde805234.novalocal-k8s-calico--apiserver--5f9bb56749--vgwnp-eth0" Sep 4 17:58:43.687173 containerd[1447]: 2024-09-04 17:58:43.647 [INFO][4768] k8s.go 386: Populated endpoint ContainerID="8a24d11a18309439eb4e3f49265a48e1bc5aafa4f4b4335301995c612922b1ee" Namespace="calico-apiserver" Pod="calico-apiserver-5f9bb56749-vgwnp" WorkloadEndpoint="ci--4054--1--0--2--9cde805234.novalocal-k8s-calico--apiserver--5f9bb56749--vgwnp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054--1--0--2--9cde805234.novalocal-k8s-calico--apiserver--5f9bb56749--vgwnp-eth0", GenerateName:"calico-apiserver-5f9bb56749-", Namespace:"calico-apiserver", SelfLink:"", UID:"b59dfbd6-0816-4128-bcb4-be6c9e65b7c1", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 58, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f9bb56749", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054-1-0-2-9cde805234.novalocal", ContainerID:"", Pod:"calico-apiserver-5f9bb56749-vgwnp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali82fa4cbba84", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:58:43.687173 containerd[1447]: 2024-09-04 17:58:43.647 [INFO][4768] k8s.go 387: Calico CNI using IPs: [192.168.50.197/32] ContainerID="8a24d11a18309439eb4e3f49265a48e1bc5aafa4f4b4335301995c612922b1ee" Namespace="calico-apiserver" Pod="calico-apiserver-5f9bb56749-vgwnp" WorkloadEndpoint="ci--4054--1--0--2--9cde805234.novalocal-k8s-calico--apiserver--5f9bb56749--vgwnp-eth0" Sep 4 17:58:43.687173 containerd[1447]: 2024-09-04 17:58:43.647 [INFO][4768] dataplane_linux.go 68: Setting the host side veth name to cali82fa4cbba84 ContainerID="8a24d11a18309439eb4e3f49265a48e1bc5aafa4f4b4335301995c612922b1ee" Namespace="calico-apiserver" Pod="calico-apiserver-5f9bb56749-vgwnp" WorkloadEndpoint="ci--4054--1--0--2--9cde805234.novalocal-k8s-calico--apiserver--5f9bb56749--vgwnp-eth0" Sep 4 17:58:43.687173 containerd[1447]: 2024-09-04 17:58:43.654 [INFO][4768] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="8a24d11a18309439eb4e3f49265a48e1bc5aafa4f4b4335301995c612922b1ee" Namespace="calico-apiserver" Pod="calico-apiserver-5f9bb56749-vgwnp" WorkloadEndpoint="ci--4054--1--0--2--9cde805234.novalocal-k8s-calico--apiserver--5f9bb56749--vgwnp-eth0" Sep 4 17:58:43.687173 containerd[1447]: 2024-09-04 17:58:43.657 [INFO][4768] 
k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8a24d11a18309439eb4e3f49265a48e1bc5aafa4f4b4335301995c612922b1ee" Namespace="calico-apiserver" Pod="calico-apiserver-5f9bb56749-vgwnp" WorkloadEndpoint="ci--4054--1--0--2--9cde805234.novalocal-k8s-calico--apiserver--5f9bb56749--vgwnp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054--1--0--2--9cde805234.novalocal-k8s-calico--apiserver--5f9bb56749--vgwnp-eth0", GenerateName:"calico-apiserver-5f9bb56749-", Namespace:"calico-apiserver", SelfLink:"", UID:"b59dfbd6-0816-4128-bcb4-be6c9e65b7c1", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 58, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f9bb56749", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054-1-0-2-9cde805234.novalocal", ContainerID:"8a24d11a18309439eb4e3f49265a48e1bc5aafa4f4b4335301995c612922b1ee", Pod:"calico-apiserver-5f9bb56749-vgwnp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali82fa4cbba84", MAC:"ae:71:c6:16:f2:71", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:58:43.687173 containerd[1447]: 2024-09-04 17:58:43.671 [INFO][4768] k8s.go 500: Wrote updated endpoint to datastore ContainerID="8a24d11a18309439eb4e3f49265a48e1bc5aafa4f4b4335301995c612922b1ee" Namespace="calico-apiserver" Pod="calico-apiserver-5f9bb56749-vgwnp" WorkloadEndpoint="ci--4054--1--0--2--9cde805234.novalocal-k8s-calico--apiserver--5f9bb56749--vgwnp-eth0" Sep 4 17:58:43.735841 systemd-networkd[1366]: calie5505686205: Link UP Sep 4 17:58:43.736754 systemd-networkd[1366]: calie5505686205: Gained carrier Sep 4 17:58:43.763813 containerd[1447]: 2024-09-04 17:58:43.530 [INFO][4778] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4054--1--0--2--9cde805234.novalocal-k8s-calico--apiserver--5f9bb56749--ffwj4-eth0 calico-apiserver-5f9bb56749- calico-apiserver ee096629-3a51-446a-a0dd-d92cc47d8927 849 0 2024-09-04 17:58:42 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5f9bb56749 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4054-1-0-2-9cde805234.novalocal calico-apiserver-5f9bb56749-ffwj4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie5505686205 [] []}} ContainerID="54a40b6858bfaf7c80b28bf2f62a5e300f9ccb2bd3776a924a99d6aff5234043" Namespace="calico-apiserver" Pod="calico-apiserver-5f9bb56749-ffwj4" WorkloadEndpoint="ci--4054--1--0--2--9cde805234.novalocal-k8s-calico--apiserver--5f9bb56749--ffwj4-" Sep 4 17:58:43.763813 
containerd[1447]: 2024-09-04 17:58:43.532 [INFO][4778] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="54a40b6858bfaf7c80b28bf2f62a5e300f9ccb2bd3776a924a99d6aff5234043" Namespace="calico-apiserver" Pod="calico-apiserver-5f9bb56749-ffwj4" WorkloadEndpoint="ci--4054--1--0--2--9cde805234.novalocal-k8s-calico--apiserver--5f9bb56749--ffwj4-eth0" Sep 4 17:58:43.763813 containerd[1447]: 2024-09-04 17:58:43.595 [INFO][4796] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="54a40b6858bfaf7c80b28bf2f62a5e300f9ccb2bd3776a924a99d6aff5234043" HandleID="k8s-pod-network.54a40b6858bfaf7c80b28bf2f62a5e300f9ccb2bd3776a924a99d6aff5234043" Workload="ci--4054--1--0--2--9cde805234.novalocal-k8s-calico--apiserver--5f9bb56749--ffwj4-eth0" Sep 4 17:58:43.763813 containerd[1447]: 2024-09-04 17:58:43.610 [INFO][4796] ipam_plugin.go 270: Auto assigning IP ContainerID="54a40b6858bfaf7c80b28bf2f62a5e300f9ccb2bd3776a924a99d6aff5234043" HandleID="k8s-pod-network.54a40b6858bfaf7c80b28bf2f62a5e300f9ccb2bd3776a924a99d6aff5234043" Workload="ci--4054--1--0--2--9cde805234.novalocal-k8s-calico--apiserver--5f9bb56749--ffwj4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00035f550), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4054-1-0-2-9cde805234.novalocal", "pod":"calico-apiserver-5f9bb56749-ffwj4", "timestamp":"2024-09-04 17:58:43.595063355 +0000 UTC"}, Hostname:"ci-4054-1-0-2-9cde805234.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:58:43.763813 containerd[1447]: 2024-09-04 17:58:43.610 [INFO][4796] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:58:43.763813 containerd[1447]: 2024-09-04 17:58:43.643 [INFO][4796] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:58:43.763813 containerd[1447]: 2024-09-04 17:58:43.643 [INFO][4796] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4054-1-0-2-9cde805234.novalocal' Sep 4 17:58:43.763813 containerd[1447]: 2024-09-04 17:58:43.653 [INFO][4796] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.54a40b6858bfaf7c80b28bf2f62a5e300f9ccb2bd3776a924a99d6aff5234043" host="ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:58:43.763813 containerd[1447]: 2024-09-04 17:58:43.669 [INFO][4796] ipam.go 372: Looking up existing affinities for host host="ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:58:43.763813 containerd[1447]: 2024-09-04 17:58:43.678 [INFO][4796] ipam.go 489: Trying affinity for 192.168.50.192/26 host="ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:58:43.763813 containerd[1447]: 2024-09-04 17:58:43.689 [INFO][4796] ipam.go 155: Attempting to load block cidr=192.168.50.192/26 host="ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:58:43.763813 containerd[1447]: 2024-09-04 17:58:43.705 [INFO][4796] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.50.192/26 host="ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:58:43.763813 containerd[1447]: 2024-09-04 17:58:43.705 [INFO][4796] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.50.192/26 handle="k8s-pod-network.54a40b6858bfaf7c80b28bf2f62a5e300f9ccb2bd3776a924a99d6aff5234043" host="ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:58:43.763813 containerd[1447]: 2024-09-04 17:58:43.709 [INFO][4796] ipam.go 1685: Creating new handle: k8s-pod-network.54a40b6858bfaf7c80b28bf2f62a5e300f9ccb2bd3776a924a99d6aff5234043 Sep 4 17:58:43.763813 containerd[1447]: 2024-09-04 17:58:43.718 [INFO][4796] ipam.go 1203: Writing block in order to claim IPs block=192.168.50.192/26 handle="k8s-pod-network.54a40b6858bfaf7c80b28bf2f62a5e300f9ccb2bd3776a924a99d6aff5234043" host="ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:58:43.763813 containerd[1447]: 2024-09-04 17:58:43.727 [INFO][4796] ipam.go 1216: Successfully claimed IPs: [192.168.50.198/26] block=192.168.50.192/26 handle="k8s-pod-network.54a40b6858bfaf7c80b28bf2f62a5e300f9ccb2bd3776a924a99d6aff5234043" host="ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:58:43.763813 containerd[1447]: 2024-09-04 17:58:43.727 [INFO][4796] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.50.198/26] handle="k8s-pod-network.54a40b6858bfaf7c80b28bf2f62a5e300f9ccb2bd3776a924a99d6aff5234043" host="ci-4054-1-0-2-9cde805234.novalocal" Sep 4 17:58:43.763813 containerd[1447]: 2024-09-04 17:58:43.727 [INFO][4796] ipam_plugin.go 379: Released host-wide IPAM lock. 
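The second assignment (for calico-apiserver-5f9bb56749-ffwj4) repeats the same walk through the same block and claims the next free address, 192.168.50.198/26. A toy lowest-free-ordinal allocator over a /26 conveys the idea; Calico's real block bookkeeping lives in its datastore and is considerably more involved, so treat this purely as a sketch.

```go
// Toy allocator over a /26 block: hand out the lowest free ordinal,
// the way .197 is followed by .198 in the records above. Illustrative
// only.
package main

import (
	"fmt"
	"net/netip"
)

type block struct {
	base netip.Addr
	used [64]bool // one slot per ordinal in a /26
}

func (b *block) assign() (netip.Addr, bool) {
	for i, inUse := range b.used {
		if !inUse {
			b.used[i] = true
			addr := b.base
			for j := 0; j < i; j++ {
				addr = addr.Next()
			}
			return addr, true
		}
	}
	return netip.Addr{}, false // block exhausted
}

func main() {
	b := &block{base: netip.MustParseAddr("192.168.50.192")}
	// Pretend ordinals 0-5 (.192 through .197) are taken, as in the log.
	for i := 0; i <= 5; i++ {
		b.used[i] = true
	}
	next, _ := b.assign()
	fmt.Println(next) // 192.168.50.198
}
```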
Sep 4 17:58:43.763813 containerd[1447]: 2024-09-04 17:58:43.727 [INFO][4796] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.50.198/26] IPv6=[] ContainerID="54a40b6858bfaf7c80b28bf2f62a5e300f9ccb2bd3776a924a99d6aff5234043" HandleID="k8s-pod-network.54a40b6858bfaf7c80b28bf2f62a5e300f9ccb2bd3776a924a99d6aff5234043" Workload="ci--4054--1--0--2--9cde805234.novalocal-k8s-calico--apiserver--5f9bb56749--ffwj4-eth0" Sep 4 17:58:43.764675 containerd[1447]: 2024-09-04 17:58:43.731 [INFO][4778] k8s.go 386: Populated endpoint ContainerID="54a40b6858bfaf7c80b28bf2f62a5e300f9ccb2bd3776a924a99d6aff5234043" Namespace="calico-apiserver" Pod="calico-apiserver-5f9bb56749-ffwj4" WorkloadEndpoint="ci--4054--1--0--2--9cde805234.novalocal-k8s-calico--apiserver--5f9bb56749--ffwj4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054--1--0--2--9cde805234.novalocal-k8s-calico--apiserver--5f9bb56749--ffwj4-eth0", GenerateName:"calico-apiserver-5f9bb56749-", Namespace:"calico-apiserver", SelfLink:"", UID:"ee096629-3a51-446a-a0dd-d92cc47d8927", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 58, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f9bb56749", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054-1-0-2-9cde805234.novalocal", ContainerID:"", Pod:"calico-apiserver-5f9bb56749-ffwj4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie5505686205", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:58:43.764675 containerd[1447]: 2024-09-04 17:58:43.731 [INFO][4778] k8s.go 387: Calico CNI using IPs: [192.168.50.198/32] ContainerID="54a40b6858bfaf7c80b28bf2f62a5e300f9ccb2bd3776a924a99d6aff5234043" Namespace="calico-apiserver" Pod="calico-apiserver-5f9bb56749-ffwj4" WorkloadEndpoint="ci--4054--1--0--2--9cde805234.novalocal-k8s-calico--apiserver--5f9bb56749--ffwj4-eth0" Sep 4 17:58:43.764675 containerd[1447]: 2024-09-04 17:58:43.731 [INFO][4778] dataplane_linux.go 68: Setting the host side veth name to calie5505686205 ContainerID="54a40b6858bfaf7c80b28bf2f62a5e300f9ccb2bd3776a924a99d6aff5234043" Namespace="calico-apiserver" Pod="calico-apiserver-5f9bb56749-ffwj4" WorkloadEndpoint="ci--4054--1--0--2--9cde805234.novalocal-k8s-calico--apiserver--5f9bb56749--ffwj4-eth0" Sep 4 17:58:43.764675 containerd[1447]: 2024-09-04 17:58:43.737 [INFO][4778] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="54a40b6858bfaf7c80b28bf2f62a5e300f9ccb2bd3776a924a99d6aff5234043" Namespace="calico-apiserver" Pod="calico-apiserver-5f9bb56749-ffwj4" WorkloadEndpoint="ci--4054--1--0--2--9cde805234.novalocal-k8s-calico--apiserver--5f9bb56749--ffwj4-eth0" Sep 4 17:58:43.764675 containerd[1447]: 2024-09-04 17:58:43.737 [INFO][4778] 
k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="54a40b6858bfaf7c80b28bf2f62a5e300f9ccb2bd3776a924a99d6aff5234043" Namespace="calico-apiserver" Pod="calico-apiserver-5f9bb56749-ffwj4" WorkloadEndpoint="ci--4054--1--0--2--9cde805234.novalocal-k8s-calico--apiserver--5f9bb56749--ffwj4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054--1--0--2--9cde805234.novalocal-k8s-calico--apiserver--5f9bb56749--ffwj4-eth0", GenerateName:"calico-apiserver-5f9bb56749-", Namespace:"calico-apiserver", SelfLink:"", UID:"ee096629-3a51-446a-a0dd-d92cc47d8927", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 58, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f9bb56749", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054-1-0-2-9cde805234.novalocal", ContainerID:"54a40b6858bfaf7c80b28bf2f62a5e300f9ccb2bd3776a924a99d6aff5234043", Pod:"calico-apiserver-5f9bb56749-ffwj4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie5505686205", MAC:"66:13:39:1e:f7:df", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:58:43.764675 containerd[1447]: 2024-09-04 17:58:43.758 [INFO][4778] k8s.go 500: Wrote updated endpoint to datastore ContainerID="54a40b6858bfaf7c80b28bf2f62a5e300f9ccb2bd3776a924a99d6aff5234043" Namespace="calico-apiserver" Pod="calico-apiserver-5f9bb56749-ffwj4" WorkloadEndpoint="ci--4054--1--0--2--9cde805234.novalocal-k8s-calico--apiserver--5f9bb56749--ffwj4-eth0" Sep 4 17:58:43.774053 containerd[1447]: time="2024-09-04T17:58:43.773690962Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:58:43.774053 containerd[1447]: time="2024-09-04T17:58:43.773766610Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:58:43.774053 containerd[1447]: time="2024-09-04T17:58:43.773782428Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:58:43.774053 containerd[1447]: time="2024-09-04T17:58:43.773887169Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:58:43.811488 systemd[1]: Started cri-containerd-8a24d11a18309439eb4e3f49265a48e1bc5aafa4f4b4335301995c612922b1ee.scope - libcontainer container 8a24d11a18309439eb4e3f49265a48e1bc5aafa4f4b4335301995c612922b1ee. Sep 4 17:58:43.852495 containerd[1447]: time="2024-09-04T17:58:43.852010632Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:58:43.852495 containerd[1447]: time="2024-09-04T17:58:43.852375016Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:58:43.852495 containerd[1447]: time="2024-09-04T17:58:43.852404460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:58:43.853092 containerd[1447]: time="2024-09-04T17:58:43.852593003Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:58:43.875434 systemd[1]: Started cri-containerd-54a40b6858bfaf7c80b28bf2f62a5e300f9ccb2bd3776a924a99d6aff5234043.scope - libcontainer container 54a40b6858bfaf7c80b28bf2f62a5e300f9ccb2bd3776a924a99d6aff5234043. Sep 4 17:58:43.924624 containerd[1447]: time="2024-09-04T17:58:43.923915180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f9bb56749-vgwnp,Uid:b59dfbd6-0816-4128-bcb4-be6c9e65b7c1,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"8a24d11a18309439eb4e3f49265a48e1bc5aafa4f4b4335301995c612922b1ee\"" Sep 4 17:58:43.932888 containerd[1447]: time="2024-09-04T17:58:43.931647261Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\"" Sep 4 17:58:43.976935 containerd[1447]: time="2024-09-04T17:58:43.976883147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f9bb56749-ffwj4,Uid:ee096629-3a51-446a-a0dd-d92cc47d8927,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"54a40b6858bfaf7c80b28bf2f62a5e300f9ccb2bd3776a924a99d6aff5234043\"" Sep 4 17:58:45.449612 systemd-networkd[1366]: calie5505686205: Gained IPv6LL Sep 4 17:58:45.577810 systemd-networkd[1366]: cali82fa4cbba84: Gained IPv6LL Sep 4 17:58:48.584670 containerd[1447]: time="2024-09-04T17:58:48.584518834Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:58:48.587292 containerd[1447]: time="2024-09-04T17:58:48.587117848Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=40419849" Sep 4 17:58:48.589615 containerd[1447]: time="2024-09-04T17:58:48.589485657Z" level=info msg="ImageCreate event name:\"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:58:48.606879 containerd[1447]: time="2024-09-04T17:58:48.606715471Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:58:48.609824 containerd[1447]: time="2024-09-04T17:58:48.609123223Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"41912266\" in 4.677298319s" Sep 4 17:58:48.609824 containerd[1447]: time="2024-09-04T17:58:48.609196980Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\"" Sep 4 
17:58:48.612950 containerd[1447]: time="2024-09-04T17:58:48.612710693Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\"" Sep 4 17:58:48.616817 containerd[1447]: time="2024-09-04T17:58:48.616495434Z" level=info msg="CreateContainer within sandbox \"8a24d11a18309439eb4e3f49265a48e1bc5aafa4f4b4335301995c612922b1ee\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 4 17:58:48.679862 containerd[1447]: time="2024-09-04T17:58:48.679685090Z" level=info msg="CreateContainer within sandbox \"8a24d11a18309439eb4e3f49265a48e1bc5aafa4f4b4335301995c612922b1ee\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8f2800c171ef98e41eabcd15fdfe9e2d8754b45f10292c13076029c0467ee60f\"" Sep 4 17:58:48.681304 containerd[1447]: time="2024-09-04T17:58:48.681268859Z" level=info msg="StartContainer for \"8f2800c171ef98e41eabcd15fdfe9e2d8754b45f10292c13076029c0467ee60f\"" Sep 4 17:58:48.730424 systemd[1]: Started cri-containerd-8f2800c171ef98e41eabcd15fdfe9e2d8754b45f10292c13076029c0467ee60f.scope - libcontainer container 8f2800c171ef98e41eabcd15fdfe9e2d8754b45f10292c13076029c0467ee60f. Sep 4 17:58:48.779724 containerd[1447]: time="2024-09-04T17:58:48.779662733Z" level=info msg="StartContainer for \"8f2800c171ef98e41eabcd15fdfe9e2d8754b45f10292c13076029c0467ee60f\" returns successfully" Sep 4 17:58:49.384794 containerd[1447]: time="2024-09-04T17:58:49.383373647Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:58:49.386214 containerd[1447]: time="2024-09-04T17:58:49.386069696Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=77" Sep 4 17:58:49.390701 containerd[1447]: time="2024-09-04T17:58:49.390635465Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"41912266\" in 777.517775ms" Sep 4 17:58:49.390891 containerd[1447]: time="2024-09-04T17:58:49.390872912Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\"" Sep 4 17:58:49.395510 containerd[1447]: time="2024-09-04T17:58:49.395459198Z" level=info msg="CreateContainer within sandbox \"54a40b6858bfaf7c80b28bf2f62a5e300f9ccb2bd3776a924a99d6aff5234043\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 4 17:58:49.428537 containerd[1447]: time="2024-09-04T17:58:49.427839800Z" level=info msg="CreateContainer within sandbox \"54a40b6858bfaf7c80b28bf2f62a5e300f9ccb2bd3776a924a99d6aff5234043\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ef71a019fb11f81745257c2aa94c47c506e8e4f4f767c420390146f40961d060\"" Sep 4 17:58:49.429020 containerd[1447]: time="2024-09-04T17:58:49.428999322Z" level=info msg="StartContainer for \"ef71a019fb11f81745257c2aa94c47c506e8e4f4f767c420390146f40961d060\"" Sep 4 17:58:49.479505 systemd[1]: Started cri-containerd-ef71a019fb11f81745257c2aa94c47c506e8e4f4f767c420390146f40961d060.scope - libcontainer container ef71a019fb11f81745257c2aa94c47c506e8e4f4f767c420390146f40961d060. 
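Note the two PullImage timings for the same tag above: the first pull of ghcr.io/flatcar/calico/apiserver:v3.28.1 takes 4.677298319s with 40419849 bytes read, while the second completes in 777.517775ms with only 77 bytes read, consistent with the layers already being local so that only a manifest check remains. The ratio is easy to confirm:

```go
// Comparing the two PullImage timings recorded above for
// ghcr.io/flatcar/calico/apiserver:v3.28.1. The warm pull reads just
// 77 bytes, i.e. the content is already present locally.
package main

import (
	"fmt"
	"time"
)

func main() {
	first, _ := time.ParseDuration("4.677298319s")  // cold pull, 40419849 bytes read
	second, _ := time.ParseDuration("777.517775ms") // warm pull, 77 bytes read
	fmt.Printf("warm pull is %.1fx faster\n", float64(first)/float64(second)) // ~6.0x
}
```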
Sep 4 17:58:49.516478 kubelet[2661]: I0904 17:58:49.516044 2661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5f9bb56749-vgwnp" podStartSLOduration=2.835482518 podStartE2EDuration="7.516019389s" podCreationTimestamp="2024-09-04 17:58:42 +0000 UTC" firstStartedPulling="2024-09-04 17:58:43.930780934 +0000 UTC m=+67.312691201" lastFinishedPulling="2024-09-04 17:58:48.611317714 +0000 UTC m=+71.993228072" observedRunningTime="2024-09-04 17:58:49.369578994 +0000 UTC m=+72.751489271" watchObservedRunningTime="2024-09-04 17:58:49.516019389 +0000 UTC m=+72.897929657" Sep 4 17:58:49.576947 containerd[1447]: time="2024-09-04T17:58:49.576892853Z" level=info msg="StartContainer for \"ef71a019fb11f81745257c2aa94c47c506e8e4f4f767c420390146f40961d060\" returns successfully" Sep 4 17:58:50.392122 kubelet[2661]: I0904 17:58:50.392047 2661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5f9bb56749-ffwj4" podStartSLOduration=2.98315912 podStartE2EDuration="8.39202633s" podCreationTimestamp="2024-09-04 17:58:42 +0000 UTC" firstStartedPulling="2024-09-04 17:58:43.983433678 +0000 UTC m=+67.365343955" lastFinishedPulling="2024-09-04 17:58:49.392300888 +0000 UTC m=+72.774211165" observedRunningTime="2024-09-04 17:58:50.389146995 +0000 UTC m=+73.771057262" watchObservedRunningTime="2024-09-04 17:58:50.39202633 +0000 UTC m=+73.773936607" Sep 4 17:58:50.789774 systemd[1]: Started sshd@9-172.24.4.18:22-172.24.4.1:45884.service - OpenSSH per-connection server daemon (172.24.4.1:45884). Sep 4 17:58:52.195734 sshd[5041]: Accepted publickey for core from 172.24.4.1 port 45884 ssh2: RSA SHA256:JnA7Fh8lVkr6ENifNOXj431OPLJBOL+/PI8dMas4Eok Sep 4 17:58:52.201366 sshd[5041]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:58:52.211865 systemd-logind[1432]: New session 12 of user core. Sep 4 17:58:52.219261 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 4 17:58:53.751088 sshd[5041]: pam_unix(sshd:session): session closed for user core Sep 4 17:58:53.766130 systemd-logind[1432]: Session 12 logged out. Waiting for processes to exit. Sep 4 17:58:53.768446 systemd[1]: sshd@9-172.24.4.18:22-172.24.4.1:45884.service: Deactivated successfully. Sep 4 17:58:53.774564 systemd[1]: session-12.scope: Deactivated successfully. Sep 4 17:58:53.777428 systemd-logind[1432]: Removed session 12. Sep 4 17:58:58.777873 systemd[1]: Started sshd@10-172.24.4.18:22-172.24.4.1:43260.service - OpenSSH per-connection server daemon (172.24.4.1:43260). Sep 4 17:59:00.041517 sshd[5067]: Accepted publickey for core from 172.24.4.1 port 43260 ssh2: RSA SHA256:JnA7Fh8lVkr6ENifNOXj431OPLJBOL+/PI8dMas4Eok Sep 4 17:59:00.042159 sshd[5067]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:59:00.048581 systemd-logind[1432]: New session 13 of user core. Sep 4 17:59:00.054436 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 4 17:59:01.295854 sshd[5067]: pam_unix(sshd:session): session closed for user core Sep 4 17:59:01.310451 systemd[1]: sshd@10-172.24.4.18:22-172.24.4.1:43260.service: Deactivated successfully. Sep 4 17:59:01.313415 systemd[1]: session-13.scope: Deactivated successfully. Sep 4 17:59:01.314852 systemd-logind[1432]: Session 13 logged out. Waiting for processes to exit. Sep 4 17:59:01.317595 systemd-logind[1432]: Removed session 13. 
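The SSH records above follow systemd's per-connection pattern: each inbound connection gets its own transient unit named sshd@<counter>-<local ip>:<port>-<remote ip>:<port>.service plus a numbered session scope (session-12 through session-16 here). systemd generates these names rather than parsing them, but decomposing one back into its endpoints is straightforward; parseUnit below is a hypothetical helper written only to illustrate the naming convention.

```go
// Decomposing a per-connection sshd unit name of the form seen above,
// e.g. "sshd@9-172.24.4.18:22-172.24.4.1:45884.service". Illustrative
// helper only.
package main

import (
	"fmt"
	"strings"
)

func parseUnit(name string) (counter, local, remote string, ok bool) {
	name = strings.TrimPrefix(name, "sshd@")
	name = strings.TrimSuffix(name, ".service")
	parts := strings.SplitN(name, "-", 3) // counter, local addr, remote addr
	if len(parts) != 3 {
		return "", "", "", false
	}
	return parts[0], parts[1], parts[2], true
}

func main() {
	c, local, remote, _ := parseUnit("sshd@9-172.24.4.18:22-172.24.4.1:45884.service")
	fmt.Println(c, local, remote) // 9 172.24.4.18:22 172.24.4.1:45884
}
```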
Sep 4 17:59:06.314852 systemd[1]: Started sshd@11-172.24.4.18:22-172.24.4.1:51072.service - OpenSSH per-connection server daemon (172.24.4.1:51072). Sep 4 17:59:07.696950 sshd[5090]: Accepted publickey for core from 172.24.4.1 port 51072 ssh2: RSA SHA256:JnA7Fh8lVkr6ENifNOXj431OPLJBOL+/PI8dMas4Eok Sep 4 17:59:07.702588 sshd[5090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:59:07.714529 systemd-logind[1432]: New session 14 of user core. Sep 4 17:59:07.723628 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 4 17:59:08.735047 sshd[5090]: pam_unix(sshd:session): session closed for user core Sep 4 17:59:08.745764 systemd[1]: sshd@11-172.24.4.18:22-172.24.4.1:51072.service: Deactivated successfully. Sep 4 17:59:08.749086 systemd[1]: session-14.scope: Deactivated successfully. Sep 4 17:59:08.753318 systemd-logind[1432]: Session 14 logged out. Waiting for processes to exit. Sep 4 17:59:08.759565 systemd[1]: Started sshd@12-172.24.4.18:22-172.24.4.1:51076.service - OpenSSH per-connection server daemon (172.24.4.1:51076). Sep 4 17:59:08.761538 systemd-logind[1432]: Removed session 14. Sep 4 17:59:10.271331 sshd[5131]: Accepted publickey for core from 172.24.4.1 port 51076 ssh2: RSA SHA256:JnA7Fh8lVkr6ENifNOXj431OPLJBOL+/PI8dMas4Eok Sep 4 17:59:10.273962 sshd[5131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:59:10.286908 systemd-logind[1432]: New session 15 of user core. Sep 4 17:59:10.296625 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 4 17:59:11.154823 sshd[5131]: pam_unix(sshd:session): session closed for user core Sep 4 17:59:11.168515 systemd[1]: sshd@12-172.24.4.18:22-172.24.4.1:51076.service: Deactivated successfully. Sep 4 17:59:11.172824 systemd[1]: session-15.scope: Deactivated successfully. Sep 4 17:59:11.178306 systemd-logind[1432]: Session 15 logged out. Waiting for processes to exit. Sep 4 17:59:11.188868 systemd[1]: Started sshd@13-172.24.4.18:22-172.24.4.1:51092.service - OpenSSH per-connection server daemon (172.24.4.1:51092). Sep 4 17:59:11.192917 systemd-logind[1432]: Removed session 15. Sep 4 17:59:12.703312 sshd[5142]: Accepted publickey for core from 172.24.4.1 port 51092 ssh2: RSA SHA256:JnA7Fh8lVkr6ENifNOXj431OPLJBOL+/PI8dMas4Eok Sep 4 17:59:12.706337 sshd[5142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:59:12.717423 systemd-logind[1432]: New session 16 of user core. Sep 4 17:59:12.726551 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 4 17:59:13.619543 sshd[5142]: pam_unix(sshd:session): session closed for user core Sep 4 17:59:13.629752 systemd[1]: sshd@13-172.24.4.18:22-172.24.4.1:51092.service: Deactivated successfully. Sep 4 17:59:13.636060 systemd[1]: session-16.scope: Deactivated successfully. Sep 4 17:59:13.637704 systemd-logind[1432]: Session 16 logged out. Waiting for processes to exit. Sep 4 17:59:13.639993 systemd-logind[1432]: Removed session 16. Sep 4 17:59:18.643124 systemd[1]: Started sshd@14-172.24.4.18:22-172.24.4.1:44440.service - OpenSSH per-connection server daemon (172.24.4.1:44440). Sep 4 17:59:20.348489 sshd[5185]: Accepted publickey for core from 172.24.4.1 port 44440 ssh2: RSA SHA256:JnA7Fh8lVkr6ENifNOXj431OPLJBOL+/PI8dMas4Eok Sep 4 17:59:20.352154 sshd[5185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:59:20.364062 systemd-logind[1432]: New session 17 of user core. 
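[Editor's note: each connection above runs as its own transient sshd@….service, the pattern produced by per-connection socket activation; the instance name packs a sequence number plus the local and remote endpoints, e.g. sshd@12-172.24.4.18:22-172.24.4.1:51076.service. A throwaway parser for those names (the regex is my own, not anything systemd ships):]

```python
import re

UNIT = re.compile(
    r"sshd@(?P<seq>\d+)-(?P<laddr>[\d.]+):(?P<lport>\d+)"
    r"-(?P<raddr>[\d.]+):(?P<rport>\d+)\.service"
)

m = UNIT.fullmatch("sshd@12-172.24.4.18:22-172.24.4.1:51076.service")
assert m is not None
print(m["seq"], m["laddr"], m["raddr"], m["rport"])  # 12 172.24.4.18 172.24.4.1 51076
```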
Sep 4 17:59:20.371591 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 4 17:59:21.265059 sshd[5185]: pam_unix(sshd:session): session closed for user core Sep 4 17:59:21.271527 systemd-logind[1432]: Session 17 logged out. Waiting for processes to exit. Sep 4 17:59:21.272152 systemd[1]: sshd@14-172.24.4.18:22-172.24.4.1:44440.service: Deactivated successfully. Sep 4 17:59:21.275077 systemd[1]: session-17.scope: Deactivated successfully. Sep 4 17:59:21.277374 systemd-logind[1432]: Removed session 17. Sep 4 17:59:26.291861 systemd[1]: Started sshd@15-172.24.4.18:22-172.24.4.1:48838.service - OpenSSH per-connection server daemon (172.24.4.1:48838). Sep 4 17:59:27.655162 sshd[5202]: Accepted publickey for core from 172.24.4.1 port 48838 ssh2: RSA SHA256:JnA7Fh8lVkr6ENifNOXj431OPLJBOL+/PI8dMas4Eok Sep 4 17:59:27.684775 sshd[5202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:59:27.706454 systemd-logind[1432]: New session 18 of user core. Sep 4 17:59:27.718019 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 4 17:59:28.697726 sshd[5202]: pam_unix(sshd:session): session closed for user core Sep 4 17:59:28.706737 systemd[1]: sshd@15-172.24.4.18:22-172.24.4.1:48838.service: Deactivated successfully. Sep 4 17:59:28.711780 systemd[1]: session-18.scope: Deactivated successfully. Sep 4 17:59:28.714423 systemd-logind[1432]: Session 18 logged out. Waiting for processes to exit. Sep 4 17:59:28.717220 systemd-logind[1432]: Removed session 18. Sep 4 17:59:33.719955 systemd[1]: Started sshd@16-172.24.4.18:22-172.24.4.1:48846.service - OpenSSH per-connection server daemon (172.24.4.1:48846). Sep 4 17:59:34.014672 update_engine[1433]: I0904 17:59:34.014526 1433 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Sep 4 17:59:34.015297 update_engine[1433]: I0904 17:59:34.015198 1433 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Sep 4 17:59:34.026369 update_engine[1433]: I0904 17:59:34.026122 1433 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Sep 4 17:59:34.028679 update_engine[1433]: I0904 17:59:34.028569 1433 omaha_request_params.cc:62] Current group set to beta Sep 4 17:59:34.028789 update_engine[1433]: I0904 17:59:34.028718 1433 update_attempter.cc:499] Already updated boot flags. Skipping. Sep 4 17:59:34.028789 update_engine[1433]: I0904 17:59:34.028725 1433 update_attempter.cc:643] Scheduling an action processor start. 
Sep 4 17:59:34.028789 update_engine[1433]: I0904 17:59:34.028748 1433 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Sep 4 17:59:34.029333 update_engine[1433]: I0904 17:59:34.028795 1433 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Sep 4 17:59:34.029333 update_engine[1433]: I0904 17:59:34.028856 1433 omaha_request_action.cc:271] Posting an Omaha request to disabled Sep 4 17:59:34.029333 update_engine[1433]: I0904 17:59:34.028862 1433 omaha_request_action.cc:272] Request: Sep 4 17:59:34.029333 update_engine[1433]: Sep 4 17:59:34.029333 update_engine[1433]: Sep 4 17:59:34.029333 update_engine[1433]: Sep 4 17:59:34.029333 update_engine[1433]: Sep 4 17:59:34.029333 update_engine[1433]: Sep 4 17:59:34.029333 update_engine[1433]: Sep 4 17:59:34.029333 update_engine[1433]: Sep 4 17:59:34.029333 update_engine[1433]: Sep 4 17:59:34.029333 update_engine[1433]: I0904 17:59:34.028867 1433 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 4 17:59:34.059298 update_engine[1433]: I0904 17:59:34.058857 1433 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 4 17:59:34.059298 update_engine[1433]: I0904 17:59:34.059190 1433 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 4 17:59:34.073211 update_engine[1433]: E0904 17:59:34.071870 1433 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 4 17:59:34.073211 update_engine[1433]: I0904 17:59:34.071978 1433 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Sep 4 17:59:34.073478 locksmithd[1457]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Sep 4 17:59:34.973143 systemd[1]: run-containerd-runc-k8s.io-6675665c9257cb85b92f5a4c2344e7d8623f2c4a8d17c12253e004a713109850-runc.9cy0Ee.mount: Deactivated successfully. Sep 4 17:59:35.058390 sshd[5220]: Accepted publickey for core from 172.24.4.1 port 48846 ssh2: RSA SHA256:JnA7Fh8lVkr6ENifNOXj431OPLJBOL+/PI8dMas4Eok Sep 4 17:59:35.091739 sshd[5220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:59:35.204309 systemd-logind[1432]: New session 19 of user core. Sep 4 17:59:35.207620 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 4 17:59:36.453999 sshd[5220]: pam_unix(sshd:session): session closed for user core Sep 4 17:59:36.464076 systemd-logind[1432]: Session 19 logged out. Waiting for processes to exit. Sep 4 17:59:36.469680 systemd[1]: Started sshd@17-172.24.4.18:22-172.24.4.1:35424.service - OpenSSH per-connection server daemon (172.24.4.1:35424). Sep 4 17:59:36.472083 systemd[1]: sshd@16-172.24.4.18:22-172.24.4.1:48846.service: Deactivated successfully. Sep 4 17:59:36.473914 systemd[1]: session-19.scope: Deactivated successfully. Sep 4 17:59:36.475951 systemd-logind[1432]: Removed session 19. Sep 4 17:59:38.050673 sshd[5251]: Accepted publickey for core from 172.24.4.1 port 35424 ssh2: RSA SHA256:JnA7Fh8lVkr6ENifNOXj431OPLJBOL+/PI8dMas4Eok Sep 4 17:59:38.053827 sshd[5251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:59:38.067851 systemd-logind[1432]: New session 20 of user core. Sep 4 17:59:38.077580 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 4 17:59:39.725558 sshd[5251]: pam_unix(sshd:session): session closed for user core Sep 4 17:59:39.739060 systemd[1]: Started sshd@18-172.24.4.18:22-172.24.4.1:35434.service - OpenSSH per-connection server daemon (172.24.4.1:35434). 
Sep 4 17:59:39.741883 systemd[1]: sshd@17-172.24.4.18:22-172.24.4.1:35424.service: Deactivated successfully. Sep 4 17:59:39.747448 systemd[1]: session-20.scope: Deactivated successfully. Sep 4 17:59:39.756806 systemd-logind[1432]: Session 20 logged out. Waiting for processes to exit. Sep 4 17:59:39.762723 systemd-logind[1432]: Removed session 20. Sep 4 17:59:41.284940 sshd[5291]: Accepted publickey for core from 172.24.4.1 port 35434 ssh2: RSA SHA256:JnA7Fh8lVkr6ENifNOXj431OPLJBOL+/PI8dMas4Eok Sep 4 17:59:41.288526 sshd[5291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:59:41.299495 systemd-logind[1432]: New session 21 of user core. Sep 4 17:59:41.308563 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 4 17:59:44.564184 update_engine[1433]: I0904 17:59:44.564129 1433 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 4 17:59:44.567309 update_engine[1433]: I0904 17:59:44.567278 1433 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 4 17:59:44.572481 update_engine[1433]: I0904 17:59:44.571939 1433 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 4 17:59:44.582523 update_engine[1433]: E0904 17:59:44.582465 1433 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 4 17:59:44.582523 update_engine[1433]: I0904 17:59:44.582522 1433 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Sep 4 17:59:45.114936 sshd[5291]: pam_unix(sshd:session): session closed for user core Sep 4 17:59:45.130480 systemd[1]: sshd@18-172.24.4.18:22-172.24.4.1:35434.service: Deactivated successfully. Sep 4 17:59:45.136387 systemd[1]: session-21.scope: Deactivated successfully. Sep 4 17:59:45.139158 systemd-logind[1432]: Session 21 logged out. Waiting for processes to exit. Sep 4 17:59:45.146836 systemd[1]: Started sshd@19-172.24.4.18:22-172.24.4.1:37110.service - OpenSSH per-connection server daemon (172.24.4.1:37110). Sep 4 17:59:45.161961 systemd-logind[1432]: Removed session 21. Sep 4 17:59:46.567661 sshd[5331]: Accepted publickey for core from 172.24.4.1 port 37110 ssh2: RSA SHA256:JnA7Fh8lVkr6ENifNOXj431OPLJBOL+/PI8dMas4Eok Sep 4 17:59:46.608024 sshd[5331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:59:46.644117 systemd-logind[1432]: New session 22 of user core. Sep 4 17:59:46.653634 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 4 17:59:49.502552 sshd[5331]: pam_unix(sshd:session): session closed for user core Sep 4 17:59:49.517481 systemd[1]: sshd@19-172.24.4.18:22-172.24.4.1:37110.service: Deactivated successfully. Sep 4 17:59:49.522552 systemd[1]: session-22.scope: Deactivated successfully. Sep 4 17:59:49.527931 systemd-logind[1432]: Session 22 logged out. Waiting for processes to exit. Sep 4 17:59:49.533177 systemd[1]: Started sshd@20-172.24.4.18:22-172.24.4.1:37124.service - OpenSSH per-connection server daemon (172.24.4.1:37124). Sep 4 17:59:49.537696 systemd-logind[1432]: Removed session 22. Sep 4 17:59:50.849990 sshd[5353]: Accepted publickey for core from 172.24.4.1 port 37124 ssh2: RSA SHA256:JnA7Fh8lVkr6ENifNOXj431OPLJBOL+/PI8dMas4Eok Sep 4 17:59:50.854491 sshd[5353]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:59:50.867377 systemd-logind[1432]: New session 23 of user core. Sep 4 17:59:50.875605 systemd[1]: Started session-23.scope - Session 23 of User core. 
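[Editor's note: every "Accepted publickey" line in this section shows the same fingerprint (SHA256:JnA7…), so all of these sessions authenticate with one core key. For reference, OpenSSH's SHA256 fingerprint is just the unpadded base64 of a SHA-256 digest over the raw key blob; a sketch that reproduces it from one authorized_keys-style line (hypothetical input, equivalent to ssh-keygen -lf):]

```python
import base64
import hashlib

def sha256_fingerprint(authorized_keys_line: str) -> str:
    # Field 2 of an authorized_keys line is the base64-encoded key blob;
    # OpenSSH fingerprints are base64(SHA-256(blob)) with '=' padding stripped.
    blob = base64.b64decode(authorized_keys_line.split()[1])
    digest = hashlib.sha256(blob).digest()
    return "SHA256:" + base64.b64encode(digest).rstrip(b"=").decode("ascii")

# Fed the core user's public key, this would print the SHA256:JnA7... value seen above.
```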
Sep 4 17:59:52.064569 sshd[5353]: pam_unix(sshd:session): session closed for user core Sep 4 17:59:52.079206 systemd[1]: sshd@20-172.24.4.18:22-172.24.4.1:37124.service: Deactivated successfully. Sep 4 17:59:52.082661 systemd[1]: session-23.scope: Deactivated successfully. Sep 4 17:59:52.084520 systemd-logind[1432]: Session 23 logged out. Waiting for processes to exit. Sep 4 17:59:52.086978 systemd-logind[1432]: Removed session 23. Sep 4 17:59:54.563914 update_engine[1433]: I0904 17:59:54.562796 1433 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 4 17:59:54.563914 update_engine[1433]: I0904 17:59:54.563341 1433 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 4 17:59:54.563914 update_engine[1433]: I0904 17:59:54.563798 1433 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 4 17:59:54.574149 update_engine[1433]: E0904 17:59:54.574015 1433 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 4 17:59:54.574149 update_engine[1433]: I0904 17:59:54.574106 1433 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Sep 4 17:59:57.095679 systemd[1]: Started sshd@21-172.24.4.18:22-172.24.4.1:60164.service - OpenSSH per-connection server daemon (172.24.4.1:60164). Sep 4 17:59:58.213373 sshd[5372]: Accepted publickey for core from 172.24.4.1 port 60164 ssh2: RSA SHA256:JnA7Fh8lVkr6ENifNOXj431OPLJBOL+/PI8dMas4Eok Sep 4 17:59:58.217661 sshd[5372]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:59:58.231514 systemd-logind[1432]: New session 24 of user core. Sep 4 17:59:58.242657 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 4 17:59:59.098095 sshd[5372]: pam_unix(sshd:session): session closed for user core Sep 4 17:59:59.104680 systemd[1]: sshd@21-172.24.4.18:22-172.24.4.1:60164.service: Deactivated successfully. Sep 4 17:59:59.108036 systemd[1]: session-24.scope: Deactivated successfully. Sep 4 17:59:59.110690 systemd-logind[1432]: Session 24 logged out. Waiting for processes to exit. Sep 4 17:59:59.112916 systemd-logind[1432]: Removed session 24. Sep 4 18:00:04.118799 systemd[1]: Started sshd@22-172.24.4.18:22-172.24.4.1:60180.service - OpenSSH per-connection server daemon (172.24.4.1:60180). Sep 4 18:00:04.564848 update_engine[1433]: I0904 18:00:04.564337 1433 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 4 18:00:04.564848 update_engine[1433]: I0904 18:00:04.564747 1433 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 4 18:00:04.567417 update_engine[1433]: I0904 18:00:04.565135 1433 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 4 18:00:04.575614 update_engine[1433]: E0904 18:00:04.575548 1433 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 4 18:00:04.575802 update_engine[1433]: I0904 18:00:04.575644 1433 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Sep 4 18:00:04.575802 update_engine[1433]: I0904 18:00:04.575657 1433 omaha_request_action.cc:617] Omaha request response: Sep 4 18:00:04.575802 update_engine[1433]: E0904 18:00:04.575801 1433 omaha_request_action.cc:636] Omaha request network transfer failed. Sep 4 18:00:04.576726 update_engine[1433]: I0904 18:00:04.575840 1433 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. 
Sep 4 18:00:04.576726 update_engine[1433]: I0904 18:00:04.575851 1433 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 4 18:00:04.576726 update_engine[1433]: I0904 18:00:04.575860 1433 update_attempter.cc:306] Processing Done. Sep 4 18:00:04.576726 update_engine[1433]: E0904 18:00:04.575881 1433 update_attempter.cc:619] Update failed. Sep 4 18:00:04.576726 update_engine[1433]: I0904 18:00:04.575888 1433 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Sep 4 18:00:04.576726 update_engine[1433]: I0904 18:00:04.575896 1433 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Sep 4 18:00:04.576726 update_engine[1433]: I0904 18:00:04.575904 1433 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Sep 4 18:00:04.576726 update_engine[1433]: I0904 18:00:04.576051 1433 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Sep 4 18:00:04.577761 update_engine[1433]: I0904 18:00:04.576788 1433 omaha_request_action.cc:271] Posting an Omaha request to disabled Sep 4 18:00:04.577761 update_engine[1433]: I0904 18:00:04.576810 1433 omaha_request_action.cc:272] Request: Sep 4 18:00:04.577761 update_engine[1433]: Sep 4 18:00:04.577761 update_engine[1433]: Sep 4 18:00:04.577761 update_engine[1433]: Sep 4 18:00:04.577761 update_engine[1433]: Sep 4 18:00:04.577761 update_engine[1433]: Sep 4 18:00:04.577761 update_engine[1433]: Sep 4 18:00:04.577761 update_engine[1433]: I0904 18:00:04.576821 1433 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 4 18:00:04.577761 update_engine[1433]: I0904 18:00:04.577106 1433 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 4 18:00:04.578670 update_engine[1433]: I0904 18:00:04.578634 1433 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 4 18:00:04.579122 locksmithd[1457]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Sep 4 18:00:04.588810 update_engine[1433]: E0904 18:00:04.588670 1433 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 4 18:00:04.588810 update_engine[1433]: I0904 18:00:04.588778 1433 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Sep 4 18:00:04.588810 update_engine[1433]: I0904 18:00:04.588790 1433 omaha_request_action.cc:617] Omaha request response: Sep 4 18:00:04.588810 update_engine[1433]: I0904 18:00:04.588804 1433 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 4 18:00:04.588810 update_engine[1433]: I0904 18:00:04.588811 1433 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 4 18:00:04.588810 update_engine[1433]: I0904 18:00:04.588817 1433 update_attempter.cc:306] Processing Done. Sep 4 18:00:04.588810 update_engine[1433]: I0904 18:00:04.588826 1433 update_attempter.cc:310] Error event sent. 
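[Editor's note: this whole update_engine exchange is a deliberate dead end: the Omaha URL points at the literal host "disabled" (consistent with Flatcar's SERVER=disabled setting in update.conf for switching updates off), so every fetch fails at DNS resolution. The fetcher retries on a roughly 10 s cadence (retries 1-3 at 17:59:34, 17:59:44, 17:59:54), gives up on the fourth failure at 18:00:04, maps error 2000 to kActionCodeOmahaErrorInHTTPResponse (37), posts an error event that fails the same way, and returns to idle. A minimal sketch of that bounded-retry shape, with the attempt count and delay read off the timestamps above rather than from update_engine's source:]

```python
import socket
import time

def fetch_once(host: str) -> bytes:
    # "disabled" is not a resolvable name, so this raises socket.gaierror
    # before any HTTP exchange can happen -- the exact failure in the log.
    socket.getaddrinfo(host, 443)
    return b""

def fetch_with_retries(host: str, retries: int = 3, delay: float = 10.0) -> bytes | None:
    for attempt in range(1, retries + 2):  # one initial try plus `retries` retries
        try:
            return fetch_once(host)
        except socket.gaierror:
            if attempt > retries:
                return None  # "Transfer resulted in an error (0), 0 bytes downloaded"
            print(f"No HTTP response, retry {attempt}")
            time.sleep(delay)
    return None

if fetch_with_retries("disabled") is None:
    print("Omaha request network transfer failed.")  # error event sent, then back to idle
```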
Sep 4 18:00:04.590073 update_engine[1433]: I0904 18:00:04.588840 1433 update_check_scheduler.cc:74] Next update check in 46m25s Sep 4 18:00:04.590148 locksmithd[1457]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Sep 4 18:00:05.503782 sshd[5404]: Accepted publickey for core from 172.24.4.1 port 60180 ssh2: RSA SHA256:JnA7Fh8lVkr6ENifNOXj431OPLJBOL+/PI8dMas4Eok Sep 4 18:00:05.506207 sshd[5404]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 18:00:05.513400 systemd-logind[1432]: New session 25 of user core. Sep 4 18:00:05.519484 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 4 18:00:06.282494 sshd[5404]: pam_unix(sshd:session): session closed for user core Sep 4 18:00:06.288100 systemd[1]: sshd@22-172.24.4.18:22-172.24.4.1:60180.service: Deactivated successfully. Sep 4 18:00:06.293779 systemd[1]: session-25.scope: Deactivated successfully. Sep 4 18:00:06.298544 systemd-logind[1432]: Session 25 logged out. Waiting for processes to exit. Sep 4 18:00:06.300938 systemd-logind[1432]: Removed session 25. Sep 4 18:00:11.315471 systemd[1]: Started sshd@23-172.24.4.18:22-172.24.4.1:58406.service - OpenSSH per-connection server daemon (172.24.4.1:58406). Sep 4 18:00:12.478883 sshd[5444]: Accepted publickey for core from 172.24.4.1 port 58406 ssh2: RSA SHA256:JnA7Fh8lVkr6ENifNOXj431OPLJBOL+/PI8dMas4Eok Sep 4 18:00:12.482157 sshd[5444]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 18:00:12.495741 systemd-logind[1432]: New session 26 of user core. Sep 4 18:00:12.505775 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 4 18:00:13.406718 sshd[5444]: pam_unix(sshd:session): session closed for user core Sep 4 18:00:13.412947 systemd[1]: sshd@23-172.24.4.18:22-172.24.4.1:58406.service: Deactivated successfully. Sep 4 18:00:13.418637 systemd[1]: session-26.scope: Deactivated successfully. Sep 4 18:00:13.422927 systemd-logind[1432]: Session 26 logged out. Waiting for processes to exit. Sep 4 18:00:13.426523 systemd-logind[1432]: Removed session 26. Sep 4 18:00:18.429847 systemd[1]: Started sshd@24-172.24.4.18:22-172.24.4.1:37022.service - OpenSSH per-connection server daemon (172.24.4.1:37022). Sep 4 18:00:19.776017 sshd[5477]: Accepted publickey for core from 172.24.4.1 port 37022 ssh2: RSA SHA256:JnA7Fh8lVkr6ENifNOXj431OPLJBOL+/PI8dMas4Eok Sep 4 18:00:19.779166 sshd[5477]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 18:00:19.791851 systemd-logind[1432]: New session 27 of user core. Sep 4 18:00:19.797543 systemd[1]: Started session-27.scope - Session 27 of User core. Sep 4 18:00:21.017091 sshd[5477]: pam_unix(sshd:session): session closed for user core Sep 4 18:00:21.027868 systemd[1]: sshd@24-172.24.4.18:22-172.24.4.1:37022.service: Deactivated successfully. Sep 4 18:00:21.037445 systemd[1]: session-27.scope: Deactivated successfully. Sep 4 18:00:21.045718 systemd-logind[1432]: Session 27 logged out. Waiting for processes to exit. Sep 4 18:00:21.048751 systemd-logind[1432]: Removed session 27. Sep 4 18:00:26.042898 systemd[1]: Started sshd@25-172.24.4.18:22-172.24.4.1:37940.service - OpenSSH per-connection server daemon (172.24.4.1:37940). 
Sep 4 18:00:27.298838 sshd[5497]: Accepted publickey for core from 172.24.4.1 port 37940 ssh2: RSA SHA256:JnA7Fh8lVkr6ENifNOXj431OPLJBOL+/PI8dMas4Eok Sep 4 18:00:27.305607 sshd[5497]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 18:00:27.320785 systemd-logind[1432]: New session 28 of user core. Sep 4 18:00:27.326458 systemd[1]: Started session-28.scope - Session 28 of User core. Sep 4 18:00:28.168071 sshd[5497]: pam_unix(sshd:session): session closed for user core Sep 4 18:00:28.175557 systemd[1]: sshd@25-172.24.4.18:22-172.24.4.1:37940.service: Deactivated successfully. Sep 4 18:00:28.182603 systemd[1]: session-28.scope: Deactivated successfully. Sep 4 18:00:28.187637 systemd-logind[1432]: Session 28 logged out. Waiting for processes to exit. Sep 4 18:00:28.190103 systemd-logind[1432]: Removed session 28.
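[Editor's note: the back half of this log is otherwise a steady heartbeat of logins: sessions 12 through 28 each follow Accepted publickey → session opened → New session N → session closed → Deactivated → Removed session N. Pairing the opens with the closes is a one-screen job; a sketch against a hypothetical journal.log dump of this journal, with regexes targeting the exact phrasing above (syslog stamps carry no year, so one is pinned for the arithmetic):]

```python
import re
from datetime import datetime

OPEN  = re.compile(r"(\w{3} +\d+ \d+:\d+:\d+).*New session (\d+) of user core")
CLOSE = re.compile(r"(\w{3} +\d+ \d+:\d+:\d+).*Removed session (\d+)")

def ts(stamp: str) -> datetime:
    return datetime.strptime(f"2024 {stamp}", "%Y %b %d %H:%M:%S")

opened: dict[str, datetime] = {}
with open("journal.log") as log:  # hypothetical one-entry-per-line dump of the journal
    for line in log:
        if m := OPEN.search(line):
            opened[m.group(2)] = ts(m.group(1))
        elif (m := CLOSE.search(line)) and m.group(2) in opened:
            duration = ts(m.group(1)) - opened.pop(m.group(2))
            print(f"session {m.group(2)}: {duration}")  # e.g. session 12: 0:00:01
```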