Sep 4 18:06:20.942541 kernel: Linux version 6.6.48-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Sep 4 15:54:07 -00 2024
Sep 4 18:06:20.942568 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=ceda2dd706627da8006bcd6ae77ea155b2a7de6732e2c1c7ab4bed271400663d
Sep 4 18:06:20.942581 kernel: BIOS-provided physical RAM map:
Sep 4 18:06:20.942589 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 4 18:06:20.942597 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 4 18:06:20.942604 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 4 18:06:20.942614 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Sep 4 18:06:20.942622 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Sep 4 18:06:20.942630 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 4 18:06:20.942640 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 4 18:06:20.942648 kernel: NX (Execute Disable) protection: active
Sep 4 18:06:20.942656 kernel: APIC: Static calls initialized
Sep 4 18:06:20.942664 kernel: SMBIOS 2.8 present.
Sep 4 18:06:20.942672 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Sep 4 18:06:20.942682 kernel: Hypervisor detected: KVM
Sep 4 18:06:20.942692 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 4 18:06:20.942700 kernel: kvm-clock: using sched offset of 3993234759 cycles
Sep 4 18:06:20.942710 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 4 18:06:20.942718 kernel: tsc: Detected 1996.249 MHz processor
Sep 4 18:06:20.942727 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 4 18:06:20.942736 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 4 18:06:20.942745 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Sep 4 18:06:20.942754 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Sep 4 18:06:20.942762 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 4 18:06:20.942773 kernel: ACPI: Early table checksum verification disabled
Sep 4 18:06:20.942782 kernel: ACPI: RSDP 0x00000000000F5930 000014 (v00 BOCHS )
Sep 4 18:06:20.942790 kernel: ACPI: RSDT 0x000000007FFE1848 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 18:06:20.942799 kernel: ACPI: FACP 0x000000007FFE172C 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 18:06:20.942808 kernel: ACPI: DSDT 0x000000007FFE0040 0016EC (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 18:06:20.942816 kernel: ACPI: FACS 0x000000007FFE0000 000040
Sep 4 18:06:20.942825 kernel: ACPI: APIC 0x000000007FFE17A0 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 18:06:20.942834 kernel: ACPI: WAET 0x000000007FFE1820 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 18:06:20.942842 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe172c-0x7ffe179f]
Sep 4 18:06:20.942853 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe172b]
Sep 4 18:06:20.942861 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Sep 4 18:06:20.942870 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17a0-0x7ffe181f]
Sep 4 18:06:20.942878 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe1820-0x7ffe1847]
Sep 4 18:06:20.942887 kernel: No NUMA configuration found
Sep 4 18:06:20.942895 kernel: Faking a node at [mem 0x0000000000000000-0x000000007ffdcfff]
Sep 4 18:06:20.942904 kernel: NODE_DATA(0) allocated [mem 0x7ffd7000-0x7ffdcfff]
Sep 4 18:06:20.942916 kernel: Zone ranges:
Sep 4 18:06:20.942927 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 4 18:06:20.942936 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdcfff]
Sep 4 18:06:20.942945 kernel: Normal empty
Sep 4 18:06:20.942954 kernel: Movable zone start for each node
Sep 4 18:06:20.942962 kernel: Early memory node ranges
Sep 4 18:06:20.942971 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 4 18:06:20.942982 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Sep 4 18:06:20.942991 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdcfff]
Sep 4 18:06:20.943000 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 4 18:06:20.943009 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 4 18:06:20.943018 kernel: On node 0, zone DMA32: 35 pages in unavailable ranges
Sep 4 18:06:20.943027 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 4 18:06:20.943036 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 4 18:06:20.943045 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 4 18:06:20.943054 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 4 18:06:20.943065 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 4 18:06:20.943074 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 4 18:06:20.943083 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 4 18:06:20.943092 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 4 18:06:20.943101 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 4 18:06:20.943110 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Sep 4 18:06:20.943119 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 4 18:06:20.943128 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Sep 4 18:06:20.943137 kernel: Booting paravirtualized kernel on KVM
Sep 4 18:06:20.943146 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 4 18:06:20.943157 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Sep 4 18:06:20.943166 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576
Sep 4 18:06:20.943175 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152
Sep 4 18:06:20.943184 kernel: pcpu-alloc: [0] 0 1
Sep 4 18:06:20.943193 kernel: kvm-guest: PV spinlocks disabled, no host support
Sep 4 18:06:20.943204 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=ceda2dd706627da8006bcd6ae77ea155b2a7de6732e2c1c7ab4bed271400663d
Sep 4 18:06:20.943213 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 4 18:06:20.943225 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 4 18:06:20.943234 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 4 18:06:20.943243 kernel: Fallback order for Node 0: 0
Sep 4 18:06:20.943252 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515805
Sep 4 18:06:20.943260 kernel: Policy zone: DMA32
Sep 4 18:06:20.943269 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 4 18:06:20.943279 kernel: Memory: 1971212K/2096620K available (12288K kernel code, 2304K rwdata, 22708K rodata, 42704K init, 2488K bss, 125148K reserved, 0K cma-reserved)
Sep 4 18:06:20.943288 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 4 18:06:20.943297 kernel: ftrace: allocating 37748 entries in 148 pages
Sep 4 18:06:20.943308 kernel: ftrace: allocated 148 pages with 3 groups
Sep 4 18:06:20.943342 kernel: Dynamic Preempt: voluntary
Sep 4 18:06:20.943352 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 4 18:06:20.943362 kernel: rcu: RCU event tracing is enabled.
Sep 4 18:06:20.943371 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 4 18:06:20.943380 kernel: Trampoline variant of Tasks RCU enabled.
Sep 4 18:06:20.943389 kernel: Rude variant of Tasks RCU enabled.
Sep 4 18:06:20.943398 kernel: Tracing variant of Tasks RCU enabled.
Sep 4 18:06:20.943407 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 4 18:06:20.943419 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 4 18:06:20.943428 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Sep 4 18:06:20.943437 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 4 18:06:20.943446 kernel: Console: colour VGA+ 80x25
Sep 4 18:06:20.943455 kernel: printk: console [tty0] enabled
Sep 4 18:06:20.943464 kernel: printk: console [ttyS0] enabled
Sep 4 18:06:20.943473 kernel: ACPI: Core revision 20230628
Sep 4 18:06:20.943482 kernel: APIC: Switch to symmetric I/O mode setup
Sep 4 18:06:20.943491 kernel: x2apic enabled
Sep 4 18:06:20.943500 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 4 18:06:20.943511 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 4 18:06:20.943520 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Sep 4 18:06:20.943529 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249)
Sep 4 18:06:20.943538 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Sep 4 18:06:20.943548 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Sep 4 18:06:20.943557 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 4 18:06:20.943566 kernel: Spectre V2 : Mitigation: Retpolines
Sep 4 18:06:20.943575 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Sep 4 18:06:20.943584 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Sep 4 18:06:20.943595 kernel: Speculative Store Bypass: Vulnerable
Sep 4 18:06:20.943604 kernel: x86/fpu: x87 FPU will use FXSAVE
Sep 4 18:06:20.943613 kernel: Freeing SMP alternatives memory: 32K
Sep 4 18:06:20.943622 kernel: pid_max: default: 32768 minimum: 301
Sep 4 18:06:20.943631 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 4 18:06:20.943640 kernel: landlock: Up and running.
Sep 4 18:06:20.943649 kernel: SELinux: Initializing.
Sep 4 18:06:20.943658 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 4 18:06:20.943676 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 4 18:06:20.943686 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3)
Sep 4 18:06:20.943695 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Sep 4 18:06:20.943707 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Sep 4 18:06:20.943716 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Sep 4 18:06:20.943725 kernel: Performance Events: AMD PMU driver.
Sep 4 18:06:20.943735 kernel: ... version:                0
Sep 4 18:06:20.943744 kernel: ... bit width:              48
Sep 4 18:06:20.943756 kernel: ... generic registers:      4
Sep 4 18:06:20.943765 kernel: ... value mask:             0000ffffffffffff
Sep 4 18:06:20.943774 kernel: ... max period:             00007fffffffffff
Sep 4 18:06:20.943784 kernel: ... fixed-purpose events:   0
Sep 4 18:06:20.943793 kernel: ... event mask:             000000000000000f
Sep 4 18:06:20.943802 kernel: signal: max sigframe size: 1440
Sep 4 18:06:20.943812 kernel: rcu: Hierarchical SRCU implementation.
Sep 4 18:06:20.943821 kernel: rcu: Max phase no-delay instances is 400.
Sep 4 18:06:20.943831 kernel: smp: Bringing up secondary CPUs ...
Sep 4 18:06:20.943840 kernel: smpboot: x86: Booting SMP configuration:
Sep 4 18:06:20.943851 kernel: .... node #0, CPUs: #1
Sep 4 18:06:20.943861 kernel: smp: Brought up 1 node, 2 CPUs
Sep 4 18:06:20.943870 kernel: smpboot: Max logical packages: 2
Sep 4 18:06:20.943879 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS)
Sep 4 18:06:20.943889 kernel: devtmpfs: initialized
Sep 4 18:06:20.943898 kernel: x86/mm: Memory block size: 128MB
Sep 4 18:06:20.943908 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 4 18:06:20.943917 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 4 18:06:20.943927 kernel: pinctrl core: initialized pinctrl subsystem
Sep 4 18:06:20.943939 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 4 18:06:20.943948 kernel: audit: initializing netlink subsys (disabled)
Sep 4 18:06:20.943957 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 4 18:06:20.943967 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 4 18:06:20.943976 kernel: audit: type=2000 audit(1725473180.329:1): state=initialized audit_enabled=0 res=1
Sep 4 18:06:20.943986 kernel: cpuidle: using governor menu
Sep 4 18:06:20.943995 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 4 18:06:20.944005 kernel: dca service started, version 1.12.1
Sep 4 18:06:20.944014 kernel: PCI: Using configuration type 1 for base access
Sep 4 18:06:20.944026 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 4 18:06:20.944035 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 4 18:06:20.944045 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 4 18:06:20.944054 kernel: ACPI: Added _OSI(Module Device)
Sep 4 18:06:20.944063 kernel: ACPI: Added _OSI(Processor Device)
Sep 4 18:06:20.944073 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Sep 4 18:06:20.944082 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 4 18:06:20.944091 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 4 18:06:20.944101 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 4 18:06:20.944112 kernel: ACPI: Interpreter enabled
Sep 4 18:06:20.944121 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 4 18:06:20.944131 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 4 18:06:20.944140 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 4 18:06:20.944150 kernel: PCI: Using E820 reservations for host bridge windows
Sep 4 18:06:20.944159 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Sep 4 18:06:20.944169 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 4 18:06:20.944393 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Sep 4 18:06:20.944513 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Sep 4 18:06:20.944615 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Sep 4 18:06:20.944630 kernel: acpiphp: Slot [3] registered
Sep 4 18:06:20.944640 kernel: acpiphp: Slot [4] registered
Sep 4 18:06:20.944649 kernel: acpiphp: Slot [5] registered
Sep 4 18:06:20.944659 kernel: acpiphp: Slot [6] registered
Sep 4 18:06:20.944668 kernel: acpiphp: Slot [7] registered
Sep 4 18:06:20.944677 kernel: acpiphp: Slot [8] registered
Sep 4 18:06:20.944690 kernel: acpiphp: Slot [9] registered
Sep 4 18:06:20.944700 kernel: acpiphp: Slot [10] registered
Sep 4 18:06:20.944709 kernel: acpiphp: Slot [11] registered
Sep 4 18:06:20.944719 kernel: acpiphp: Slot [12] registered
Sep 4 18:06:20.944728 kernel: acpiphp: Slot [13] registered
Sep 4 18:06:20.944737 kernel: acpiphp: Slot [14] registered
Sep 4 18:06:20.944747 kernel: acpiphp: Slot [15] registered
Sep 4 18:06:20.944756 kernel: acpiphp: Slot [16] registered
Sep 4 18:06:20.944780 kernel: acpiphp: Slot [17] registered
Sep 4 18:06:20.944790 kernel: acpiphp: Slot [18] registered
Sep 4 18:06:20.944802 kernel: acpiphp: Slot [19] registered
Sep 4 18:06:20.944811 kernel: acpiphp: Slot [20] registered
Sep 4 18:06:20.944821 kernel: acpiphp: Slot [21] registered
Sep 4 18:06:20.944830 kernel: acpiphp: Slot [22] registered
Sep 4 18:06:20.944839 kernel: acpiphp: Slot [23] registered
Sep 4 18:06:20.944849 kernel: acpiphp: Slot [24] registered
Sep 4 18:06:20.944858 kernel: acpiphp: Slot [25] registered
Sep 4 18:06:20.944867 kernel: acpiphp: Slot [26] registered
Sep 4 18:06:20.944877 kernel: acpiphp: Slot [27] registered
Sep 4 18:06:20.944888 kernel: acpiphp: Slot [28] registered
Sep 4 18:06:20.944897 kernel: acpiphp: Slot [29] registered
Sep 4 18:06:20.944907 kernel: acpiphp: Slot [30] registered
Sep 4 18:06:20.944916 kernel: acpiphp: Slot [31] registered
Sep 4 18:06:20.944926 kernel: PCI host bridge to bus 0000:00
Sep 4 18:06:20.945034 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 4 18:06:20.945125 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 4 18:06:20.945213 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 4 18:06:20.945305 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Sep 4 18:06:20.945424 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Sep 4 18:06:20.945521 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 4 18:06:20.945639 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Sep 4 18:06:20.945756 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Sep 4 18:06:20.945866 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Sep 4 18:06:20.945972 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f]
Sep 4 18:06:20.946070 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Sep 4 18:06:20.946166 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Sep 4 18:06:20.946263 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Sep 4 18:06:20.949075 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Sep 4 18:06:20.949207 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Sep 4 18:06:20.949310 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Sep 4 18:06:20.949445 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Sep 4 18:06:20.949557 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Sep 4 18:06:20.949662 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Sep 4 18:06:20.949761 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Sep 4 18:06:20.949859 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
Sep 4 18:06:20.949958 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
Sep 4 18:06:20.950054 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 4 18:06:20.950167 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Sep 4 18:06:20.950267 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf]
Sep 4 18:06:20.952480 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
Sep 4 18:06:20.952600 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Sep 4 18:06:20.952706 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
Sep 4 18:06:20.952837 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Sep 4 18:06:20.952957 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Sep 4 18:06:20.953093 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
Sep 4 18:06:20.953214 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Sep 4 18:06:20.953463 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
Sep 4 18:06:20.953584 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff]
Sep 4 18:06:20.953681 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Sep 4 18:06:20.953788 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
Sep 4 18:06:20.953894 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f]
Sep 4 18:06:20.953991 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Sep 4 18:06:20.954007 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 4 18:06:20.954017 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 4 18:06:20.954027 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 4 18:06:20.954037 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 4 18:06:20.954047 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Sep 4 18:06:20.954057 kernel: iommu: Default domain type: Translated
Sep 4 18:06:20.954067 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 4 18:06:20.954081 kernel: PCI: Using ACPI for IRQ routing
Sep 4 18:06:20.954090 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 4 18:06:20.954100 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 4 18:06:20.954110 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Sep 4 18:06:20.954207 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Sep 4 18:06:20.954306 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Sep 4 18:06:20.955459 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 4 18:06:20.955477 kernel: vgaarb: loaded
Sep 4 18:06:20.955486 kernel: clocksource: Switched to clocksource kvm-clock
Sep 4 18:06:20.955501 kernel: VFS: Disk quotas dquot_6.6.0
Sep 4 18:06:20.955510 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 4 18:06:20.955520 kernel: pnp: PnP ACPI init
Sep 4 18:06:20.955621 kernel: pnp 00:03: [dma 2]
Sep 4 18:06:20.955637 kernel: pnp: PnP ACPI: found 5 devices
Sep 4 18:06:20.955647 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 4 18:06:20.955656 kernel: NET: Registered PF_INET protocol family
Sep 4 18:06:20.955665 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 4 18:06:20.955679 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Sep 4 18:06:20.955688 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 4 18:06:20.955697 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 4 18:06:20.955706 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Sep 4 18:06:20.955716 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Sep 4 18:06:20.955725 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 4 18:06:20.955734 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 4 18:06:20.955743 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 4 18:06:20.955752 kernel: NET: Registered PF_XDP protocol family
Sep 4 18:06:20.955843 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 4 18:06:20.955925 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 4 18:06:20.956004 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 4 18:06:20.956085 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Sep 4 18:06:20.956171 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Sep 4 18:06:20.956273 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Sep 4 18:06:20.957508 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Sep 4 18:06:20.957529 kernel: PCI: CLS 0 bytes, default 64
Sep 4 18:06:20.957546 kernel: Initialise system trusted keyrings
Sep 4 18:06:20.957556 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Sep 4 18:06:20.957566 kernel: Key type asymmetric registered
Sep 4 18:06:20.957576 kernel: Asymmetric key parser 'x509' registered
Sep 4 18:06:20.957585 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep 4 18:06:20.957595 kernel: io scheduler mq-deadline registered
Sep 4 18:06:20.957605 kernel: io scheduler kyber registered
Sep 4 18:06:20.957615 kernel: io scheduler bfq registered
Sep 4 18:06:20.957625 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 4 18:06:20.957638 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Sep 4 18:06:20.957648 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Sep 4 18:06:20.957658 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Sep 4 18:06:20.957668 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Sep 4 18:06:20.957677 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 4 18:06:20.957687 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 4 18:06:20.957697 kernel: random: crng init done
Sep 4 18:06:20.957707 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 4 18:06:20.957717 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 4 18:06:20.957730 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 4 18:06:20.957835 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 4 18:06:20.957852 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 4 18:06:20.957939 kernel: rtc_cmos 00:04: registered as rtc0
Sep 4 18:06:20.958029 kernel: rtc_cmos 00:04: setting system clock to 2024-09-04T18:06:20 UTC (1725473180)
Sep 4 18:06:20.958116 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Sep 4 18:06:20.958131 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 4 18:06:20.958145 kernel: NET: Registered PF_INET6 protocol family
Sep 4 18:06:20.958155 kernel: Segment Routing with IPv6
Sep 4 18:06:20.958165 kernel: In-situ OAM (IOAM) with IPv6
Sep 4 18:06:20.958174 kernel: NET: Registered PF_PACKET protocol family
Sep 4 18:06:20.958184 kernel: Key type dns_resolver registered
Sep 4 18:06:20.958194 kernel: IPI shorthand broadcast: enabled
Sep 4 18:06:20.958204 kernel: sched_clock: Marking stable (948008684, 127180505)->(1079896455, -4707266)
Sep 4 18:06:20.958213 kernel: registered taskstats version 1
Sep 4 18:06:20.958223 kernel: Loading compiled-in X.509 certificates
Sep 4 18:06:20.958233 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.48-flatcar: 8669771ab5e11f458b79e6634fe685dacc266b18'
Sep 4 18:06:20.958245 kernel: Key type .fscrypt registered
Sep 4 18:06:20.958254 kernel: Key type fscrypt-provisioning registered
Sep 4 18:06:20.958264 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 4 18:06:20.958273 kernel: ima: Allocated hash algorithm: sha1
Sep 4 18:06:20.958283 kernel: ima: No architecture policies found
Sep 4 18:06:20.958292 kernel: clk: Disabling unused clocks
Sep 4 18:06:20.958303 kernel: Freeing unused kernel image (initmem) memory: 42704K
Sep 4 18:06:20.961354 kernel: Write protecting the kernel read-only data: 36864k
Sep 4 18:06:20.961378 kernel: Freeing unused kernel image (rodata/data gap) memory: 1868K
Sep 4 18:06:20.961388 kernel: Run /init as init process
Sep 4 18:06:20.961398 kernel: with arguments:
Sep 4 18:06:20.961409 kernel: /init
Sep 4 18:06:20.961418 kernel: with environment:
Sep 4 18:06:20.961428 kernel: HOME=/
Sep 4 18:06:20.961438 kernel: TERM=linux
Sep 4 18:06:20.961448 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 4 18:06:20.961462 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 4 18:06:20.961477 systemd[1]: Detected virtualization kvm.
Sep 4 18:06:20.961488 systemd[1]: Detected architecture x86-64.
Sep 4 18:06:20.961499 systemd[1]: Running in initrd.
Sep 4 18:06:20.961509 systemd[1]: No hostname configured, using default hostname.
Sep 4 18:06:20.961519 systemd[1]: Hostname set to .
Sep 4 18:06:20.961530 systemd[1]: Initializing machine ID from VM UUID.
Sep 4 18:06:20.961541 systemd[1]: Queued start job for default target initrd.target.
Sep 4 18:06:20.961554 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 18:06:20.961564 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 18:06:20.961576 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 4 18:06:20.961587 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 18:06:20.961597 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 4 18:06:20.961608 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 4 18:06:20.961620 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 4 18:06:20.961634 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 4 18:06:20.961644 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 18:06:20.961655 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 18:06:20.961665 systemd[1]: Reached target paths.target - Path Units.
Sep 4 18:06:20.961686 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 18:06:20.961699 systemd[1]: Reached target swap.target - Swaps.
Sep 4 18:06:20.961712 systemd[1]: Reached target timers.target - Timer Units.
Sep 4 18:06:20.961723 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 18:06:20.961733 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 18:06:20.961744 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 4 18:06:20.961755 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 4 18:06:20.961766 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 18:06:20.961777 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 18:06:20.961787 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 18:06:20.961800 systemd[1]: Reached target sockets.target - Socket Units.
Sep 4 18:06:20.961811 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 4 18:06:20.961822 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 18:06:20.961835 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 4 18:06:20.961845 systemd[1]: Starting systemd-fsck-usr.service...
Sep 4 18:06:20.961856 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 18:06:20.961867 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 18:06:20.961877 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 18:06:20.961888 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 4 18:06:20.961936 systemd-journald[185]: Collecting audit messages is disabled.
Sep 4 18:06:20.961963 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 18:06:20.961974 systemd[1]: Finished systemd-fsck-usr.service.
Sep 4 18:06:20.961990 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 4 18:06:20.962001 systemd-journald[185]: Journal started
Sep 4 18:06:20.962025 systemd-journald[185]: Runtime Journal (/run/log/journal/7f8af4de858b4b94be67eeca1b00e739) is 4.9M, max 39.3M, 34.4M free.
Sep 4 18:06:20.951103 systemd-modules-load[186]: Inserted module 'overlay'
Sep 4 18:06:20.971368 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 18:06:20.983440 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 4 18:06:21.020993 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 4 18:06:21.021064 kernel: Bridge firewalling registered
Sep 4 18:06:20.996804 systemd-modules-load[186]: Inserted module 'br_netfilter'
Sep 4 18:06:21.022534 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 18:06:21.032536 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 18:06:21.033243 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 4 18:06:21.041668 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 18:06:21.044722 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 18:06:21.046449 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 4 18:06:21.048407 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 4 18:06:21.066236 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 18:06:21.067063 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 4 18:06:21.068698 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 18:06:21.075551 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 4 18:06:21.077436 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 4 18:06:21.087079 dracut-cmdline[218]: dracut-dracut-053 Sep 4 18:06:21.093228 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=ceda2dd706627da8006bcd6ae77ea155b2a7de6732e2c1c7ab4bed271400663d Sep 4 18:06:21.117742 systemd-resolved[220]: Positive Trust Anchors: Sep 4 18:06:21.117774 systemd-resolved[220]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 4 18:06:21.117817 systemd-resolved[220]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 4 18:06:21.120743 systemd-resolved[220]: Defaulting to hostname 'linux'. Sep 4 18:06:21.122086 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 4 18:06:21.123037 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 4 18:06:21.186456 kernel: SCSI subsystem initialized Sep 4 18:06:21.197359 kernel: Loading iSCSI transport class v2.0-870. Sep 4 18:06:21.209984 kernel: iscsi: registered transport (tcp) Sep 4 18:06:21.233573 kernel: iscsi: registered transport (qla4xxx) Sep 4 18:06:21.233644 kernel: QLogic iSCSI HBA Driver Sep 4 18:06:21.297380 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 4 18:06:21.306627 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 4 18:06:21.362994 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Sep 4 18:06:21.363674 kernel: device-mapper: uevent: version 1.0.3 Sep 4 18:06:21.363710 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 4 18:06:21.413376 kernel: raid6: sse2x4 gen() 12405 MB/s Sep 4 18:06:21.430369 kernel: raid6: sse2x2 gen() 14056 MB/s Sep 4 18:06:21.447444 kernel: raid6: sse2x1 gen() 9279 MB/s Sep 4 18:06:21.447528 kernel: raid6: using algorithm sse2x2 gen() 14056 MB/s Sep 4 18:06:21.465611 kernel: raid6: .... xor() 8628 MB/s, rmw enabled Sep 4 18:06:21.465689 kernel: raid6: using ssse3x2 recovery algorithm Sep 4 18:06:21.520494 kernel: xor: measuring software checksum speed Sep 4 18:06:21.524435 kernel: prefetch64-sse : 7173 MB/sec Sep 4 18:06:21.528065 kernel: generic_sse : 6762 MB/sec Sep 4 18:06:21.528168 kernel: xor: using function: prefetch64-sse (7173 MB/sec) Sep 4 18:06:21.719385 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 4 18:06:21.736964 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 4 18:06:21.743529 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 18:06:21.776629 systemd-udevd[403]: Using default interface naming scheme 'v255'. Sep 4 18:06:21.781729 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 18:06:21.795695 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 4 18:06:21.817071 dracut-pre-trigger[417]: rd.md=0: removing MD RAID activation Sep 4 18:06:21.867393 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 4 18:06:21.876603 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 4 18:06:21.946245 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 18:06:21.952500 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 4 18:06:21.998400 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. 
Sep 4 18:06:22.002031 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 4 18:06:22.003630 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 18:06:22.004964 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 4 18:06:22.010642 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 4 18:06:22.027518 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 4 18:06:22.046364 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues Sep 4 18:06:22.056410 kernel: virtio_blk virtio2: [vda] 41943040 512-byte logical blocks (21.5 GB/20.0 GiB) Sep 4 18:06:22.074140 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 4 18:06:22.074283 kernel: GPT:17805311 != 41943039 Sep 4 18:06:22.074298 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 4 18:06:22.074311 kernel: GPT:17805311 != 41943039 Sep 4 18:06:22.074357 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 4 18:06:22.074370 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 4 18:06:22.073956 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 4 18:06:22.074083 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 18:06:22.078563 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 18:06:22.079104 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 18:06:22.079230 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 18:06:22.079742 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 18:06:22.089624 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 18:06:22.093338 kernel: libata version 3.00 loaded. 
Sep 4 18:06:22.101374 kernel: ata_piix 0000:00:01.1: version 2.13 Sep 4 18:06:22.103712 kernel: scsi host0: ata_piix Sep 4 18:06:22.103960 kernel: scsi host1: ata_piix Sep 4 18:06:22.104362 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 Sep 4 18:06:22.107152 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 Sep 4 18:06:22.120363 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (463) Sep 4 18:06:22.141337 kernel: BTRFS: device fsid 0dc40443-7f77-4fa7-b5e4-579d4bba0772 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (460) Sep 4 18:06:22.147052 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 4 18:06:22.181660 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 18:06:22.189794 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 4 18:06:22.201389 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 4 18:06:22.206172 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 4 18:06:22.206839 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 4 18:06:22.212498 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 4 18:06:22.215503 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 18:06:22.226998 disk-uuid[500]: Primary Header is updated. Sep 4 18:06:22.226998 disk-uuid[500]: Secondary Entries is updated. Sep 4 18:06:22.226998 disk-uuid[500]: Secondary Header is updated. Sep 4 18:06:22.234899 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 4 18:06:22.235498 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Sep 4 18:06:22.240338 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 4 18:06:23.254431 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 4 18:06:23.255734 disk-uuid[507]: The operation has completed successfully. Sep 4 18:06:23.330979 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 4 18:06:23.331230 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 4 18:06:23.362457 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 4 18:06:23.372710 sh[523]: Success Sep 4 18:06:23.399498 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3" Sep 4 18:06:23.482183 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 4 18:06:23.500581 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 4 18:06:23.508808 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 4 18:06:23.534783 kernel: BTRFS info (device dm-0): first mount of filesystem 0dc40443-7f77-4fa7-b5e4-579d4bba0772 Sep 4 18:06:23.534914 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 4 18:06:23.537351 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 4 18:06:23.537419 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 4 18:06:23.539436 kernel: BTRFS info (device dm-0): using free space tree Sep 4 18:06:23.553602 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 4 18:06:23.556509 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 4 18:06:23.561678 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 4 18:06:23.565643 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Sep 4 18:06:23.576419 kernel: BTRFS info (device vda6): first mount of filesystem b2463ce1-c756-4e78-b7f2-401dad24571d Sep 4 18:06:23.576470 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 4 18:06:23.576484 kernel: BTRFS info (device vda6): using free space tree Sep 4 18:06:23.581335 kernel: BTRFS info (device vda6): auto enabling async discard Sep 4 18:06:23.599866 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 4 18:06:23.604367 kernel: BTRFS info (device vda6): last unmount of filesystem b2463ce1-c756-4e78-b7f2-401dad24571d Sep 4 18:06:23.622377 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 4 18:06:23.631764 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 4 18:06:23.679334 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 4 18:06:23.693580 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 4 18:06:23.713729 systemd-networkd[705]: lo: Link UP Sep 4 18:06:23.713739 systemd-networkd[705]: lo: Gained carrier Sep 4 18:06:23.715065 systemd-networkd[705]: Enumeration completed Sep 4 18:06:23.715936 systemd-networkd[705]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 18:06:23.715940 systemd-networkd[705]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 4 18:06:23.716574 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 4 18:06:23.717505 systemd[1]: Reached target network.target - Network. Sep 4 18:06:23.717619 systemd-networkd[705]: eth0: Link UP Sep 4 18:06:23.717623 systemd-networkd[705]: eth0: Gained carrier Sep 4 18:06:23.717638 systemd-networkd[705]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Sep 4 18:06:23.745506 systemd-networkd[705]: eth0: DHCPv4 address 172.24.4.134/24, gateway 172.24.4.1 acquired from 172.24.4.1 Sep 4 18:06:23.815750 ignition[631]: Ignition 2.19.0 Sep 4 18:06:23.815765 ignition[631]: Stage: fetch-offline Sep 4 18:06:23.815810 ignition[631]: no configs at "/usr/lib/ignition/base.d" Sep 4 18:06:23.815821 ignition[631]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Sep 4 18:06:23.815934 ignition[631]: parsed url from cmdline: "" Sep 4 18:06:23.818246 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 4 18:06:23.815939 ignition[631]: no config URL provided Sep 4 18:06:23.815946 ignition[631]: reading system config file "/usr/lib/ignition/user.ign" Sep 4 18:06:23.815955 ignition[631]: no config at "/usr/lib/ignition/user.ign" Sep 4 18:06:23.815961 ignition[631]: failed to fetch config: resource requires networking Sep 4 18:06:23.816184 ignition[631]: Ignition finished successfully Sep 4 18:06:23.828556 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Sep 4 18:06:23.844239 ignition[717]: Ignition 2.19.0 Sep 4 18:06:23.844254 ignition[717]: Stage: fetch Sep 4 18:06:23.844498 ignition[717]: no configs at "/usr/lib/ignition/base.d" Sep 4 18:06:23.844510 ignition[717]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Sep 4 18:06:23.844603 ignition[717]: parsed url from cmdline: "" Sep 4 18:06:23.844607 ignition[717]: no config URL provided Sep 4 18:06:23.844613 ignition[717]: reading system config file "/usr/lib/ignition/user.ign" Sep 4 18:06:23.844621 ignition[717]: no config at "/usr/lib/ignition/user.ign" Sep 4 18:06:23.844729 ignition[717]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Sep 4 18:06:23.844768 ignition[717]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... 
Sep 4 18:06:23.844776 ignition[717]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Sep 4 18:06:24.166577 ignition[717]: GET result: OK Sep 4 18:06:24.166776 ignition[717]: parsing config with SHA512: 5e2706953e97fdcf578b3b8a6644d98fb8883cb7d76fb12afca1ea165cefb07a43ee6d9407871f7e26c5e8aa5ab28c5dba37d272327d7ad9904beafb8fdfac16 Sep 4 18:06:24.176619 unknown[717]: fetched base config from "system" Sep 4 18:06:24.176649 unknown[717]: fetched base config from "system" Sep 4 18:06:24.177753 ignition[717]: fetch: fetch complete Sep 4 18:06:24.176664 unknown[717]: fetched user config from "openstack" Sep 4 18:06:24.177765 ignition[717]: fetch: fetch passed Sep 4 18:06:24.181246 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Sep 4 18:06:24.177856 ignition[717]: Ignition finished successfully Sep 4 18:06:24.191689 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 4 18:06:24.238901 ignition[723]: Ignition 2.19.0 Sep 4 18:06:24.238919 ignition[723]: Stage: kargs Sep 4 18:06:24.239391 ignition[723]: no configs at "/usr/lib/ignition/base.d" Sep 4 18:06:24.239421 ignition[723]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Sep 4 18:06:24.242181 ignition[723]: kargs: kargs passed Sep 4 18:06:24.244097 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 4 18:06:24.242296 ignition[723]: Ignition finished successfully Sep 4 18:06:24.259060 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 4 18:06:24.288069 ignition[729]: Ignition 2.19.0 Sep 4 18:06:24.288089 ignition[729]: Stage: disks Sep 4 18:06:24.288469 ignition[729]: no configs at "/usr/lib/ignition/base.d" Sep 4 18:06:24.292477 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 4 18:06:24.288492 ignition[729]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Sep 4 18:06:24.295132 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
Sep 4 18:06:24.290302 ignition[729]: disks: disks passed Sep 4 18:06:24.297757 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 4 18:06:24.290443 ignition[729]: Ignition finished successfully Sep 4 18:06:24.300186 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 4 18:06:24.302136 systemd[1]: Reached target sysinit.target - System Initialization. Sep 4 18:06:24.304562 systemd[1]: Reached target basic.target - Basic System. Sep 4 18:06:24.313606 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 4 18:06:24.357628 systemd-fsck[737]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Sep 4 18:06:24.368257 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 4 18:06:24.381634 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 4 18:06:24.523418 kernel: EXT4-fs (vda9): mounted filesystem bdbe0f61-2675-40b7-b9ae-5653402e9b23 r/w with ordered data mode. Quota mode: none. Sep 4 18:06:24.525797 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 4 18:06:24.528131 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 4 18:06:24.536489 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 4 18:06:24.540310 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 4 18:06:24.541743 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 4 18:06:24.552542 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Sep 4 18:06:24.555466 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). 
Sep 4 18:06:24.561500 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (745) Sep 4 18:06:24.556528 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 4 18:06:24.558300 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 4 18:06:24.567588 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 4 18:06:24.569784 kernel: BTRFS info (device vda6): first mount of filesystem b2463ce1-c756-4e78-b7f2-401dad24571d Sep 4 18:06:24.575103 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 4 18:06:24.575130 kernel: BTRFS info (device vda6): using free space tree Sep 4 18:06:24.607413 kernel: BTRFS info (device vda6): auto enabling async discard Sep 4 18:06:24.615931 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 4 18:06:24.715499 initrd-setup-root[772]: cut: /sysroot/etc/passwd: No such file or directory Sep 4 18:06:24.726929 initrd-setup-root[780]: cut: /sysroot/etc/group: No such file or directory Sep 4 18:06:24.734765 initrd-setup-root[787]: cut: /sysroot/etc/shadow: No such file or directory Sep 4 18:06:24.743237 initrd-setup-root[794]: cut: /sysroot/etc/gshadow: No such file or directory Sep 4 18:06:24.839649 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 4 18:06:24.844418 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 4 18:06:24.847503 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 4 18:06:24.854990 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Sep 4 18:06:24.857407 kernel: BTRFS info (device vda6): last unmount of filesystem b2463ce1-c756-4e78-b7f2-401dad24571d Sep 4 18:06:24.888360 ignition[861]: INFO : Ignition 2.19.0 Sep 4 18:06:24.888360 ignition[861]: INFO : Stage: mount Sep 4 18:06:24.892418 ignition[861]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 18:06:24.892418 ignition[861]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Sep 4 18:06:24.892418 ignition[861]: INFO : mount: mount passed Sep 4 18:06:24.892418 ignition[861]: INFO : Ignition finished successfully Sep 4 18:06:24.892158 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 4 18:06:24.894684 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 4 18:06:25.280987 systemd-networkd[705]: eth0: Gained IPv6LL Sep 4 18:06:31.821756 coreos-metadata[747]: Sep 04 18:06:31.821 WARN failed to locate config-drive, using the metadata service API instead Sep 4 18:06:31.862250 coreos-metadata[747]: Sep 04 18:06:31.862 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Sep 4 18:06:31.876633 coreos-metadata[747]: Sep 04 18:06:31.876 INFO Fetch successful Sep 4 18:06:31.878073 coreos-metadata[747]: Sep 04 18:06:31.877 INFO wrote hostname ci-4054-1-0-c-4d101ae770.novalocal to /sysroot/etc/hostname Sep 4 18:06:31.880888 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Sep 4 18:06:31.881144 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Sep 4 18:06:31.893549 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 4 18:06:31.926658 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Sep 4 18:06:31.954389 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (878) Sep 4 18:06:31.960198 kernel: BTRFS info (device vda6): first mount of filesystem b2463ce1-c756-4e78-b7f2-401dad24571d Sep 4 18:06:31.960298 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 4 18:06:31.963479 kernel: BTRFS info (device vda6): using free space tree Sep 4 18:06:31.973381 kernel: BTRFS info (device vda6): auto enabling async discard Sep 4 18:06:31.980062 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 4 18:06:32.027930 ignition[896]: INFO : Ignition 2.19.0 Sep 4 18:06:32.027930 ignition[896]: INFO : Stage: files Sep 4 18:06:32.031176 ignition[896]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 18:06:32.031176 ignition[896]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Sep 4 18:06:32.031176 ignition[896]: DEBUG : files: compiled without relabeling support, skipping Sep 4 18:06:32.036938 ignition[896]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 4 18:06:32.036938 ignition[896]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 4 18:06:32.040680 ignition[896]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 4 18:06:32.040680 ignition[896]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 4 18:06:32.040680 ignition[896]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 4 18:06:32.039016 unknown[896]: wrote ssh authorized keys file for user: core Sep 4 18:06:32.047955 ignition[896]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 4 18:06:32.047955 ignition[896]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Sep 4 18:06:32.750252 ignition[896]: 
INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 4 18:06:33.078414 ignition[896]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 4 18:06:33.078414 ignition[896]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 4 18:06:33.078414 ignition[896]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 4 18:06:33.078414 ignition[896]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 4 18:06:33.078414 ignition[896]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 4 18:06:33.078414 ignition[896]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 4 18:06:33.078414 ignition[896]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 4 18:06:33.078414 ignition[896]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 4 18:06:33.078414 ignition[896]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 4 18:06:33.078414 ignition[896]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 4 18:06:33.078414 ignition[896]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 4 18:06:33.078414 ignition[896]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Sep 4 
18:06:33.090000 ignition[896]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Sep 4 18:06:33.090000 ignition[896]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Sep 4 18:06:33.090000 ignition[896]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Sep 4 18:06:33.597861 ignition[896]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 4 18:06:35.381174 ignition[896]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Sep 4 18:06:35.381174 ignition[896]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 4 18:06:35.384522 ignition[896]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 4 18:06:35.384522 ignition[896]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 4 18:06:35.384522 ignition[896]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 4 18:06:35.384522 ignition[896]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Sep 4 18:06:35.384522 ignition[896]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Sep 4 18:06:35.384522 ignition[896]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 4 18:06:35.384522 ignition[896]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 4 
18:06:35.384522 ignition[896]: INFO : files: files passed Sep 4 18:06:35.384522 ignition[896]: INFO : Ignition finished successfully Sep 4 18:06:35.385024 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 4 18:06:35.392538 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 4 18:06:35.396475 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 4 18:06:35.400236 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 4 18:06:35.400386 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 4 18:06:35.409755 initrd-setup-root-after-ignition[925]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 4 18:06:35.409755 initrd-setup-root-after-ignition[925]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 4 18:06:35.411900 initrd-setup-root-after-ignition[929]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 4 18:06:35.412399 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 4 18:06:35.413689 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 4 18:06:35.421492 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 4 18:06:35.444503 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 4 18:06:35.444613 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 4 18:06:35.446231 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 4 18:06:35.447201 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 4 18:06:35.448441 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 4 18:06:35.454519 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... 
Sep 4 18:06:35.466646 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 4 18:06:35.480478 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 4 18:06:35.490369 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 4 18:06:35.491730 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 18:06:35.493037 systemd[1]: Stopped target timers.target - Timer Units. Sep 4 18:06:35.493595 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 4 18:06:35.493724 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 4 18:06:35.495206 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 4 18:06:35.495901 systemd[1]: Stopped target basic.target - Basic System. Sep 4 18:06:35.497065 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 4 18:06:35.498198 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 4 18:06:35.499227 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 4 18:06:35.500389 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 4 18:06:35.501593 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 4 18:06:35.502854 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 4 18:06:35.504150 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 4 18:06:35.505487 systemd[1]: Stopped target swap.target - Swaps. Sep 4 18:06:35.506656 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 4 18:06:35.506789 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 4 18:06:35.508125 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 4 18:06:35.508995 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Sep 4 18:06:35.510146 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 4 18:06:35.510262 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 18:06:35.511851 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 4 18:06:35.511964 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 4 18:06:35.513034 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 4 18:06:35.513171 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 4 18:06:35.513964 systemd[1]: ignition-files.service: Deactivated successfully. Sep 4 18:06:35.514135 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 4 18:06:35.521834 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 4 18:06:35.525611 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 4 18:06:35.526208 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 4 18:06:35.526416 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 18:06:35.532047 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 4 18:06:35.532191 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 4 18:06:35.537082 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 4 18:06:35.537772 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Sep 4 18:06:35.544284 ignition[949]: INFO : Ignition 2.19.0
Sep 4 18:06:35.545392 ignition[949]: INFO : Stage: umount
Sep 4 18:06:35.545896 ignition[949]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 18:06:35.545896 ignition[949]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Sep 4 18:06:35.547228 ignition[949]: INFO : umount: umount passed
Sep 4 18:06:35.547228 ignition[949]: INFO : Ignition finished successfully
Sep 4 18:06:35.549506 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 4 18:06:35.550196 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 4 18:06:35.551430 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 4 18:06:35.551477 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 4 18:06:35.552437 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 4 18:06:35.552478 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 4 18:06:35.553757 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 4 18:06:35.553812 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 4 18:06:35.554879 systemd[1]: Stopped target network.target - Network.
Sep 4 18:06:35.555870 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 4 18:06:35.555914 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 18:06:35.556934 systemd[1]: Stopped target paths.target - Path Units.
Sep 4 18:06:35.557930 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 4 18:06:35.561370 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 18:06:35.562421 systemd[1]: Stopped target slices.target - Slice Units.
Sep 4 18:06:35.563662 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 4 18:06:35.564711 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 4 18:06:35.564764 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 18:06:35.565734 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 4 18:06:35.565768 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 18:06:35.566738 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 4 18:06:35.566783 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 4 18:06:35.567800 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 4 18:06:35.567840 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 4 18:06:35.568917 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 4 18:06:35.570145 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 4 18:06:35.572389 systemd-networkd[705]: eth0: DHCPv6 lease lost
Sep 4 18:06:35.574462 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 4 18:06:35.574571 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 4 18:06:35.575355 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 4 18:06:35.575390 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 18:06:35.583550 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 4 18:06:35.585640 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 4 18:06:35.585698 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 18:06:35.586300 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 18:06:35.587048 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 4 18:06:35.589432 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 4 18:06:35.602586 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 4 18:06:35.603350 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 18:06:35.606499 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 4 18:06:35.606624 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 4 18:06:35.610570 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 4 18:06:35.611873 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 4 18:06:35.611926 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 4 18:06:35.613456 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 4 18:06:35.613493 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 18:06:35.617927 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 4 18:06:35.617982 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 18:06:35.620281 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 4 18:06:35.620341 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 4 18:06:35.622181 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 4 18:06:35.622226 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 18:06:35.631462 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 4 18:06:35.633677 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 4 18:06:35.633759 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 4 18:06:35.634519 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 4 18:06:35.634569 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 4 18:06:35.635129 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 4 18:06:35.635174 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 18:06:35.636667 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 4 18:06:35.636740 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 18:06:35.637574 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 4 18:06:35.637615 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 18:06:35.638712 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 4 18:06:35.638753 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 18:06:35.640010 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 18:06:35.640048 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 18:06:35.641589 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 4 18:06:35.642352 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 4 18:06:35.873937 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 4 18:06:35.874242 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 4 18:06:35.877741 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 4 18:06:35.879591 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 4 18:06:35.879717 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 4 18:06:35.889646 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 4 18:06:35.908656 systemd[1]: Switching root.
Sep 4 18:06:35.960004 systemd-journald[185]: Journal stopped
Sep 4 18:06:37.449676 systemd-journald[185]: Received SIGTERM from PID 1 (systemd).
Sep 4 18:06:37.449742 kernel: SELinux: policy capability network_peer_controls=1
Sep 4 18:06:37.449761 kernel: SELinux: policy capability open_perms=1
Sep 4 18:06:37.449774 kernel: SELinux: policy capability extended_socket_class=1
Sep 4 18:06:37.449792 kernel: SELinux: policy capability always_check_network=0
Sep 4 18:06:37.449804 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 4 18:06:37.449817 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 4 18:06:37.449833 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 4 18:06:37.449845 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 4 18:06:37.449858 kernel: audit: type=1403 audit(1725473196.455:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 4 18:06:37.449872 systemd[1]: Successfully loaded SELinux policy in 81.568ms.
Sep 4 18:06:37.449895 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.125ms.
Sep 4 18:06:37.449910 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 4 18:06:37.449923 systemd[1]: Detected virtualization kvm.
Sep 4 18:06:37.449937 systemd[1]: Detected architecture x86-64.
Sep 4 18:06:37.449952 systemd[1]: Detected first boot.
Sep 4 18:06:37.449969 systemd[1]: Hostname set to .
Sep 4 18:06:37.449982 systemd[1]: Initializing machine ID from VM UUID.
Sep 4 18:06:37.449996 zram_generator::config[990]: No configuration found.
Sep 4 18:06:37.450010 systemd[1]: Populated /etc with preset unit settings.
Sep 4 18:06:37.450024 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 4 18:06:37.450038 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 4 18:06:37.450057 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 4 18:06:37.450086 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 4 18:06:37.450104 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 4 18:06:37.450118 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 4 18:06:37.450131 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 4 18:06:37.450145 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 4 18:06:37.450158 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 4 18:06:37.450172 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 4 18:06:37.450185 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 4 18:06:37.450202 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 18:06:37.450215 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 18:06:37.450229 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 4 18:06:37.450243 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 4 18:06:37.450256 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 4 18:06:37.450276 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 18:06:37.450289 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 4 18:06:37.450303 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 18:06:37.451357 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 4 18:06:37.451381 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 4 18:06:37.451397 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 4 18:06:37.451417 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 4 18:06:37.451439 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 18:06:37.451455 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 18:06:37.451469 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 18:06:37.451482 systemd[1]: Reached target swap.target - Swaps.
Sep 4 18:06:37.451502 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 4 18:06:37.451523 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 4 18:06:37.451543 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 18:06:37.451557 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 18:06:37.451575 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 18:06:37.451595 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 4 18:06:37.451615 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 4 18:06:37.451637 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 4 18:06:37.451655 systemd[1]: Mounting media.mount - External Media Directory...
Sep 4 18:06:37.451681 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 18:06:37.451701 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 4 18:06:37.451722 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 4 18:06:37.451738 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 4 18:06:37.451753 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 4 18:06:37.451767 systemd[1]: Reached target machines.target - Containers.
Sep 4 18:06:37.451780 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 4 18:06:37.451793 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 18:06:37.451811 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 18:06:37.451825 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 4 18:06:37.451838 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 18:06:37.451851 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 4 18:06:37.451864 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 18:06:37.451878 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 4 18:06:37.451891 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 18:06:37.451905 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 4 18:06:37.451918 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 4 18:06:37.451933 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 4 18:06:37.451947 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 4 18:06:37.451960 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 4 18:06:37.451973 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 18:06:37.451986 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 18:06:37.451999 kernel: fuse: init (API version 7.39)
Sep 4 18:06:37.452012 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 4 18:06:37.452026 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 4 18:06:37.452039 kernel: loop: module loaded
Sep 4 18:06:37.452054 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 18:06:37.452068 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 4 18:06:37.452081 systemd[1]: Stopped verity-setup.service.
Sep 4 18:06:37.452094 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 18:06:37.452109 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 4 18:06:37.452128 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 4 18:06:37.452148 systemd[1]: Mounted media.mount - External Media Directory.
Sep 4 18:06:37.452169 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 4 18:06:37.452188 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 4 18:06:37.452202 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 4 18:06:37.452217 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 18:06:37.452237 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 4 18:06:37.452259 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 4 18:06:37.452280 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 18:06:37.452294 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 18:06:37.452311 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 18:06:37.452352 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 18:06:37.452365 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 4 18:06:37.452379 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 4 18:06:37.452395 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 18:06:37.452409 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 18:06:37.452422 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 18:06:37.452436 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 4 18:06:37.452469 systemd-journald[1071]: Collecting audit messages is disabled.
Sep 4 18:06:37.452495 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 4 18:06:37.452509 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 4 18:06:37.452530 systemd-journald[1071]: Journal started
Sep 4 18:06:37.452567 systemd-journald[1071]: Runtime Journal (/run/log/journal/7f8af4de858b4b94be67eeca1b00e739) is 4.9M, max 39.3M, 34.4M free.
Sep 4 18:06:37.111479 systemd[1]: Queued start job for default target multi-user.target.
Sep 4 18:06:37.134574 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 4 18:06:37.135029 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 4 18:06:37.463364 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 4 18:06:37.476765 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 4 18:06:37.476825 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 4 18:06:37.480004 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 18:06:37.482655 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Sep 4 18:06:37.486392 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 4 18:06:37.498358 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 4 18:06:37.507389 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 18:06:37.522296 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 4 18:06:37.522379 kernel: ACPI: bus type drm_connector registered
Sep 4 18:06:37.526356 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 4 18:06:37.529391 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 4 18:06:37.532363 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 4 18:06:37.542352 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 18:06:37.555428 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 4 18:06:37.568492 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 4 18:06:37.577991 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 18:06:37.577532 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 4 18:06:37.580662 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 4 18:06:37.580832 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 4 18:06:37.581717 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 18:06:37.582490 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 4 18:06:37.583173 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 4 18:06:37.584438 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 4 18:06:37.585676 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 4 18:06:37.594351 kernel: loop0: detected capacity change from 0 to 89336
Sep 4 18:06:37.622612 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 4 18:06:37.628811 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 4 18:06:37.636555 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Sep 4 18:06:37.638646 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 4 18:06:37.641136 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 18:06:37.650953 systemd-journald[1071]: Time spent on flushing to /var/log/journal/7f8af4de858b4b94be67eeca1b00e739 is 42.158ms for 945 entries.
Sep 4 18:06:37.650953 systemd-journald[1071]: System Journal (/var/log/journal/7f8af4de858b4b94be67eeca1b00e739) is 8.0M, max 584.8M, 576.8M free.
Sep 4 18:06:37.705653 systemd-journald[1071]: Received client request to flush runtime journal.
Sep 4 18:06:37.705707 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 4 18:06:37.705728 kernel: loop1: detected capacity change from 0 to 8
Sep 4 18:06:37.676561 udevadm[1133]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Sep 4 18:06:37.708826 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 4 18:06:37.732388 kernel: loop2: detected capacity change from 0 to 211296
Sep 4 18:06:37.733611 systemd-tmpfiles[1105]: ACLs are not supported, ignoring.
Sep 4 18:06:37.733630 systemd-tmpfiles[1105]: ACLs are not supported, ignoring.
Sep 4 18:06:37.742942 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 18:06:37.753285 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 4 18:06:37.766206 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 4 18:06:37.767868 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Sep 4 18:06:37.809123 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 4 18:06:37.817347 kernel: loop3: detected capacity change from 0 to 140728
Sep 4 18:06:37.817615 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 4 18:06:37.854800 systemd-tmpfiles[1146]: ACLs are not supported, ignoring.
Sep 4 18:06:37.854821 systemd-tmpfiles[1146]: ACLs are not supported, ignoring.
Sep 4 18:06:37.866767 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 18:06:37.884333 kernel: loop4: detected capacity change from 0 to 89336
Sep 4 18:06:37.910361 kernel: loop5: detected capacity change from 0 to 8
Sep 4 18:06:37.922354 kernel: loop6: detected capacity change from 0 to 211296
Sep 4 18:06:37.968356 kernel: loop7: detected capacity change from 0 to 140728
Sep 4 18:06:38.045187 (sd-merge)[1151]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Sep 4 18:06:38.046163 (sd-merge)[1151]: Merged extensions into '/usr'.
Sep 4 18:06:38.052506 systemd[1]: Reloading requested from client PID 1104 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 4 18:06:38.052615 systemd[1]: Reloading...
Sep 4 18:06:38.152343 zram_generator::config[1173]: No configuration found.
Sep 4 18:06:38.439519 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 18:06:38.528359 systemd[1]: Reloading finished in 475 ms.
Sep 4 18:06:38.548275 ldconfig[1100]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 4 18:06:38.562411 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 4 18:06:38.563289 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 4 18:06:38.568158 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 4 18:06:38.574519 systemd[1]: Starting ensure-sysext.service...
Sep 4 18:06:38.578546 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 4 18:06:38.583511 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 18:06:38.587526 systemd[1]: Reloading requested from client PID 1232 ('systemctl') (unit ensure-sysext.service)...
Sep 4 18:06:38.587537 systemd[1]: Reloading...
Sep 4 18:06:38.615921 systemd-tmpfiles[1233]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 4 18:06:38.616656 systemd-tmpfiles[1233]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 4 18:06:38.619034 systemd-tmpfiles[1233]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 4 18:06:38.619203 systemd-udevd[1234]: Using default interface naming scheme 'v255'.
Sep 4 18:06:38.620630 systemd-tmpfiles[1233]: ACLs are not supported, ignoring.
Sep 4 18:06:38.620711 systemd-tmpfiles[1233]: ACLs are not supported, ignoring.
Sep 4 18:06:38.623826 systemd-tmpfiles[1233]: Detected autofs mount point /boot during canonicalization of boot.
Sep 4 18:06:38.623839 systemd-tmpfiles[1233]: Skipping /boot
Sep 4 18:06:38.635021 systemd-tmpfiles[1233]: Detected autofs mount point /boot during canonicalization of boot.
Sep 4 18:06:38.635038 systemd-tmpfiles[1233]: Skipping /boot
Sep 4 18:06:38.713870 zram_generator::config[1273]: No configuration found.
Sep 4 18:06:38.758291 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1262)
Sep 4 18:06:38.765806 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1262)
Sep 4 18:06:38.800338 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1252)
Sep 4 18:06:38.841368 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Sep 4 18:06:38.852403 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Sep 4 18:06:38.869359 kernel: ACPI: button: Power Button [PWRF]
Sep 4 18:06:38.900361 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Sep 4 18:06:38.927354 kernel: mousedev: PS/2 mouse device common for all mice
Sep 4 18:06:38.942555 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Sep 4 18:06:38.942644 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Sep 4 18:06:38.947339 kernel: Console: switching to colour dummy device 80x25
Sep 4 18:06:38.950747 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Sep 4 18:06:38.950793 kernel: [drm] features: -context_init
Sep 4 18:06:38.954336 kernel: [drm] number of scanouts: 1
Sep 4 18:06:38.955790 kernel: [drm] number of cap sets: 0
Sep 4 18:06:38.955828 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Sep 4 18:06:38.969343 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Sep 4 18:06:38.973350 kernel: Console: switching to colour frame buffer device 128x48
Sep 4 18:06:38.975832 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 18:06:38.977346 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Sep 4 18:06:39.052776 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 4 18:06:39.053104 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 4 18:06:39.055607 systemd[1]: Reloading finished in 467 ms.
Sep 4 18:06:39.074101 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 18:06:39.077103 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 18:06:39.108535 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 4 18:06:39.113475 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 4 18:06:39.124782 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 4 18:06:39.128492 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 4 18:06:39.132626 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 4 18:06:39.138531 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 4 18:06:39.148604 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 4 18:06:39.159075 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 18:06:39.170103 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 18:06:39.170844 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 18:06:39.179516 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 18:06:39.184890 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 18:06:39.194947 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 18:06:39.196021 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 18:06:39.203752 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 4 18:06:39.204114 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 18:06:39.206900 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 18:06:39.207061 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 18:06:39.210454 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 18:06:39.210629 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 18:06:39.211072 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 18:06:39.211187 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 18:06:39.219658 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 4 18:06:39.226671 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 18:06:39.226852 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 18:06:39.231545 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 4 18:06:39.247505 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 4 18:06:39.252871 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 18:06:39.253194 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 18:06:39.264657 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 18:06:39.271260 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 4 18:06:39.278584 augenrules[1377]: No rules
Sep 4 18:06:39.282651 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 18:06:39.287639 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 18:06:39.290176 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 18:06:39.306927 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 4 18:06:39.311628 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 18:06:39.312281 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 18:06:39.314807 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 4 18:06:39.319384 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep 4 18:06:39.323621 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 4 18:06:39.324643 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 18:06:39.324813 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 18:06:39.327562 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 4 18:06:39.327703 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 4 18:06:39.332272 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 18:06:39.332490 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 18:06:39.336242 systemd[1]: Finished ensure-sysext.service.
Sep 4 18:06:39.346789 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 4 18:06:39.351832 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 18:06:39.351992 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 18:06:39.356098 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 4 18:06:39.383565 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Sep 4 18:06:39.384289 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 4 18:06:39.384415 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 4 18:06:39.400526 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 4 18:06:39.401390 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 4 18:06:39.411342 lvm[1402]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 4 18:06:39.442309 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Sep 4 18:06:39.443199 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 18:06:39.461611 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep 4 18:06:39.474260 systemd-networkd[1346]: lo: Link UP
Sep 4 18:06:39.474266 systemd-networkd[1346]: lo: Gained carrier
Sep 4 18:06:39.475727 systemd-resolved[1347]: Positive Trust Anchors:
Sep 4 18:06:39.475745 systemd-resolved[1347]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 4 18:06:39.475790 systemd-resolved[1347]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 4 18:06:39.477246 systemd-networkd[1346]: Enumeration completed
Sep 4 18:06:39.477350 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 4 18:06:39.488426 lvm[1407]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 4 18:06:39.489075 systemd-resolved[1347]: Using system hostname 'ci-4054-1-0-c-4d101ae770.novalocal'.
Sep 4 18:06:39.492487 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 4 18:06:39.493259 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 4 18:06:39.497009 systemd[1]: Reached target network.target - Network.
Sep 4 18:06:39.497516 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 4 18:06:39.501016 systemd-networkd[1346]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 18:06:39.501414 systemd-networkd[1346]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 4 18:06:39.502370 systemd-networkd[1346]: eth0: Link UP
Sep 4 18:06:39.502547 systemd-networkd[1346]: eth0: Gained carrier
Sep 4 18:06:39.502608 systemd-networkd[1346]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 18:06:39.514378 systemd-networkd[1346]: eth0: DHCPv4 address 172.24.4.134/24, gateway 172.24.4.1 acquired from 172.24.4.1
Sep 4 18:06:39.526483 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Sep 4 18:06:39.528291 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 4 18:06:39.529137 systemd[1]: Reached target time-set.target - System Time Set.
Sep 4 18:06:39.540006 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 18:06:39.542171 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 4 18:06:39.544524 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 4 18:06:39.545092 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 4 18:06:39.545790 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 4 18:06:39.547713 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 4 18:06:39.550555 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 4 18:06:39.554254 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 4 18:06:39.554562 systemd[1]: Reached target paths.target - Path Units.
Sep 4 18:06:40.216831 systemd[1]: Reached target timers.target - Timer Units.
Sep 4 18:06:40.217794 systemd-resolved[1347]: Clock change detected. Flushing caches.
Sep 4 18:06:40.218304 systemd-timesyncd[1403]: Contacted time server 82.64.42.185:123 (0.flatcar.pool.ntp.org).
Sep 4 18:06:40.218370 systemd-timesyncd[1403]: Initial clock synchronization to Wed 2024-09-04 18:06:40.216773 UTC.
Sep 4 18:06:40.222321 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 4 18:06:40.228687 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 4 18:06:40.238690 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 4 18:06:40.240284 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 4 18:06:40.242908 systemd[1]: Reached target sockets.target - Socket Units.
Sep 4 18:06:40.243472 systemd[1]: Reached target basic.target - Basic System.
Sep 4 18:06:40.244017 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 4 18:06:40.244057 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 4 18:06:40.247081 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 4 18:06:40.259974 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Sep 4 18:06:40.265995 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 4 18:06:40.280997 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 4 18:06:40.288859 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 4 18:06:40.289764 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 4 18:06:40.293126 jq[1421]: false
Sep 4 18:06:40.299939 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 4 18:06:40.309909 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 4 18:06:40.315862 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 4 18:06:40.326936 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 4 18:06:40.338222 dbus-daemon[1420]: [system] SELinux support is enabled
Sep 4 18:06:40.338683 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 4 18:06:40.340718 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 4 18:06:40.341604 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 4 18:06:40.344329 systemd[1]: Starting update-engine.service - Update Engine...
Sep 4 18:06:40.355785 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 4 18:06:40.357907 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 4 18:06:40.359015 extend-filesystems[1422]: Found loop4
Sep 4 18:06:40.362110 extend-filesystems[1422]: Found loop5
Sep 4 18:06:40.362110 extend-filesystems[1422]: Found loop6
Sep 4 18:06:40.362110 extend-filesystems[1422]: Found loop7
Sep 4 18:06:40.362110 extend-filesystems[1422]: Found vda
Sep 4 18:06:40.362110 extend-filesystems[1422]: Found vda1
Sep 4 18:06:40.362110 extend-filesystems[1422]: Found vda2
Sep 4 18:06:40.362110 extend-filesystems[1422]: Found vda3
Sep 4 18:06:40.362110 extend-filesystems[1422]: Found usr
Sep 4 18:06:40.362110 extend-filesystems[1422]: Found vda4
Sep 4 18:06:40.362110 extend-filesystems[1422]: Found vda6
Sep 4 18:06:40.362110 extend-filesystems[1422]: Found vda7
Sep 4 18:06:40.362110 extend-filesystems[1422]: Found vda9
Sep 4 18:06:40.362110 extend-filesystems[1422]: Checking size of /dev/vda9
Sep 4 18:06:40.500278 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 4635643 blocks
Sep 4 18:06:40.375160 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 4 18:06:40.500557 extend-filesystems[1422]: Resized partition /dev/vda9
Sep 4 18:06:40.376224 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 4 18:06:40.501270 extend-filesystems[1453]: resize2fs 1.47.1 (20-May-2024)
Sep 4 18:06:40.514533 jq[1436]: true
Sep 4 18:06:40.523778 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1253)
Sep 4 18:06:40.376561 systemd[1]: motdgen.service: Deactivated successfully.
Sep 4 18:06:40.523965 update_engine[1435]: I0904 18:06:40.447724 1435 main.cc:92] Flatcar Update Engine starting
Sep 4 18:06:40.523965 update_engine[1435]: I0904 18:06:40.449784 1435 update_check_scheduler.cc:74] Next update check in 11m50s
Sep 4 18:06:40.377741 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 4 18:06:40.398992 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 4 18:06:40.535989 tar[1443]: linux-amd64/helm
Sep 4 18:06:40.399727 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 4 18:06:40.536306 jq[1446]: true
Sep 4 18:06:40.423884 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 4 18:06:40.423913 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 4 18:06:40.453744 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 4 18:06:40.453778 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 4 18:06:40.457938 systemd[1]: Started update-engine.service - Update Engine.
Sep 4 18:06:40.474986 (ntainerd)[1450]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 4 18:06:40.484493 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 4 18:06:40.557455 systemd-logind[1430]: New seat seat0.
Sep 4 18:06:40.569360 systemd-logind[1430]: Watching system buttons on /dev/input/event1 (Power Button)
Sep 4 18:06:40.569387 systemd-logind[1430]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 4 18:06:40.569671 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 4 18:06:40.725858 locksmithd[1455]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 4 18:06:41.391815 containerd[1450]: time="2024-09-04T18:06:41.391307387Z" level=info msg="starting containerd" revision=8ccfc03e4e2b73c22899202ae09d0caf906d3863 version=v1.7.20
Sep 4 18:06:41.396773 sshd_keygen[1442]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 4 18:06:41.481687 containerd[1450]: time="2024-09-04T18:06:41.481131111Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 4 18:06:41.483628 containerd[1450]: time="2024-09-04T18:06:41.483539108Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.48-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 4 18:06:41.483628 containerd[1450]: time="2024-09-04T18:06:41.483568443Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 4 18:06:41.483628 containerd[1450]: time="2024-09-04T18:06:41.483587809Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 4 18:06:41.485822 containerd[1450]: time="2024-09-04T18:06:41.483787363Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Sep 4 18:06:41.485822 containerd[1450]: time="2024-09-04T18:06:41.483818071Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Sep 4 18:06:41.485822 containerd[1450]: time="2024-09-04T18:06:41.483890847Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Sep 4 18:06:41.485822 containerd[1450]: time="2024-09-04T18:06:41.483907378Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 4 18:06:41.485822 containerd[1450]: time="2024-09-04T18:06:41.484091674Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 4 18:06:41.485822 containerd[1450]: time="2024-09-04T18:06:41.484118143Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 4 18:06:41.485822 containerd[1450]: time="2024-09-04T18:06:41.484135516Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Sep 4 18:06:41.485822 containerd[1450]: time="2024-09-04T18:06:41.484147519Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 4 18:06:41.485822 containerd[1450]: time="2024-09-04T18:06:41.484236025Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 4 18:06:41.485822 containerd[1450]: time="2024-09-04T18:06:41.484569821Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 4 18:06:41.485822 containerd[1450]: time="2024-09-04T18:06:41.485373889Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 4 18:06:41.484164 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 4 18:06:41.486432 containerd[1450]: time="2024-09-04T18:06:41.485407592Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 4 18:06:41.486432 containerd[1450]: time="2024-09-04T18:06:41.485513981Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 4 18:06:41.486432 containerd[1450]: time="2024-09-04T18:06:41.485575637Z" level=info msg="metadata content store policy set" policy=shared
Sep 4 18:06:41.495954 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 4 18:06:41.509384 systemd[1]: issuegen.service: Deactivated successfully.
Sep 4 18:06:41.509585 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 4 18:06:41.520413 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 4 18:06:41.559189 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 4 18:06:41.573149 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 4 18:06:41.583089 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 4 18:06:41.584599 systemd[1]: Reached target getty.target - Login Prompts.
Sep 4 18:06:41.597692 kernel: EXT4-fs (vda9): resized filesystem to 4635643
Sep 4 18:06:41.619852 systemd-networkd[1346]: eth0: Gained IPv6LL
Sep 4 18:06:41.624704 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 4 18:06:41.628993 systemd[1]: Reached target network-online.target - Network is Online.
Sep 4 18:06:41.662183 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 18:06:41.680926 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 4 18:06:41.745874 extend-filesystems[1453]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 4 18:06:41.745874 extend-filesystems[1453]: old_desc_blocks = 1, new_desc_blocks = 3
Sep 4 18:06:41.745874 extend-filesystems[1453]: The filesystem on /dev/vda9 is now 4635643 (4k) blocks long.
Sep 4 18:06:41.783452 extend-filesystems[1422]: Resized filesystem in /dev/vda9
Sep 4 18:06:41.793852 bash[1474]: Updated "/home/core/.ssh/authorized_keys"
Sep 4 18:06:41.752254 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 4 18:06:41.794381 containerd[1450]: time="2024-09-04T18:06:41.770586174Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 4 18:06:41.794381 containerd[1450]: time="2024-09-04T18:06:41.770762545Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 4 18:06:41.794381 containerd[1450]: time="2024-09-04T18:06:41.770813230Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Sep 4 18:06:41.794381 containerd[1450]: time="2024-09-04T18:06:41.770855459Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Sep 4 18:06:41.794381 containerd[1450]: time="2024-09-04T18:06:41.770898941Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 4 18:06:41.794381 containerd[1450]: time="2024-09-04T18:06:41.771228980Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 4 18:06:41.794381 containerd[1450]: time="2024-09-04T18:06:41.771857719Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 4 18:06:41.794381 containerd[1450]: time="2024-09-04T18:06:41.772138866Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Sep 4 18:06:41.794381 containerd[1450]: time="2024-09-04T18:06:41.772188720Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Sep 4 18:06:41.794381 containerd[1450]: time="2024-09-04T18:06:41.772259523Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Sep 4 18:06:41.794381 containerd[1450]: time="2024-09-04T18:06:41.772301231Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 4 18:06:41.794381 containerd[1450]: time="2024-09-04T18:06:41.772336928Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 4 18:06:41.794381 containerd[1450]: time="2024-09-04T18:06:41.772369529Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 4 18:06:41.794381 containerd[1450]: time="2024-09-04T18:06:41.772404585Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 4 18:06:41.752717 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 4 18:06:41.804206 containerd[1450]: time="2024-09-04T18:06:41.772440833Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 4 18:06:41.804206 containerd[1450]: time="2024-09-04T18:06:41.772476319Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 4 18:06:41.804206 containerd[1450]: time="2024-09-04T18:06:41.772512407Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 4 18:06:41.804206 containerd[1450]: time="2024-09-04T18:06:41.772543956Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 4 18:06:41.804206 containerd[1450]: time="2024-09-04T18:06:41.772591826Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep 4 18:06:41.804206 containerd[1450]: time="2024-09-04T18:06:41.772627813Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 4 18:06:41.804206 containerd[1450]: time="2024-09-04T18:06:41.777837404Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 4 18:06:41.804206 containerd[1450]: time="2024-09-04T18:06:41.778203000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 4 18:06:41.804206 containerd[1450]: time="2024-09-04T18:06:41.783767265Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 4 18:06:41.804206 containerd[1450]: time="2024-09-04T18:06:41.783907007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 4 18:06:41.804206 containerd[1450]: time="2024-09-04T18:06:41.783991476Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 4 18:06:41.804206 containerd[1450]: time="2024-09-04T18:06:41.784460175Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 4 18:06:41.804206 containerd[1450]: time="2024-09-04T18:06:41.784772952Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Sep 4 18:06:41.804206 containerd[1450]: time="2024-09-04T18:06:41.785748291Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Sep 4 18:06:41.784918 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 4 18:06:41.810836 containerd[1450]: time="2024-09-04T18:06:41.787944189Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep 4 18:06:41.810836 containerd[1450]: time="2024-09-04T18:06:41.788114769Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Sep 4 18:06:41.810836 containerd[1450]: time="2024-09-04T18:06:41.788162489Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep 4 18:06:41.810836 containerd[1450]: time="2024-09-04T18:06:41.788250924Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Sep 4 18:06:41.810836 containerd[1450]: time="2024-09-04T18:06:41.808382241Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Sep 4 18:06:41.810836 containerd[1450]: time="2024-09-04T18:06:41.808854907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Sep 4 18:06:41.810836 containerd[1450]: time="2024-09-04T18:06:41.808877700Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep 4 18:06:41.810836 containerd[1450]: time="2024-09-04T18:06:41.808991123Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep 4 18:06:41.809292 systemd[1]: Starting sshkeys.service...
Sep 4 18:06:41.813344 containerd[1450]: time="2024-09-04T18:06:41.810230427Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Sep 4 18:06:41.813344 containerd[1450]: time="2024-09-04T18:06:41.812471550Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Sep 4 18:06:41.813344 containerd[1450]: time="2024-09-04T18:06:41.812556480Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Sep 4 18:06:41.813344 containerd[1450]: time="2024-09-04T18:06:41.812595322Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep 4 18:06:41.813344 containerd[1450]: time="2024-09-04T18:06:41.812643282Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Sep 4 18:06:41.813344 containerd[1450]: time="2024-09-04T18:06:41.812736126Z" level=info msg="NRI interface is disabled by configuration."
Sep 4 18:06:41.813344 containerd[1450]: time="2024-09-04T18:06:41.812773406Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Sep 4 18:06:41.814526 containerd[1450]: time="2024-09-04T18:06:41.814408463Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Sep 4 18:06:41.815767 containerd[1450]: time="2024-09-04T18:06:41.815732396Z" level=info msg="Connect containerd service"
Sep 4 18:06:41.815916 containerd[1450]: time="2024-09-04T18:06:41.815899730Z" level=info msg="using legacy CRI server"
Sep 4 18:06:41.815981 containerd[1450]: time="2024-09-04T18:06:41.815967146Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 4 18:06:41.816190 containerd[1450]: time="2024-09-04T18:06:41.816169055Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Sep 4 18:06:41.820633 containerd[1450]: time="2024-09-04T18:06:41.820590718Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 4 18:06:41.822114 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 4 18:06:41.825626 containerd[1450]: time="2024-09-04T18:06:41.824202662Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 4 18:06:41.825626 containerd[1450]: time="2024-09-04T18:06:41.824267273Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 4 18:06:41.825626 containerd[1450]: time="2024-09-04T18:06:41.824305214Z" level=info msg="Start subscribing containerd event"
Sep 4 18:06:41.825626 containerd[1450]: time="2024-09-04T18:06:41.824354577Z" level=info msg="Start recovering state"
Sep 4 18:06:41.825626 containerd[1450]: time="2024-09-04T18:06:41.824430900Z" level=info msg="Start event monitor"
Sep 4 18:06:41.825626 containerd[1450]: time="2024-09-04T18:06:41.824451739Z" level=info msg="Start snapshots syncer"
Sep 4 18:06:41.825626 containerd[1450]: time="2024-09-04T18:06:41.824462539Z" level=info msg="Start cni network conf syncer for default"
Sep 4 18:06:41.825626 containerd[1450]: time="2024-09-04T18:06:41.824471636Z" level=info msg="Start streaming server"
Sep 4 18:06:41.825626 containerd[1450]: time="2024-09-04T18:06:41.824536157Z" level=info msg="containerd successfully booted in 0.517128s"
Sep 4 18:06:41.831895 systemd[1]: Started containerd.service - containerd container runtime.
Sep 4 18:06:41.854684 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Sep 4 18:06:41.868123 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Sep 4 18:06:42.056185 tar[1443]: linux-amd64/LICENSE
Sep 4 18:06:42.056690 tar[1443]: linux-amd64/README.md
Sep 4 18:06:42.066591 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 4 18:06:43.130804 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 4 18:06:43.145578 systemd[1]: Started sshd@0-172.24.4.134:22-172.24.4.1:33200.service - OpenSSH per-connection server daemon (172.24.4.1:33200).
Sep 4 18:06:43.322993 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 18:06:43.335746 (kubelet)[1535]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 18:06:44.451450 sshd[1528]: Accepted publickey for core from 172.24.4.1 port 33200 ssh2: RSA SHA256:JnA7Fh8lVkr6ENifNOXj431OPLJBOL+/PI8dMas4Eok
Sep 4 18:06:44.456288 sshd[1528]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 18:06:44.474216 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 4 18:06:44.491441 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 4 18:06:44.500210 systemd-logind[1430]: New session 1 of user core.
Sep 4 18:06:44.521412 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 4 18:06:44.534072 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 4 18:06:44.546078 (systemd)[1544]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 4 18:06:44.695237 systemd[1544]: Queued start job for default target default.target.
Sep 4 18:06:44.700940 systemd[1544]: Created slice app.slice - User Application Slice.
Sep 4 18:06:44.701069 systemd[1544]: Reached target paths.target - Paths.
Sep 4 18:06:44.701157 systemd[1544]: Reached target timers.target - Timers.
Sep 4 18:06:44.702796 systemd[1544]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 4 18:06:44.725152 systemd[1544]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 4 18:06:44.725407 systemd[1544]: Reached target sockets.target - Sockets.
Sep 4 18:06:44.725427 systemd[1544]: Reached target basic.target - Basic System.
Sep 4 18:06:44.725472 systemd[1544]: Reached target default.target - Main User Target.
Sep 4 18:06:44.725501 systemd[1544]: Startup finished in 172ms.
Sep 4 18:06:44.725595 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 4 18:06:44.736683 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 4 18:06:44.807786 kubelet[1535]: E0904 18:06:44.807626 1535 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 18:06:44.813575 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 18:06:44.813964 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 18:06:44.814964 systemd[1]: kubelet.service: Consumed 1.905s CPU time.
Sep 4 18:06:45.220928 systemd[1]: Started sshd@1-172.24.4.134:22-172.24.4.1:40130.service - OpenSSH per-connection server daemon (172.24.4.1:40130).
Sep 4 18:06:46.632802 login[1501]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Sep 4 18:06:46.638380 login[1502]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Sep 4 18:06:46.644384 systemd-logind[1430]: New session 2 of user core.
Sep 4 18:06:46.654148 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 4 18:06:46.662376 systemd-logind[1430]: New session 3 of user core.
Sep 4 18:06:46.676082 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 4 18:06:46.866866 sshd[1558]: Accepted publickey for core from 172.24.4.1 port 40130 ssh2: RSA SHA256:JnA7Fh8lVkr6ENifNOXj431OPLJBOL+/PI8dMas4Eok
Sep 4 18:06:46.869959 sshd[1558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 18:06:46.880714 systemd-logind[1430]: New session 4 of user core.
Sep 4 18:06:46.895291 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 4 18:06:47.339912 coreos-metadata[1417]: Sep 04 18:06:47.339 WARN failed to locate config-drive, using the metadata service API instead
Sep 4 18:06:47.412001 coreos-metadata[1417]: Sep 04 18:06:47.411 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1
Sep 4 18:06:47.615589 coreos-metadata[1417]: Sep 04 18:06:47.615 INFO Fetch successful
Sep 4 18:06:47.615589 coreos-metadata[1417]: Sep 04 18:06:47.615 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Sep 4 18:06:47.624274 coreos-metadata[1417]: Sep 04 18:06:47.624 INFO Fetch successful
Sep 4 18:06:47.624274 coreos-metadata[1417]: Sep 04 18:06:47.624 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
Sep 4 18:06:47.633092 coreos-metadata[1417]: Sep 04 18:06:47.633 INFO Fetch successful
Sep 4 18:06:47.633092 coreos-metadata[1417]: Sep 04 18:06:47.633 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
Sep 4 18:06:47.646643 coreos-metadata[1417]: Sep 04 18:06:47.646 INFO Fetch successful
Sep 4 18:06:47.646643 coreos-metadata[1417]: Sep 04 18:06:47.646 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
Sep 4 18:06:47.656996 coreos-metadata[1417]: Sep 04 18:06:47.656 INFO Fetch successful
Sep 4 18:06:47.656996 coreos-metadata[1417]: Sep 04 18:06:47.656 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
Sep 4 18:06:47.666075 sshd[1558]: pam_unix(sshd:session): session closed for user core
Sep 4 18:06:47.671472 coreos-metadata[1417]: Sep 04 18:06:47.668 INFO Fetch successful
Sep 4 18:06:47.695518 systemd[1]: sshd@1-172.24.4.134:22-172.24.4.1:40130.service: Deactivated successfully.
Sep 4 18:06:47.701744 systemd[1]: session-4.scope: Deactivated successfully.
Sep 4 18:06:47.703970 systemd-logind[1430]: Session 4 logged out. Waiting for processes to exit.
Sep 4 18:06:47.714502 systemd[1]: Started sshd@2-172.24.4.134:22-172.24.4.1:40146.service - OpenSSH per-connection server daemon (172.24.4.1:40146).
Sep 4 18:06:47.718246 systemd-logind[1430]: Removed session 4.
Sep 4 18:06:47.742376 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Sep 4 18:06:47.743911 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 4 18:06:48.956211 coreos-metadata[1520]: Sep 04 18:06:48.956 WARN failed to locate config-drive, using the metadata service API instead
Sep 4 18:06:48.976051 sshd[1595]: Accepted publickey for core from 172.24.4.1 port 40146 ssh2: RSA SHA256:JnA7Fh8lVkr6ENifNOXj431OPLJBOL+/PI8dMas4Eok
Sep 4 18:06:48.977251 sshd[1595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 18:06:48.991822 systemd-logind[1430]: New session 5 of user core.
Sep 4 18:06:48.999085 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 4 18:06:49.007129 coreos-metadata[1520]: Sep 04 18:06:49.006 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Sep 4 18:06:49.023770 coreos-metadata[1520]: Sep 04 18:06:49.023 INFO Fetch successful
Sep 4 18:06:49.023770 coreos-metadata[1520]: Sep 04 18:06:49.023 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Sep 4 18:06:49.040273 coreos-metadata[1520]: Sep 04 18:06:49.040 INFO Fetch successful
Sep 4 18:06:49.046381 unknown[1520]: wrote ssh authorized keys file for user: core
Sep 4 18:06:49.090591 update-ssh-keys[1603]: Updated "/home/core/.ssh/authorized_keys"
Sep 4 18:06:49.092855 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Sep 4 18:06:49.096694 systemd[1]: Finished sshkeys.service.
Sep 4 18:06:49.101366 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 4 18:06:49.102381 systemd[1]: Startup finished in 1.086s (kernel) + 15.709s (initrd) + 12.068s (userspace) = 28.864s.
Sep 4 18:06:49.606275 sshd[1595]: pam_unix(sshd:session): session closed for user core
Sep 4 18:06:49.613458 systemd[1]: sshd@2-172.24.4.134:22-172.24.4.1:40146.service: Deactivated successfully.
Sep 4 18:06:49.616909 systemd[1]: session-5.scope: Deactivated successfully.
Sep 4 18:06:49.618448 systemd-logind[1430]: Session 5 logged out. Waiting for processes to exit.
Sep 4 18:06:49.621165 systemd-logind[1430]: Removed session 5.
Sep 4 18:06:54.913716 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 4 18:06:54.925042 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 18:06:55.443101 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 18:06:55.446177 (kubelet)[1618]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 18:06:56.395456 kubelet[1618]: E0904 18:06:56.395302 1618 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 18:06:56.403036 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 18:06:56.403368 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 18:06:59.630238 systemd[1]: Started sshd@3-172.24.4.134:22-172.24.4.1:57006.service - OpenSSH per-connection server daemon (172.24.4.1:57006).
Sep 4 18:07:01.080153 sshd[1627]: Accepted publickey for core from 172.24.4.1 port 57006 ssh2: RSA SHA256:JnA7Fh8lVkr6ENifNOXj431OPLJBOL+/PI8dMas4Eok
Sep 4 18:07:01.082702 sshd[1627]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 18:07:01.107090 systemd-logind[1430]: New session 6 of user core.
Sep 4 18:07:01.117015 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 4 18:07:01.743291 sshd[1627]: pam_unix(sshd:session): session closed for user core
Sep 4 18:07:01.756020 systemd[1]: sshd@3-172.24.4.134:22-172.24.4.1:57006.service: Deactivated successfully.
Sep 4 18:07:01.761328 systemd[1]: session-6.scope: Deactivated successfully.
Sep 4 18:07:01.765121 systemd-logind[1430]: Session 6 logged out. Waiting for processes to exit.
Sep 4 18:07:01.774443 systemd[1]: Started sshd@4-172.24.4.134:22-172.24.4.1:57020.service - OpenSSH per-connection server daemon (172.24.4.1:57020).
Sep 4 18:07:01.778306 systemd-logind[1430]: Removed session 6.
Sep 4 18:07:03.214737 sshd[1634]: Accepted publickey for core from 172.24.4.1 port 57020 ssh2: RSA SHA256:JnA7Fh8lVkr6ENifNOXj431OPLJBOL+/PI8dMas4Eok
Sep 4 18:07:03.217108 sshd[1634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 18:07:03.226584 systemd-logind[1430]: New session 7 of user core.
Sep 4 18:07:03.237046 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 4 18:07:03.812337 sshd[1634]: pam_unix(sshd:session): session closed for user core
Sep 4 18:07:03.819059 systemd[1]: sshd@4-172.24.4.134:22-172.24.4.1:57020.service: Deactivated successfully.
Sep 4 18:07:03.820411 systemd[1]: session-7.scope: Deactivated successfully.
Sep 4 18:07:03.822315 systemd-logind[1430]: Session 7 logged out. Waiting for processes to exit.
Sep 4 18:07:03.830497 systemd[1]: Started sshd@5-172.24.4.134:22-172.24.4.1:57028.service - OpenSSH per-connection server daemon (172.24.4.1:57028).
Sep 4 18:07:03.835806 systemd-logind[1430]: Removed session 7.
Sep 4 18:07:05.522575 sshd[1641]: Accepted publickey for core from 172.24.4.1 port 57028 ssh2: RSA SHA256:JnA7Fh8lVkr6ENifNOXj431OPLJBOL+/PI8dMas4Eok
Sep 4 18:07:05.524630 sshd[1641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 18:07:05.530136 systemd-logind[1430]: New session 8 of user core.
Sep 4 18:07:05.539812 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 4 18:07:06.226017 sshd[1641]: pam_unix(sshd:session): session closed for user core
Sep 4 18:07:06.243926 systemd[1]: sshd@5-172.24.4.134:22-172.24.4.1:57028.service: Deactivated successfully.
Sep 4 18:07:06.250052 systemd[1]: session-8.scope: Deactivated successfully.
Sep 4 18:07:06.253861 systemd-logind[1430]: Session 8 logged out. Waiting for processes to exit.
Sep 4 18:07:06.267464 systemd[1]: Started sshd@6-172.24.4.134:22-172.24.4.1:54082.service - OpenSSH per-connection server daemon (172.24.4.1:54082).
Sep 4 18:07:06.272194 systemd-logind[1430]: Removed session 8.
Sep 4 18:07:06.413521 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 4 18:07:06.421236 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 18:07:06.786069 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 18:07:06.803599 (kubelet)[1658]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 18:07:06.895017 kubelet[1658]: E0904 18:07:06.894905 1658 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 18:07:06.899305 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 18:07:06.899482 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 18:07:07.756647 sshd[1648]: Accepted publickey for core from 172.24.4.1 port 54082 ssh2: RSA SHA256:JnA7Fh8lVkr6ENifNOXj431OPLJBOL+/PI8dMas4Eok
Sep 4 18:07:07.759549 sshd[1648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 18:07:07.770876 systemd-logind[1430]: New session 9 of user core.
Sep 4 18:07:07.777986 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 4 18:07:08.271062 sudo[1668]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 4 18:07:08.271851 sudo[1668]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 4 18:07:08.300606 sudo[1668]: pam_unix(sudo:session): session closed for user root
Sep 4 18:07:08.469509 sshd[1648]: pam_unix(sshd:session): session closed for user core
Sep 4 18:07:08.478170 systemd[1]: sshd@6-172.24.4.134:22-172.24.4.1:54082.service: Deactivated successfully.
Sep 4 18:07:08.480243 systemd[1]: session-9.scope: Deactivated successfully.
Sep 4 18:07:08.482231 systemd-logind[1430]: Session 9 logged out. Waiting for processes to exit.
Sep 4 18:07:08.489377 systemd[1]: Started sshd@7-172.24.4.134:22-172.24.4.1:54092.service - OpenSSH per-connection server daemon (172.24.4.1:54092).
Sep 4 18:07:08.491957 systemd-logind[1430]: Removed session 9.
Sep 4 18:07:09.811819 sshd[1673]: Accepted publickey for core from 172.24.4.1 port 54092 ssh2: RSA SHA256:JnA7Fh8lVkr6ENifNOXj431OPLJBOL+/PI8dMas4Eok
Sep 4 18:07:09.814537 sshd[1673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 18:07:09.825334 systemd-logind[1430]: New session 10 of user core.
Sep 4 18:07:09.831965 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 4 18:07:10.138270 sudo[1677]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 4 18:07:10.139011 sudo[1677]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 4 18:07:10.146022 sudo[1677]: pam_unix(sudo:session): session closed for user root
Sep 4 18:07:10.158704 sudo[1676]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Sep 4 18:07:10.159798 sudo[1676]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 4 18:07:10.194331 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Sep 4 18:07:10.198856 auditctl[1680]: No rules
Sep 4 18:07:10.199614 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 4 18:07:10.200192 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Sep 4 18:07:10.210388 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 4 18:07:10.276354 augenrules[1698]: No rules
Sep 4 18:07:10.277505 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 4 18:07:10.280557 sudo[1676]: pam_unix(sudo:session): session closed for user root
Sep 4 18:07:10.436798 sshd[1673]: pam_unix(sshd:session): session closed for user core
Sep 4 18:07:10.452051 systemd[1]: sshd@7-172.24.4.134:22-172.24.4.1:54092.service: Deactivated successfully.
Sep 4 18:07:10.456961 systemd[1]: session-10.scope: Deactivated successfully.
Sep 4 18:07:10.461996 systemd-logind[1430]: Session 10 logged out. Waiting for processes to exit.
Sep 4 18:07:10.469285 systemd[1]: Started sshd@8-172.24.4.134:22-172.24.4.1:54098.service - OpenSSH per-connection server daemon (172.24.4.1:54098).
Sep 4 18:07:10.472582 systemd-logind[1430]: Removed session 10.
Sep 4 18:07:11.761157 sshd[1706]: Accepted publickey for core from 172.24.4.1 port 54098 ssh2: RSA SHA256:JnA7Fh8lVkr6ENifNOXj431OPLJBOL+/PI8dMas4Eok
Sep 4 18:07:11.764464 sshd[1706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 18:07:11.776202 systemd-logind[1430]: New session 11 of user core.
Sep 4 18:07:11.788089 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 4 18:07:12.220229 sudo[1709]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 4 18:07:12.221494 sudo[1709]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 4 18:07:12.496094 (dockerd)[1719]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 4 18:07:12.496239 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 4 18:07:13.156345 dockerd[1719]: time="2024-09-04T18:07:13.156228969Z" level=info msg="Starting up"
Sep 4 18:07:13.333444 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1250745184-merged.mount: Deactivated successfully.
Sep 4 18:07:13.377123 systemd[1]: var-lib-docker-metacopy\x2dcheck3825054002-merged.mount: Deactivated successfully.
Sep 4 18:07:13.426152 dockerd[1719]: time="2024-09-04T18:07:13.426023564Z" level=info msg="Loading containers: start."
Sep 4 18:07:13.588865 kernel: Initializing XFRM netlink socket
Sep 4 18:07:13.780714 systemd-networkd[1346]: docker0: Link UP
Sep 4 18:07:13.869977 dockerd[1719]: time="2024-09-04T18:07:13.869877165Z" level=info msg="Loading containers: done."
Sep 4 18:07:13.910331 dockerd[1719]: time="2024-09-04T18:07:13.910142494Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 4 18:07:13.910866 dockerd[1719]: time="2024-09-04T18:07:13.910449406Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Sep 4 18:07:13.910866 dockerd[1719]: time="2024-09-04T18:07:13.910761597Z" level=info msg="Daemon has completed initialization"
Sep 4 18:07:13.972727 dockerd[1719]: time="2024-09-04T18:07:13.972426565Z" level=info msg="API listen on /run/docker.sock"
Sep 4 18:07:13.973553 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 4 18:07:14.334881 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck445519888-merged.mount: Deactivated successfully.
Sep 4 18:07:15.810782 containerd[1450]: time="2024-09-04T18:07:15.810716087Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.8\""
Sep 4 18:07:16.571266 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1938391354.mount: Deactivated successfully.
Sep 4 18:07:16.913689 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Sep 4 18:07:16.923477 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 18:07:17.068294 (kubelet)[1909]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 18:07:17.068842 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 18:07:17.844279 kubelet[1909]: E0904 18:07:17.844147 1909 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 18:07:17.850256 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 18:07:17.850727 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 18:07:19.172465 containerd[1450]: time="2024-09-04T18:07:19.172311427Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 18:07:19.173681 containerd[1450]: time="2024-09-04T18:07:19.173625358Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.8: active requests=0, bytes read=35232957"
Sep 4 18:07:19.174748 containerd[1450]: time="2024-09-04T18:07:19.174708903Z" level=info msg="ImageCreate event name:\"sha256:ea7e9c4af6a6f4f2fc0b86f81d102bf60167b3cbd4ce7d1545833b0283ab80b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 18:07:19.178186 containerd[1450]: time="2024-09-04T18:07:19.178122798Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6f72fa926c9b05e10629fe1a092fd28dcd65b4fdfd0cc7bd55f85a57a6ba1fa5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 18:07:19.179564 containerd[1450]: time="2024-09-04T18:07:19.179335686Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.8\" with image id \"sha256:ea7e9c4af6a6f4f2fc0b86f81d102bf60167b3cbd4ce7d1545833b0283ab80b7\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6f72fa926c9b05e10629fe1a092fd28dcd65b4fdfd0cc7bd55f85a57a6ba1fa5\", size \"35229749\" in 3.368521705s"
Sep 4 18:07:19.179564 containerd[1450]: time="2024-09-04T18:07:19.179389228Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.8\" returns image reference \"sha256:ea7e9c4af6a6f4f2fc0b86f81d102bf60167b3cbd4ce7d1545833b0283ab80b7\""
Sep 4 18:07:19.208335 containerd[1450]: time="2024-09-04T18:07:19.208281817Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.8\""
Sep 4 18:07:23.064000 containerd[1450]: time="2024-09-04T18:07:23.063876385Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 18:07:23.066043 containerd[1450]: time="2024-09-04T18:07:23.065748785Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.8: active requests=0, bytes read=32206214"
Sep 4 18:07:23.067180 containerd[1450]: time="2024-09-04T18:07:23.067114078Z" level=info msg="ImageCreate event name:\"sha256:b469e8ed7312f97f28340218ee5884606f9998ad73d3692a6078a2692253589a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 18:07:23.070736 containerd[1450]: time="2024-09-04T18:07:23.070635886Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6f27d63ded20614c68554b477cd7a78eda78a498a92bfe8935cf964ca5b74d0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 18:07:23.072364 containerd[1450]: time="2024-09-04T18:07:23.071803628Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.8\" with image id \"sha256:b469e8ed7312f97f28340218ee5884606f9998ad73d3692a6078a2692253589a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6f27d63ded20614c68554b477cd7a78eda78a498a92bfe8935cf964ca5b74d0b\", size \"33756152\" in 3.863476705s"
Sep 4 18:07:23.072364 containerd[1450]: time="2024-09-04T18:07:23.071838563Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.8\" returns image reference \"sha256:b469e8ed7312f97f28340218ee5884606f9998ad73d3692a6078a2692253589a\""
Sep 4 18:07:23.099790 containerd[1450]: time="2024-09-04T18:07:23.099729836Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.8\""
Sep 4 18:07:25.060792 containerd[1450]: time="2024-09-04T18:07:25.060288122Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 18:07:25.064502 containerd[1450]: time="2024-09-04T18:07:25.064395279Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.8: active requests=0, bytes read=17321515"
Sep 4 18:07:25.065129 containerd[1450]: time="2024-09-04T18:07:25.065057666Z" level=info msg="ImageCreate event name:\"sha256:e932331104a0d08ad33e8c298f0c2a9a23378869c8fc0915df299b611c196f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 18:07:25.073808 containerd[1450]: time="2024-09-04T18:07:25.073639783Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:da74a66675d95e39ec25da5e70729da746d0fa0b15ee0da872ac980519bc28bd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 18:07:25.077512 containerd[1450]: time="2024-09-04T18:07:25.077222522Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.8\" with image id \"sha256:e932331104a0d08ad33e8c298f0c2a9a23378869c8fc0915df299b611c196f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:da74a66675d95e39ec25da5e70729da746d0fa0b15ee0da872ac980519bc28bd\", size \"18871471\" in 1.977427443s"
Sep 4 18:07:25.077512 containerd[1450]: time="2024-09-04T18:07:25.077302423Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.8\" returns image reference \"sha256:e932331104a0d08ad33e8c298f0c2a9a23378869c8fc0915df299b611c196f21\""
Sep 4 18:07:25.136750 containerd[1450]: time="2024-09-04T18:07:25.136489367Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.8\""
Sep 4 18:07:25.252528 update_engine[1435]: I0904 18:07:25.250052 1435 update_attempter.cc:509] Updating boot flags...
Sep 4 18:07:25.323850 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1961)
Sep 4 18:07:25.383715 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1962)
Sep 4 18:07:25.464680 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1962)
Sep 4 18:07:27.913177 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Sep 4 18:07:27.920893 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 18:07:28.004797 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1886039773.mount: Deactivated successfully.
Sep 4 18:07:28.046568 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 18:07:28.051009 (kubelet)[1981]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 18:07:28.377806 kubelet[1981]: E0904 18:07:28.377505 1981 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 18:07:28.382277 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 18:07:28.382571 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 18:07:29.228573 containerd[1450]: time="2024-09-04T18:07:29.228390002Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 18:07:29.231759 containerd[1450]: time="2024-09-04T18:07:29.231583452Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.8: active requests=0, bytes read=28600388"
Sep 4 18:07:29.233841 containerd[1450]: time="2024-09-04T18:07:29.233622279Z" level=info msg="ImageCreate event name:\"sha256:b6e10835ec72a48862d901a23b7c4c924300c3f6cfe89cd6031533b67e1f4e54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 18:07:29.239466 containerd[1450]: time="2024-09-04T18:07:29.239373652Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:559a093080f70ca863922f5e4bb90d6926d52653a91edb5b72c685ebb65f1858\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 18:07:29.242006 containerd[1450]: time="2024-09-04T18:07:29.240863226Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.8\" with image id \"sha256:b6e10835ec72a48862d901a23b7c4c924300c3f6cfe89cd6031533b67e1f4e54\", repo tag \"registry.k8s.io/kube-proxy:v1.29.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:559a093080f70ca863922f5e4bb90d6926d52653a91edb5b72c685ebb65f1858\", size \"28599399\" in 4.104250767s"
Sep 4 18:07:29.242006 containerd[1450]: time="2024-09-04T18:07:29.240957222Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.8\" returns image reference \"sha256:b6e10835ec72a48862d901a23b7c4c924300c3f6cfe89cd6031533b67e1f4e54\""
Sep 4 18:07:29.291172 containerd[1450]: time="2024-09-04T18:07:29.290892094Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Sep 4 18:07:30.107565 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount751192750.mount: Deactivated successfully.
Sep 4 18:07:31.817073 containerd[1450]: time="2024-09-04T18:07:31.816127812Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 18:07:31.820109 containerd[1450]: time="2024-09-04T18:07:31.819239205Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769"
Sep 4 18:07:31.822729 containerd[1450]: time="2024-09-04T18:07:31.821353733Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 18:07:31.831130 containerd[1450]: time="2024-09-04T18:07:31.831052267Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 18:07:31.834176 containerd[1450]: time="2024-09-04T18:07:31.834096633Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.543150648s"
Sep 4 18:07:31.834176 containerd[1450]: time="2024-09-04T18:07:31.834182836Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Sep 4 18:07:31.901380 containerd[1450]: time="2024-09-04T18:07:31.901317252Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Sep 4 18:07:32.597644 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount759064924.mount: Deactivated successfully.
Sep 4 18:07:32.613337 containerd[1450]: time="2024-09-04T18:07:32.613094571Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 18:07:32.615317 containerd[1450]: time="2024-09-04T18:07:32.615199259Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298"
Sep 4 18:07:32.616995 containerd[1450]: time="2024-09-04T18:07:32.616879009Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 18:07:32.623787 containerd[1450]: time="2024-09-04T18:07:32.623592134Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 18:07:32.626306 containerd[1450]: time="2024-09-04T18:07:32.625579201Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 723.724258ms"
Sep 4 18:07:32.626306 containerd[1450]: time="2024-09-04T18:07:32.625651357Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Sep 4 18:07:32.677372 containerd[1450]: time="2024-09-04T18:07:32.676941822Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Sep 4 18:07:33.799577 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount543044.mount: Deactivated successfully.
Sep 4 18:07:37.097464 containerd[1450]: time="2024-09-04T18:07:37.097396260Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 18:07:37.099495 containerd[1450]: time="2024-09-04T18:07:37.099457023Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651633" Sep 4 18:07:37.099731 containerd[1450]: time="2024-09-04T18:07:37.099707955Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 18:07:37.103724 containerd[1450]: time="2024-09-04T18:07:37.103698113Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 18:07:37.105214 containerd[1450]: time="2024-09-04T18:07:37.105153518Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 4.428134792s" Sep 4 18:07:37.105272 containerd[1450]: time="2024-09-04T18:07:37.105219092Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Sep 4 18:07:38.413137 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Sep 4 18:07:38.419976 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 18:07:38.657834 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 4 18:07:38.665361 (kubelet)[2169]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 18:07:38.791142 kubelet[2169]: E0904 18:07:38.791026 2169 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 18:07:38.798421 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 18:07:38.798812 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 18:07:41.514940 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 18:07:41.526181 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 18:07:41.560579 systemd[1]: Reloading requested from client PID 2184 ('systemctl') (unit session-11.scope)... Sep 4 18:07:41.560598 systemd[1]: Reloading... Sep 4 18:07:41.676733 zram_generator::config[2218]: No configuration found. Sep 4 18:07:42.218026 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 18:07:42.307211 systemd[1]: Reloading finished in 746 ms. Sep 4 18:07:42.376280 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 18:07:42.379683 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 18:07:42.388052 systemd[1]: kubelet.service: Deactivated successfully. Sep 4 18:07:42.388386 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 18:07:42.395263 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Sep 4 18:07:42.762957 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 18:07:42.782028 (kubelet)[2290]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 18:07:42.979006 kubelet[2290]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 18:07:42.979006 kubelet[2290]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 4 18:07:42.979006 kubelet[2290]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 18:07:42.980337 kubelet[2290]: I0904 18:07:42.979105 2290 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 18:07:43.538830 kubelet[2290]: I0904 18:07:43.538782 2290 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Sep 4 18:07:43.538830 kubelet[2290]: I0904 18:07:43.538836 2290 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 18:07:43.539345 kubelet[2290]: I0904 18:07:43.539314 2290 server.go:919] "Client rotation is on, will bootstrap in background" Sep 4 18:07:43.691271 kubelet[2290]: I0904 18:07:43.691143 2290 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 18:07:43.699470 kubelet[2290]: E0904 18:07:43.699303 2290 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate 
signing request: Post "https://172.24.4.134:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.134:6443: connect: connection refused Sep 4 18:07:43.744020 kubelet[2290]: I0904 18:07:43.743891 2290 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 4 18:07:43.744876 kubelet[2290]: I0904 18:07:43.744783 2290 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 18:07:43.753424 kubelet[2290]: I0904 18:07:43.753298 2290 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Sep 
4 18:07:43.753424 kubelet[2290]: I0904 18:07:43.753418 2290 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 18:07:43.753963 kubelet[2290]: I0904 18:07:43.753460 2290 container_manager_linux.go:301] "Creating device plugin manager" Sep 4 18:07:43.763517 kubelet[2290]: I0904 18:07:43.763419 2290 state_mem.go:36] "Initialized new in-memory state store" Sep 4 18:07:43.763887 kubelet[2290]: I0904 18:07:43.763843 2290 kubelet.go:396] "Attempting to sync node with API server" Sep 4 18:07:43.763887 kubelet[2290]: I0904 18:07:43.763916 2290 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 18:07:43.766708 kubelet[2290]: I0904 18:07:43.764002 2290 kubelet.go:312] "Adding apiserver pod source" Sep 4 18:07:43.766708 kubelet[2290]: I0904 18:07:43.764055 2290 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 18:07:43.779866 kubelet[2290]: I0904 18:07:43.779816 2290 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.20" apiVersion="v1" Sep 4 18:07:43.782220 kubelet[2290]: W0904 18:07:43.782099 2290 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.24.4.134:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.134:6443: connect: connection refused Sep 4 18:07:43.782365 kubelet[2290]: E0904 18:07:43.782237 2290 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.134:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.134:6443: connect: connection refused Sep 4 18:07:43.782522 kubelet[2290]: W0904 18:07:43.782426 2290 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.24.4.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4054-1-0-c-4d101ae770.novalocal&limit=500&resourceVersion=0": dial tcp 
172.24.4.134:6443: connect: connection refused Sep 4 18:07:43.782638 kubelet[2290]: E0904 18:07:43.782529 2290 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4054-1-0-c-4d101ae770.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.134:6443: connect: connection refused Sep 4 18:07:43.810932 kubelet[2290]: I0904 18:07:43.810590 2290 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 4 18:07:43.810932 kubelet[2290]: W0904 18:07:43.810779 2290 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 4 18:07:43.812652 kubelet[2290]: I0904 18:07:43.812600 2290 server.go:1256] "Started kubelet" Sep 4 18:07:43.815036 kubelet[2290]: I0904 18:07:43.813306 2290 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 18:07:43.816139 kubelet[2290]: I0904 18:07:43.816092 2290 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 18:07:43.816403 kubelet[2290]: I0904 18:07:43.816344 2290 server.go:461] "Adding debug handlers to kubelet server" Sep 4 18:07:43.818951 kubelet[2290]: I0904 18:07:43.818874 2290 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 18:07:43.819294 kubelet[2290]: I0904 18:07:43.819243 2290 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 18:07:43.828419 kubelet[2290]: I0904 18:07:43.827995 2290 volume_manager.go:291] "Starting Kubelet Volume Manager" Sep 4 18:07:43.835708 kubelet[2290]: I0904 18:07:43.835320 2290 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Sep 4 18:07:43.835708 kubelet[2290]: I0904 18:07:43.835465 2290 reconciler_new.go:29] "Reconciler: start to sync state" Sep 4 18:07:43.836885 
kubelet[2290]: E0904 18:07:43.836853 2290 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4054-1-0-c-4d101ae770.novalocal?timeout=10s\": dial tcp 172.24.4.134:6443: connect: connection refused" interval="200ms" Sep 4 18:07:43.837532 kubelet[2290]: W0904 18:07:43.837219 2290 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.24.4.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.134:6443: connect: connection refused Sep 4 18:07:43.837532 kubelet[2290]: E0904 18:07:43.837419 2290 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.134:6443: connect: connection refused Sep 4 18:07:43.864996 kubelet[2290]: I0904 18:07:43.864948 2290 factory.go:221] Registration of the systemd container factory successfully Sep 4 18:07:43.866712 kubelet[2290]: I0904 18:07:43.865431 2290 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 18:07:43.869763 kubelet[2290]: I0904 18:07:43.869730 2290 factory.go:221] Registration of the containerd container factory successfully Sep 4 18:07:43.873272 kubelet[2290]: E0904 18:07:43.873207 2290 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.134:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.134:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4054-1-0-c-4d101ae770.novalocal.17f21cca9efb0ff1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4054-1-0-c-4d101ae770.novalocal,UID:ci-4054-1-0-c-4d101ae770.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4054-1-0-c-4d101ae770.novalocal,},FirstTimestamp:2024-09-04 18:07:43.812546545 +0000 UTC m=+1.021034967,LastTimestamp:2024-09-04 18:07:43.812546545 +0000 UTC m=+1.021034967,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4054-1-0-c-4d101ae770.novalocal,}" Sep 4 18:07:43.906798 kubelet[2290]: I0904 18:07:43.904049 2290 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 18:07:43.910163 kubelet[2290]: I0904 18:07:43.910129 2290 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 4 18:07:43.910163 kubelet[2290]: I0904 18:07:43.910170 2290 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 4 18:07:43.910353 kubelet[2290]: I0904 18:07:43.910201 2290 kubelet.go:2329] "Starting kubelet main sync loop" Sep 4 18:07:43.910353 kubelet[2290]: E0904 18:07:43.910255 2290 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 18:07:43.920722 kubelet[2290]: W0904 18:07:43.920179 2290 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.24.4.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.134:6443: connect: connection refused Sep 4 18:07:43.920722 kubelet[2290]: E0904 18:07:43.920253 2290 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 
172.24.4.134:6443: connect: connection refused Sep 4 18:07:43.931117 kubelet[2290]: I0904 18:07:43.931078 2290 kubelet_node_status.go:73] "Attempting to register node" node="ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:07:43.931474 kubelet[2290]: E0904 18:07:43.931456 2290 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.134:6443/api/v1/nodes\": dial tcp 172.24.4.134:6443: connect: connection refused" node="ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:07:43.932161 kubelet[2290]: I0904 18:07:43.932143 2290 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 4 18:07:43.932161 kubelet[2290]: I0904 18:07:43.932163 2290 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 4 18:07:43.932251 kubelet[2290]: I0904 18:07:43.932181 2290 state_mem.go:36] "Initialized new in-memory state store" Sep 4 18:07:43.938299 kubelet[2290]: I0904 18:07:43.938274 2290 policy_none.go:49] "None policy: Start" Sep 4 18:07:43.939012 kubelet[2290]: I0904 18:07:43.938905 2290 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 4 18:07:43.939012 kubelet[2290]: I0904 18:07:43.938984 2290 state_mem.go:35] "Initializing new in-memory state store" Sep 4 18:07:43.947802 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 4 18:07:43.956760 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 4 18:07:43.959831 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Sep 4 18:07:43.972405 kubelet[2290]: I0904 18:07:43.972377 2290 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 18:07:43.973368 kubelet[2290]: I0904 18:07:43.972702 2290 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 18:07:43.981152 kubelet[2290]: E0904 18:07:43.980684 2290 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4054-1-0-c-4d101ae770.novalocal\" not found" Sep 4 18:07:44.011420 kubelet[2290]: I0904 18:07:44.011364 2290 topology_manager.go:215] "Topology Admit Handler" podUID="833ef7199f15d1fe7b6fdbc50922ac6d" podNamespace="kube-system" podName="kube-apiserver-ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:07:44.013581 kubelet[2290]: I0904 18:07:44.013453 2290 topology_manager.go:215] "Topology Admit Handler" podUID="8d4edf2c5506319dd2c3b6acb14dddf5" podNamespace="kube-system" podName="kube-controller-manager-ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:07:44.015384 kubelet[2290]: I0904 18:07:44.015095 2290 topology_manager.go:215] "Topology Admit Handler" podUID="00c0ac506a8505308f5473de6569520a" podNamespace="kube-system" podName="kube-scheduler-ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:07:44.024048 systemd[1]: Created slice kubepods-burstable-pod833ef7199f15d1fe7b6fdbc50922ac6d.slice - libcontainer container kubepods-burstable-pod833ef7199f15d1fe7b6fdbc50922ac6d.slice. 
Sep 4 18:07:44.038405 kubelet[2290]: E0904 18:07:44.038293 2290 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4054-1-0-c-4d101ae770.novalocal?timeout=10s\": dial tcp 172.24.4.134:6443: connect: connection refused" interval="400ms" Sep 4 18:07:44.039937 systemd[1]: Created slice kubepods-burstable-pod8d4edf2c5506319dd2c3b6acb14dddf5.slice - libcontainer container kubepods-burstable-pod8d4edf2c5506319dd2c3b6acb14dddf5.slice. Sep 4 18:07:44.054589 systemd[1]: Created slice kubepods-burstable-pod00c0ac506a8505308f5473de6569520a.slice - libcontainer container kubepods-burstable-pod00c0ac506a8505308f5473de6569520a.slice. Sep 4 18:07:44.136613 kubelet[2290]: I0904 18:07:44.136394 2290 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d4edf2c5506319dd2c3b6acb14dddf5-kubeconfig\") pod \"kube-controller-manager-ci-4054-1-0-c-4d101ae770.novalocal\" (UID: \"8d4edf2c5506319dd2c3b6acb14dddf5\") " pod="kube-system/kube-controller-manager-ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:07:44.136613 kubelet[2290]: I0904 18:07:44.136499 2290 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8d4edf2c5506319dd2c3b6acb14dddf5-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4054-1-0-c-4d101ae770.novalocal\" (UID: \"8d4edf2c5506319dd2c3b6acb14dddf5\") " pod="kube-system/kube-controller-manager-ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:07:44.136613 kubelet[2290]: I0904 18:07:44.136561 2290 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/00c0ac506a8505308f5473de6569520a-kubeconfig\") pod \"kube-scheduler-ci-4054-1-0-c-4d101ae770.novalocal\" (UID: 
\"00c0ac506a8505308f5473de6569520a\") " pod="kube-system/kube-scheduler-ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:07:44.139537 kubelet[2290]: I0904 18:07:44.137622 2290 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/833ef7199f15d1fe7b6fdbc50922ac6d-ca-certs\") pod \"kube-apiserver-ci-4054-1-0-c-4d101ae770.novalocal\" (UID: \"833ef7199f15d1fe7b6fdbc50922ac6d\") " pod="kube-system/kube-apiserver-ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:07:44.139537 kubelet[2290]: I0904 18:07:44.137787 2290 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/833ef7199f15d1fe7b6fdbc50922ac6d-k8s-certs\") pod \"kube-apiserver-ci-4054-1-0-c-4d101ae770.novalocal\" (UID: \"833ef7199f15d1fe7b6fdbc50922ac6d\") " pod="kube-system/kube-apiserver-ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:07:44.139537 kubelet[2290]: I0904 18:07:44.137897 2290 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8d4edf2c5506319dd2c3b6acb14dddf5-ca-certs\") pod \"kube-controller-manager-ci-4054-1-0-c-4d101ae770.novalocal\" (UID: \"8d4edf2c5506319dd2c3b6acb14dddf5\") " pod="kube-system/kube-controller-manager-ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:07:44.139537 kubelet[2290]: I0904 18:07:44.137964 2290 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/833ef7199f15d1fe7b6fdbc50922ac6d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4054-1-0-c-4d101ae770.novalocal\" (UID: \"833ef7199f15d1fe7b6fdbc50922ac6d\") " pod="kube-system/kube-apiserver-ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:07:44.139537 kubelet[2290]: I0904 18:07:44.138024 2290 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8d4edf2c5506319dd2c3b6acb14dddf5-flexvolume-dir\") pod \"kube-controller-manager-ci-4054-1-0-c-4d101ae770.novalocal\" (UID: \"8d4edf2c5506319dd2c3b6acb14dddf5\") " pod="kube-system/kube-controller-manager-ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:07:44.140423 kubelet[2290]: I0904 18:07:44.138082 2290 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8d4edf2c5506319dd2c3b6acb14dddf5-k8s-certs\") pod \"kube-controller-manager-ci-4054-1-0-c-4d101ae770.novalocal\" (UID: \"8d4edf2c5506319dd2c3b6acb14dddf5\") " pod="kube-system/kube-controller-manager-ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:07:44.140423 kubelet[2290]: I0904 18:07:44.138411 2290 kubelet_node_status.go:73] "Attempting to register node" node="ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:07:44.140423 kubelet[2290]: E0904 18:07:44.139252 2290 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.134:6443/api/v1/nodes\": dial tcp 172.24.4.134:6443: connect: connection refused" node="ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:07:44.340041 containerd[1450]: time="2024-09-04T18:07:44.339927418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4054-1-0-c-4d101ae770.novalocal,Uid:833ef7199f15d1fe7b6fdbc50922ac6d,Namespace:kube-system,Attempt:0,}" Sep 4 18:07:44.353201 containerd[1450]: time="2024-09-04T18:07:44.353109383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4054-1-0-c-4d101ae770.novalocal,Uid:8d4edf2c5506319dd2c3b6acb14dddf5,Namespace:kube-system,Attempt:0,}" Sep 4 18:07:44.369924 containerd[1450]: time="2024-09-04T18:07:44.369637421Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-4054-1-0-c-4d101ae770.novalocal,Uid:00c0ac506a8505308f5473de6569520a,Namespace:kube-system,Attempt:0,}" Sep 4 18:07:44.439564 kubelet[2290]: E0904 18:07:44.439511 2290 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4054-1-0-c-4d101ae770.novalocal?timeout=10s\": dial tcp 172.24.4.134:6443: connect: connection refused" interval="800ms" Sep 4 18:07:44.543331 kubelet[2290]: I0904 18:07:44.543262 2290 kubelet_node_status.go:73] "Attempting to register node" node="ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:07:44.544276 kubelet[2290]: E0904 18:07:44.544225 2290 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.134:6443/api/v1/nodes\": dial tcp 172.24.4.134:6443: connect: connection refused" node="ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:07:44.910973 kubelet[2290]: W0904 18:07:44.910754 2290 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.24.4.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4054-1-0-c-4d101ae770.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.134:6443: connect: connection refused Sep 4 18:07:44.910973 kubelet[2290]: E0904 18:07:44.910847 2290 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4054-1-0-c-4d101ae770.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.134:6443: connect: connection refused Sep 4 18:07:45.033248 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount710973001.mount: Deactivated successfully. 
Sep 4 18:07:45.045996 containerd[1450]: time="2024-09-04T18:07:45.045827639Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 18:07:45.048142 containerd[1450]: time="2024-09-04T18:07:45.048048639Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 18:07:45.050413 containerd[1450]: time="2024-09-04T18:07:45.050285820Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 18:07:45.051381 containerd[1450]: time="2024-09-04T18:07:45.050955577Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 18:07:45.052346 containerd[1450]: time="2024-09-04T18:07:45.052118131Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 18:07:45.053301 containerd[1450]: time="2024-09-04T18:07:45.053186606Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 18:07:45.054212 containerd[1450]: time="2024-09-04T18:07:45.054077579Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Sep 4 18:07:45.060716 containerd[1450]: time="2024-09-04T18:07:45.060611178Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 18:07:45.065219 
containerd[1450]: time="2024-09-04T18:07:45.065148870Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 724.941905ms" Sep 4 18:07:45.072055 containerd[1450]: time="2024-09-04T18:07:45.071980016Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 718.411572ms" Sep 4 18:07:45.080178 kubelet[2290]: W0904 18:07:45.079996 2290 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.24.4.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.134:6443: connect: connection refused Sep 4 18:07:45.080178 kubelet[2290]: E0904 18:07:45.080129 2290 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.134:6443: connect: connection refused Sep 4 18:07:45.095988 containerd[1450]: time="2024-09-04T18:07:45.095520611Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 725.658858ms" Sep 4 18:07:45.238887 kubelet[2290]: W0904 18:07:45.238750 2290 reflector.go:539] 
vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.24.4.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.134:6443: connect: connection refused Sep 4 18:07:45.238887 kubelet[2290]: E0904 18:07:45.238839 2290 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.134:6443: connect: connection refused Sep 4 18:07:45.241048 kubelet[2290]: E0904 18:07:45.240979 2290 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4054-1-0-c-4d101ae770.novalocal?timeout=10s\": dial tcp 172.24.4.134:6443: connect: connection refused" interval="1.6s" Sep 4 18:07:45.348909 kubelet[2290]: I0904 18:07:45.348787 2290 kubelet_node_status.go:73] "Attempting to register node" node="ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:07:45.351699 kubelet[2290]: E0904 18:07:45.350056 2290 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.134:6443/api/v1/nodes\": dial tcp 172.24.4.134:6443: connect: connection refused" node="ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:07:45.379823 kubelet[2290]: W0904 18:07:45.379637 2290 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.24.4.134:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.134:6443: connect: connection refused Sep 4 18:07:45.379823 kubelet[2290]: E0904 18:07:45.379831 2290 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.134:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.134:6443: connect: connection refused Sep 4 
18:07:45.616031 containerd[1450]: time="2024-09-04T18:07:45.615372353Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 18:07:45.616031 containerd[1450]: time="2024-09-04T18:07:45.615458525Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 18:07:45.616031 containerd[1450]: time="2024-09-04T18:07:45.615477961Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 18:07:45.616031 containerd[1450]: time="2024-09-04T18:07:45.615574053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 18:07:45.635613 containerd[1450]: time="2024-09-04T18:07:45.635128501Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 18:07:45.635613 containerd[1450]: time="2024-09-04T18:07:45.635256811Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 18:07:45.635613 containerd[1450]: time="2024-09-04T18:07:45.635302127Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 18:07:45.636237 containerd[1450]: time="2024-09-04T18:07:45.636162843Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 18:07:45.663886 containerd[1450]: time="2024-09-04T18:07:45.663756618Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 18:07:45.664006 containerd[1450]: time="2024-09-04T18:07:45.663916869Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 18:07:45.664094 containerd[1450]: time="2024-09-04T18:07:45.664026545Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 18:07:45.664829 containerd[1450]: time="2024-09-04T18:07:45.664462013Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 18:07:45.666882 systemd[1]: Started cri-containerd-548f120cefdf0aa3759dc84ad41bca459235939fd28ac73a799b483ed5effaf2.scope - libcontainer container 548f120cefdf0aa3759dc84ad41bca459235939fd28ac73a799b483ed5effaf2. Sep 4 18:07:45.672392 systemd[1]: Started cri-containerd-084dba85d1ef605ce432827141dac13599cde1b85cc55ee5c505a1914bc59ac9.scope - libcontainer container 084dba85d1ef605ce432827141dac13599cde1b85cc55ee5c505a1914bc59ac9. Sep 4 18:07:45.705442 systemd[1]: Started cri-containerd-432a86e17ea644cf8990856fbcbebf8dcd758b49462b1c37565a1e2530f4e7d7.scope - libcontainer container 432a86e17ea644cf8990856fbcbebf8dcd758b49462b1c37565a1e2530f4e7d7. 
Sep 4 18:07:45.748966 containerd[1450]: time="2024-09-04T18:07:45.748898164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4054-1-0-c-4d101ae770.novalocal,Uid:8d4edf2c5506319dd2c3b6acb14dddf5,Namespace:kube-system,Attempt:0,} returns sandbox id \"084dba85d1ef605ce432827141dac13599cde1b85cc55ee5c505a1914bc59ac9\"" Sep 4 18:07:45.753785 containerd[1450]: time="2024-09-04T18:07:45.753748743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4054-1-0-c-4d101ae770.novalocal,Uid:833ef7199f15d1fe7b6fdbc50922ac6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"548f120cefdf0aa3759dc84ad41bca459235939fd28ac73a799b483ed5effaf2\"" Sep 4 18:07:45.757346 containerd[1450]: time="2024-09-04T18:07:45.757109013Z" level=info msg="CreateContainer within sandbox \"084dba85d1ef605ce432827141dac13599cde1b85cc55ee5c505a1914bc59ac9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 4 18:07:45.758529 containerd[1450]: time="2024-09-04T18:07:45.758445402Z" level=info msg="CreateContainer within sandbox \"548f120cefdf0aa3759dc84ad41bca459235939fd28ac73a799b483ed5effaf2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 4 18:07:45.770272 kubelet[2290]: E0904 18:07:45.770230 2290 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.134:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.134:6443: connect: connection refused Sep 4 18:07:45.783116 containerd[1450]: time="2024-09-04T18:07:45.783060744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4054-1-0-c-4d101ae770.novalocal,Uid:00c0ac506a8505308f5473de6569520a,Namespace:kube-system,Attempt:0,} returns sandbox id \"432a86e17ea644cf8990856fbcbebf8dcd758b49462b1c37565a1e2530f4e7d7\"" Sep 4 18:07:45.790050 
containerd[1450]: time="2024-09-04T18:07:45.790004062Z" level=info msg="CreateContainer within sandbox \"432a86e17ea644cf8990856fbcbebf8dcd758b49462b1c37565a1e2530f4e7d7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 4 18:07:45.815720 containerd[1450]: time="2024-09-04T18:07:45.815609633Z" level=info msg="CreateContainer within sandbox \"084dba85d1ef605ce432827141dac13599cde1b85cc55ee5c505a1914bc59ac9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1bdd043f6a9547110fe55de28e176874fdcbf6e0099fd0c4b769264752a29348\"" Sep 4 18:07:45.816956 containerd[1450]: time="2024-09-04T18:07:45.816739505Z" level=info msg="StartContainer for \"1bdd043f6a9547110fe55de28e176874fdcbf6e0099fd0c4b769264752a29348\"" Sep 4 18:07:45.822535 containerd[1450]: time="2024-09-04T18:07:45.822382231Z" level=info msg="CreateContainer within sandbox \"548f120cefdf0aa3759dc84ad41bca459235939fd28ac73a799b483ed5effaf2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6385eac6ccffbffe1a49f011fc85d3dd8cbf01bc73e706a587daada2e2f5ac9e\"" Sep 4 18:07:45.822982 containerd[1450]: time="2024-09-04T18:07:45.822926774Z" level=info msg="StartContainer for \"6385eac6ccffbffe1a49f011fc85d3dd8cbf01bc73e706a587daada2e2f5ac9e\"" Sep 4 18:07:45.838729 containerd[1450]: time="2024-09-04T18:07:45.838158605Z" level=info msg="CreateContainer within sandbox \"432a86e17ea644cf8990856fbcbebf8dcd758b49462b1c37565a1e2530f4e7d7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ce6cc0cfffb64335becc65faf4bb3ca8aa911f04dfa8c585517a213a00f9457a\"" Sep 4 18:07:45.839614 containerd[1450]: time="2024-09-04T18:07:45.839488552Z" level=info msg="StartContainer for \"ce6cc0cfffb64335becc65faf4bb3ca8aa911f04dfa8c585517a213a00f9457a\"" Sep 4 18:07:45.853885 systemd[1]: Started cri-containerd-1bdd043f6a9547110fe55de28e176874fdcbf6e0099fd0c4b769264752a29348.scope - libcontainer container 
1bdd043f6a9547110fe55de28e176874fdcbf6e0099fd0c4b769264752a29348. Sep 4 18:07:45.863835 systemd[1]: Started cri-containerd-6385eac6ccffbffe1a49f011fc85d3dd8cbf01bc73e706a587daada2e2f5ac9e.scope - libcontainer container 6385eac6ccffbffe1a49f011fc85d3dd8cbf01bc73e706a587daada2e2f5ac9e. Sep 4 18:07:45.892819 systemd[1]: Started cri-containerd-ce6cc0cfffb64335becc65faf4bb3ca8aa911f04dfa8c585517a213a00f9457a.scope - libcontainer container ce6cc0cfffb64335becc65faf4bb3ca8aa911f04dfa8c585517a213a00f9457a. Sep 4 18:07:45.953349 containerd[1450]: time="2024-09-04T18:07:45.953111971Z" level=info msg="StartContainer for \"1bdd043f6a9547110fe55de28e176874fdcbf6e0099fd0c4b769264752a29348\" returns successfully" Sep 4 18:07:45.957948 containerd[1450]: time="2024-09-04T18:07:45.957920912Z" level=info msg="StartContainer for \"6385eac6ccffbffe1a49f011fc85d3dd8cbf01bc73e706a587daada2e2f5ac9e\" returns successfully" Sep 4 18:07:45.984603 containerd[1450]: time="2024-09-04T18:07:45.984480675Z" level=info msg="StartContainer for \"ce6cc0cfffb64335becc65faf4bb3ca8aa911f04dfa8c585517a213a00f9457a\" returns successfully" Sep 4 18:07:46.317279 kubelet[2290]: E0904 18:07:46.317232 2290 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.134:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.134:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4054-1-0-c-4d101ae770.novalocal.17f21cca9efb0ff1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4054-1-0-c-4d101ae770.novalocal,UID:ci-4054-1-0-c-4d101ae770.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4054-1-0-c-4d101ae770.novalocal,},FirstTimestamp:2024-09-04 18:07:43.812546545 +0000 UTC m=+1.021034967,LastTimestamp:2024-09-04 18:07:43.812546545 +0000 UTC m=+1.021034967,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 
+0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4054-1-0-c-4d101ae770.novalocal,}" Sep 4 18:07:46.957253 kubelet[2290]: I0904 18:07:46.955873 2290 kubelet_node_status.go:73] "Attempting to register node" node="ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:07:48.239610 kubelet[2290]: E0904 18:07:48.239537 2290 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4054-1-0-c-4d101ae770.novalocal\" not found" node="ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:07:48.285511 kubelet[2290]: I0904 18:07:48.285459 2290 kubelet_node_status.go:76] "Successfully registered node" node="ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:07:48.777449 kubelet[2290]: I0904 18:07:48.777235 2290 apiserver.go:52] "Watching apiserver" Sep 4 18:07:48.836024 kubelet[2290]: I0904 18:07:48.835864 2290 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Sep 4 18:07:48.988857 kubelet[2290]: E0904 18:07:48.988768 2290 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4054-1-0-c-4d101ae770.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:07:51.426767 systemd[1]: Reloading requested from client PID 2563 ('systemctl') (unit session-11.scope)... Sep 4 18:07:51.426786 systemd[1]: Reloading... Sep 4 18:07:51.552716 zram_generator::config[2615]: No configuration found. Sep 4 18:07:51.689480 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 18:07:51.793039 systemd[1]: Reloading finished in 365 ms. Sep 4 18:07:51.844602 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 18:07:51.854119 systemd[1]: kubelet.service: Deactivated successfully. 
Sep 4 18:07:51.854349 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 18:07:51.854401 systemd[1]: kubelet.service: Consumed 1.514s CPU time, 112.8M memory peak, 0B memory swap peak. Sep 4 18:07:51.866630 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 18:07:52.251103 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 18:07:52.253170 (kubelet)[2664]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 18:07:52.460796 kubelet[2664]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 18:07:52.460796 kubelet[2664]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 4 18:07:52.460796 kubelet[2664]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 4 18:07:52.461865 kubelet[2664]: I0904 18:07:52.460794 2664 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 18:07:52.471392 kubelet[2664]: I0904 18:07:52.471347 2664 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Sep 4 18:07:52.471392 kubelet[2664]: I0904 18:07:52.471383 2664 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 18:07:52.471699 kubelet[2664]: I0904 18:07:52.471679 2664 server.go:919] "Client rotation is on, will bootstrap in background" Sep 4 18:07:52.473720 kubelet[2664]: I0904 18:07:52.473693 2664 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 4 18:07:52.482334 kubelet[2664]: I0904 18:07:52.482300 2664 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 18:07:52.490238 kubelet[2664]: I0904 18:07:52.489205 2664 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 4 18:07:52.490238 kubelet[2664]: I0904 18:07:52.489404 2664 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 18:07:52.490238 kubelet[2664]: I0904 18:07:52.489598 2664 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Sep 4 18:07:52.490238 kubelet[2664]: I0904 18:07:52.489623 2664 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 18:07:52.490238 kubelet[2664]: I0904 18:07:52.489634 2664 container_manager_linux.go:301] "Creating device plugin manager" Sep 4 18:07:52.490238 kubelet[2664]: I0904 
18:07:52.489686 2664 state_mem.go:36] "Initialized new in-memory state store" Sep 4 18:07:52.490582 kubelet[2664]: I0904 18:07:52.489776 2664 kubelet.go:396] "Attempting to sync node with API server" Sep 4 18:07:52.490582 kubelet[2664]: I0904 18:07:52.489791 2664 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 18:07:52.490582 kubelet[2664]: I0904 18:07:52.489820 2664 kubelet.go:312] "Adding apiserver pod source" Sep 4 18:07:52.490582 kubelet[2664]: I0904 18:07:52.489833 2664 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 18:07:52.496964 kubelet[2664]: I0904 18:07:52.495697 2664 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.20" apiVersion="v1" Sep 4 18:07:52.496964 kubelet[2664]: I0904 18:07:52.495931 2664 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 4 18:07:52.498380 kubelet[2664]: I0904 18:07:52.497190 2664 server.go:1256] "Started kubelet" Sep 4 18:07:52.500482 kubelet[2664]: I0904 18:07:52.499509 2664 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 18:07:52.504065 kubelet[2664]: I0904 18:07:52.502937 2664 volume_manager.go:291] "Starting Kubelet Volume Manager" Sep 4 18:07:52.505107 kubelet[2664]: I0904 18:07:52.504347 2664 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Sep 4 18:07:52.505107 kubelet[2664]: I0904 18:07:52.504521 2664 reconciler_new.go:29] "Reconciler: start to sync state" Sep 4 18:07:52.513689 kubelet[2664]: I0904 18:07:52.512021 2664 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 18:07:52.521562 kubelet[2664]: I0904 18:07:52.519432 2664 server.go:461] "Adding debug handlers to kubelet server" Sep 4 18:07:52.525307 kubelet[2664]: I0904 18:07:52.525277 2664 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 18:07:52.527353 kubelet[2664]: I0904 
18:07:52.527194 2664 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 18:07:52.530057 kubelet[2664]: I0904 18:07:52.529833 2664 factory.go:221] Registration of the systemd container factory successfully Sep 4 18:07:52.530057 kubelet[2664]: I0904 18:07:52.529963 2664 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 18:07:52.544851 kubelet[2664]: I0904 18:07:52.544819 2664 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 18:07:52.550515 kubelet[2664]: I0904 18:07:52.549414 2664 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 4 18:07:52.550515 kubelet[2664]: I0904 18:07:52.549454 2664 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 4 18:07:52.550515 kubelet[2664]: I0904 18:07:52.549478 2664 kubelet.go:2329] "Starting kubelet main sync loop" Sep 4 18:07:52.550515 kubelet[2664]: E0904 18:07:52.549539 2664 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 18:07:52.562949 kubelet[2664]: I0904 18:07:52.562907 2664 factory.go:221] Registration of the containerd container factory successfully Sep 4 18:07:52.572334 kubelet[2664]: E0904 18:07:52.572298 2664 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 18:07:52.615105 kubelet[2664]: I0904 18:07:52.615067 2664 kubelet_node_status.go:73] "Attempting to register node" node="ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:07:52.630533 kubelet[2664]: I0904 18:07:52.630488 2664 kubelet_node_status.go:112] "Node was previously registered" node="ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:07:52.630687 kubelet[2664]: I0904 18:07:52.630573 2664 kubelet_node_status.go:76] "Successfully registered node" node="ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:07:52.643338 kubelet[2664]: I0904 18:07:52.643304 2664 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 4 18:07:52.643338 kubelet[2664]: I0904 18:07:52.643337 2664 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 4 18:07:52.643338 kubelet[2664]: I0904 18:07:52.643355 2664 state_mem.go:36] "Initialized new in-memory state store" Sep 4 18:07:52.643732 kubelet[2664]: I0904 18:07:52.643536 2664 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 4 18:07:52.643732 kubelet[2664]: I0904 18:07:52.643566 2664 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 4 18:07:52.643732 kubelet[2664]: I0904 18:07:52.643575 2664 policy_none.go:49] "None policy: Start" Sep 4 18:07:52.646943 kubelet[2664]: I0904 18:07:52.646915 2664 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 4 18:07:52.647030 kubelet[2664]: I0904 18:07:52.646961 2664 state_mem.go:35] "Initializing new in-memory state store" Sep 4 18:07:52.648065 kubelet[2664]: I0904 18:07:52.647795 2664 state_mem.go:75] "Updated machine memory state" Sep 4 18:07:52.650542 kubelet[2664]: E0904 18:07:52.650501 2664 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 4 18:07:52.666542 kubelet[2664]: I0904 18:07:52.666498 2664 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" 
err="checkpoint is not found" Sep 4 18:07:52.672075 kubelet[2664]: I0904 18:07:52.669113 2664 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 18:07:52.850996 kubelet[2664]: I0904 18:07:52.850807 2664 topology_manager.go:215] "Topology Admit Handler" podUID="833ef7199f15d1fe7b6fdbc50922ac6d" podNamespace="kube-system" podName="kube-apiserver-ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:07:52.850996 kubelet[2664]: I0904 18:07:52.850934 2664 topology_manager.go:215] "Topology Admit Handler" podUID="8d4edf2c5506319dd2c3b6acb14dddf5" podNamespace="kube-system" podName="kube-controller-manager-ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:07:52.850996 kubelet[2664]: I0904 18:07:52.850976 2664 topology_manager.go:215] "Topology Admit Handler" podUID="00c0ac506a8505308f5473de6569520a" podNamespace="kube-system" podName="kube-scheduler-ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:07:52.862979 kubelet[2664]: W0904 18:07:52.862040 2664 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 4 18:07:52.863918 kubelet[2664]: W0904 18:07:52.863488 2664 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 4 18:07:52.864449 kubelet[2664]: W0904 18:07:52.864267 2664 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 4 18:07:52.906553 kubelet[2664]: I0904 18:07:52.906522 2664 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8d4edf2c5506319dd2c3b6acb14dddf5-ca-certs\") pod \"kube-controller-manager-ci-4054-1-0-c-4d101ae770.novalocal\" (UID: \"8d4edf2c5506319dd2c3b6acb14dddf5\") " 
pod="kube-system/kube-controller-manager-ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:07:52.906995 kubelet[2664]: I0904 18:07:52.906780 2664 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/833ef7199f15d1fe7b6fdbc50922ac6d-ca-certs\") pod \"kube-apiserver-ci-4054-1-0-c-4d101ae770.novalocal\" (UID: \"833ef7199f15d1fe7b6fdbc50922ac6d\") " pod="kube-system/kube-apiserver-ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:07:52.906995 kubelet[2664]: I0904 18:07:52.906841 2664 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/833ef7199f15d1fe7b6fdbc50922ac6d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4054-1-0-c-4d101ae770.novalocal\" (UID: \"833ef7199f15d1fe7b6fdbc50922ac6d\") " pod="kube-system/kube-apiserver-ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:07:52.906995 kubelet[2664]: I0904 18:07:52.906869 2664 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8d4edf2c5506319dd2c3b6acb14dddf5-k8s-certs\") pod \"kube-controller-manager-ci-4054-1-0-c-4d101ae770.novalocal\" (UID: \"8d4edf2c5506319dd2c3b6acb14dddf5\") " pod="kube-system/kube-controller-manager-ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:07:52.906995 kubelet[2664]: I0904 18:07:52.906896 2664 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d4edf2c5506319dd2c3b6acb14dddf5-kubeconfig\") pod \"kube-controller-manager-ci-4054-1-0-c-4d101ae770.novalocal\" (UID: \"8d4edf2c5506319dd2c3b6acb14dddf5\") " pod="kube-system/kube-controller-manager-ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:07:52.907742 kubelet[2664]: I0904 18:07:52.906946 2664 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8d4edf2c5506319dd2c3b6acb14dddf5-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4054-1-0-c-4d101ae770.novalocal\" (UID: \"8d4edf2c5506319dd2c3b6acb14dddf5\") " pod="kube-system/kube-controller-manager-ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:07:52.907742 kubelet[2664]: I0904 18:07:52.906971 2664 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/00c0ac506a8505308f5473de6569520a-kubeconfig\") pod \"kube-scheduler-ci-4054-1-0-c-4d101ae770.novalocal\" (UID: \"00c0ac506a8505308f5473de6569520a\") " pod="kube-system/kube-scheduler-ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:07:52.907742 kubelet[2664]: I0904 18:07:52.907278 2664 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/833ef7199f15d1fe7b6fdbc50922ac6d-k8s-certs\") pod \"kube-apiserver-ci-4054-1-0-c-4d101ae770.novalocal\" (UID: \"833ef7199f15d1fe7b6fdbc50922ac6d\") " pod="kube-system/kube-apiserver-ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:07:52.908014 kubelet[2664]: I0904 18:07:52.907848 2664 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8d4edf2c5506319dd2c3b6acb14dddf5-flexvolume-dir\") pod \"kube-controller-manager-ci-4054-1-0-c-4d101ae770.novalocal\" (UID: \"8d4edf2c5506319dd2c3b6acb14dddf5\") " pod="kube-system/kube-controller-manager-ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:07:53.495344 kubelet[2664]: I0904 18:07:53.495277 2664 apiserver.go:52] "Watching apiserver" Sep 4 18:07:53.505598 kubelet[2664]: I0904 18:07:53.505556 2664 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Sep 4 18:07:53.625338 
kubelet[2664]: W0904 18:07:53.625243 2664 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 4 18:07:53.625591 kubelet[2664]: E0904 18:07:53.625377 2664 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4054-1-0-c-4d101ae770.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:07:53.663307 kubelet[2664]: I0904 18:07:53.663270 2664 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4054-1-0-c-4d101ae770.novalocal" podStartSLOduration=1.663226434 podStartE2EDuration="1.663226434s" podCreationTimestamp="2024-09-04 18:07:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 18:07:53.661222133 +0000 UTC m=+1.397007009" watchObservedRunningTime="2024-09-04 18:07:53.663226434 +0000 UTC m=+1.399011280" Sep 4 18:07:53.682710 kubelet[2664]: I0904 18:07:53.682451 2664 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4054-1-0-c-4d101ae770.novalocal" podStartSLOduration=1.68240546 podStartE2EDuration="1.68240546s" podCreationTimestamp="2024-09-04 18:07:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 18:07:53.681902677 +0000 UTC m=+1.417687543" watchObservedRunningTime="2024-09-04 18:07:53.68240546 +0000 UTC m=+1.418190306" Sep 4 18:07:53.682710 kubelet[2664]: I0904 18:07:53.682574 2664 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4054-1-0-c-4d101ae770.novalocal" podStartSLOduration=1.682552948 podStartE2EDuration="1.682552948s" podCreationTimestamp="2024-09-04 18:07:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 18:07:53.673042969 +0000 UTC m=+1.408827815" watchObservedRunningTime="2024-09-04 18:07:53.682552948 +0000 UTC m=+1.418337794" Sep 4 18:07:58.408840 sudo[1709]: pam_unix(sudo:session): session closed for user root Sep 4 18:07:58.603493 sshd[1706]: pam_unix(sshd:session): session closed for user core Sep 4 18:07:58.606985 systemd[1]: sshd@8-172.24.4.134:22-172.24.4.1:54098.service: Deactivated successfully. Sep 4 18:07:58.611335 systemd[1]: session-11.scope: Deactivated successfully. Sep 4 18:07:58.612275 systemd[1]: session-11.scope: Consumed 7.388s CPU time, 137.9M memory peak, 0B memory swap peak. Sep 4 18:07:58.614591 systemd-logind[1430]: Session 11 logged out. Waiting for processes to exit. Sep 4 18:07:58.617097 systemd-logind[1430]: Removed session 11. Sep 4 18:08:04.187489 kubelet[2664]: I0904 18:08:04.187302 2664 topology_manager.go:215] "Topology Admit Handler" podUID="76e696c3-a720-41d6-8ca9-9d1ae11e1d27" podNamespace="kube-system" podName="kube-proxy-2s5t2" Sep 4 18:08:04.200232 systemd[1]: Created slice kubepods-besteffort-pod76e696c3_a720_41d6_8ca9_9d1ae11e1d27.slice - libcontainer container kubepods-besteffort-pod76e696c3_a720_41d6_8ca9_9d1ae11e1d27.slice. Sep 4 18:08:04.223710 kubelet[2664]: I0904 18:08:04.223647 2664 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 4 18:08:04.224147 containerd[1450]: time="2024-09-04T18:08:04.224085561Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Sep 4 18:08:04.224899 kubelet[2664]: I0904 18:08:04.224310 2664 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 4 18:08:04.288833 kubelet[2664]: I0904 18:08:04.288624 2664 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/76e696c3-a720-41d6-8ca9-9d1ae11e1d27-lib-modules\") pod \"kube-proxy-2s5t2\" (UID: \"76e696c3-a720-41d6-8ca9-9d1ae11e1d27\") " pod="kube-system/kube-proxy-2s5t2" Sep 4 18:08:04.288833 kubelet[2664]: I0904 18:08:04.288690 2664 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/76e696c3-a720-41d6-8ca9-9d1ae11e1d27-kube-proxy\") pod \"kube-proxy-2s5t2\" (UID: \"76e696c3-a720-41d6-8ca9-9d1ae11e1d27\") " pod="kube-system/kube-proxy-2s5t2" Sep 4 18:08:04.288833 kubelet[2664]: I0904 18:08:04.288721 2664 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/76e696c3-a720-41d6-8ca9-9d1ae11e1d27-xtables-lock\") pod \"kube-proxy-2s5t2\" (UID: \"76e696c3-a720-41d6-8ca9-9d1ae11e1d27\") " pod="kube-system/kube-proxy-2s5t2" Sep 4 18:08:04.288833 kubelet[2664]: I0904 18:08:04.288750 2664 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzf7g\" (UniqueName: \"kubernetes.io/projected/76e696c3-a720-41d6-8ca9-9d1ae11e1d27-kube-api-access-zzf7g\") pod \"kube-proxy-2s5t2\" (UID: \"76e696c3-a720-41d6-8ca9-9d1ae11e1d27\") " pod="kube-system/kube-proxy-2s5t2" Sep 4 18:08:04.513612 containerd[1450]: time="2024-09-04T18:08:04.512997608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2s5t2,Uid:76e696c3-a720-41d6-8ca9-9d1ae11e1d27,Namespace:kube-system,Attempt:0,}" Sep 4 18:08:04.587383 containerd[1450]: time="2024-09-04T18:08:04.587214853Z" level=info 
msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 18:08:04.587566 containerd[1450]: time="2024-09-04T18:08:04.587484518Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 18:08:04.587680 containerd[1450]: time="2024-09-04T18:08:04.587587211Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 18:08:04.588469 containerd[1450]: time="2024-09-04T18:08:04.588145478Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 18:08:04.611902 systemd[1]: Started cri-containerd-3b6100da21cb23222ea0385cfa2ca04f5e3840c6503c5fdc1a2a2620cca5962e.scope - libcontainer container 3b6100da21cb23222ea0385cfa2ca04f5e3840c6503c5fdc1a2a2620cca5962e. Sep 4 18:08:04.665000 containerd[1450]: time="2024-09-04T18:08:04.664940419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2s5t2,Uid:76e696c3-a720-41d6-8ca9-9d1ae11e1d27,Namespace:kube-system,Attempt:0,} returns sandbox id \"3b6100da21cb23222ea0385cfa2ca04f5e3840c6503c5fdc1a2a2620cca5962e\"" Sep 4 18:08:04.671334 containerd[1450]: time="2024-09-04T18:08:04.671269222Z" level=info msg="CreateContainer within sandbox \"3b6100da21cb23222ea0385cfa2ca04f5e3840c6503c5fdc1a2a2620cca5962e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 4 18:08:04.704508 containerd[1450]: time="2024-09-04T18:08:04.704438073Z" level=info msg="CreateContainer within sandbox \"3b6100da21cb23222ea0385cfa2ca04f5e3840c6503c5fdc1a2a2620cca5962e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3723e671a94c3207b4b6b776c44ccf49549ee55ab589c6936a656208b84c49b7\"" Sep 4 18:08:04.705439 containerd[1450]: time="2024-09-04T18:08:04.705347730Z" level=info msg="StartContainer for 
\"3723e671a94c3207b4b6b776c44ccf49549ee55ab589c6936a656208b84c49b7\"" Sep 4 18:08:04.738825 systemd[1]: Started cri-containerd-3723e671a94c3207b4b6b776c44ccf49549ee55ab589c6936a656208b84c49b7.scope - libcontainer container 3723e671a94c3207b4b6b776c44ccf49549ee55ab589c6936a656208b84c49b7. Sep 4 18:08:04.783771 containerd[1450]: time="2024-09-04T18:08:04.783483856Z" level=info msg="StartContainer for \"3723e671a94c3207b4b6b776c44ccf49549ee55ab589c6936a656208b84c49b7\" returns successfully" Sep 4 18:08:04.829863 kubelet[2664]: I0904 18:08:04.829799 2664 topology_manager.go:215] "Topology Admit Handler" podUID="212baea3-18f7-4c76-9f64-ea62984aab08" podNamespace="tigera-operator" podName="tigera-operator-5d56685c77-zb6cr" Sep 4 18:08:04.839597 systemd[1]: Created slice kubepods-besteffort-pod212baea3_18f7_4c76_9f64_ea62984aab08.slice - libcontainer container kubepods-besteffort-pod212baea3_18f7_4c76_9f64_ea62984aab08.slice. Sep 4 18:08:04.993083 kubelet[2664]: I0904 18:08:04.993024 2664 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/212baea3-18f7-4c76-9f64-ea62984aab08-var-lib-calico\") pod \"tigera-operator-5d56685c77-zb6cr\" (UID: \"212baea3-18f7-4c76-9f64-ea62984aab08\") " pod="tigera-operator/tigera-operator-5d56685c77-zb6cr" Sep 4 18:08:04.993083 kubelet[2664]: I0904 18:08:04.993100 2664 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bswb\" (UniqueName: \"kubernetes.io/projected/212baea3-18f7-4c76-9f64-ea62984aab08-kube-api-access-5bswb\") pod \"tigera-operator-5d56685c77-zb6cr\" (UID: \"212baea3-18f7-4c76-9f64-ea62984aab08\") " pod="tigera-operator/tigera-operator-5d56685c77-zb6cr" Sep 4 18:08:05.145857 containerd[1450]: time="2024-09-04T18:08:05.144935112Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:tigera-operator-5d56685c77-zb6cr,Uid:212baea3-18f7-4c76-9f64-ea62984aab08,Namespace:tigera-operator,Attempt:0,}" Sep 4 18:08:05.202280 containerd[1450]: time="2024-09-04T18:08:05.201990155Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 18:08:05.202446 containerd[1450]: time="2024-09-04T18:08:05.202273115Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 18:08:05.202446 containerd[1450]: time="2024-09-04T18:08:05.202294836Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 18:08:05.203058 containerd[1450]: time="2024-09-04T18:08:05.202976565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 18:08:05.227949 systemd[1]: Started cri-containerd-3b86db30ca56dbd2846e497f36effb5a7faa7b9d7bf43ef2e6146a46e90df30b.scope - libcontainer container 3b86db30ca56dbd2846e497f36effb5a7faa7b9d7bf43ef2e6146a46e90df30b. Sep 4 18:08:05.319807 containerd[1450]: time="2024-09-04T18:08:05.319268203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-zb6cr,Uid:212baea3-18f7-4c76-9f64-ea62984aab08,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"3b86db30ca56dbd2846e497f36effb5a7faa7b9d7bf43ef2e6146a46e90df30b\"" Sep 4 18:08:05.324806 containerd[1450]: time="2024-09-04T18:08:05.324683994Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\"" Sep 4 18:08:05.421235 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3829555773.mount: Deactivated successfully. 
Sep 4 18:08:05.645865 kubelet[2664]: I0904 18:08:05.645158 2664 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-2s5t2" podStartSLOduration=1.6451086209999999 podStartE2EDuration="1.645108621s" podCreationTimestamp="2024-09-04 18:08:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 18:08:05.645024333 +0000 UTC m=+13.380809189" watchObservedRunningTime="2024-09-04 18:08:05.645108621 +0000 UTC m=+13.380893477" Sep 4 18:08:07.035323 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3474309905.mount: Deactivated successfully. Sep 4 18:08:08.334768 containerd[1450]: time="2024-09-04T18:08:08.334376220Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 18:08:08.336162 containerd[1450]: time="2024-09-04T18:08:08.335973276Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.3: active requests=0, bytes read=22136565" Sep 4 18:08:08.336162 containerd[1450]: time="2024-09-04T18:08:08.336112537Z" level=info msg="ImageCreate event name:\"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 18:08:08.338962 containerd[1450]: time="2024-09-04T18:08:08.338914855Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 18:08:08.340066 containerd[1450]: time="2024-09-04T18:08:08.339904942Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.3\" with image id \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\", repo tag \"quay.io/tigera/operator:v1.34.3\", repo digest 
\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\", size \"22130728\" in 3.015091465s" Sep 4 18:08:08.340066 containerd[1450]: time="2024-09-04T18:08:08.339939056Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\" returns image reference \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\"" Sep 4 18:08:08.342946 containerd[1450]: time="2024-09-04T18:08:08.342903187Z" level=info msg="CreateContainer within sandbox \"3b86db30ca56dbd2846e497f36effb5a7faa7b9d7bf43ef2e6146a46e90df30b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 4 18:08:08.368464 containerd[1450]: time="2024-09-04T18:08:08.368348152Z" level=info msg="CreateContainer within sandbox \"3b86db30ca56dbd2846e497f36effb5a7faa7b9d7bf43ef2e6146a46e90df30b\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"4ab6e39ffc1a98083ca11debd4a60ef76f1d22d068acab1c7774ed51ec901d42\"" Sep 4 18:08:08.370012 containerd[1450]: time="2024-09-04T18:08:08.369384977Z" level=info msg="StartContainer for \"4ab6e39ffc1a98083ca11debd4a60ef76f1d22d068acab1c7774ed51ec901d42\"" Sep 4 18:08:08.405874 systemd[1]: Started cri-containerd-4ab6e39ffc1a98083ca11debd4a60ef76f1d22d068acab1c7774ed51ec901d42.scope - libcontainer container 4ab6e39ffc1a98083ca11debd4a60ef76f1d22d068acab1c7774ed51ec901d42. 
Sep 4 18:08:08.445434 containerd[1450]: time="2024-09-04T18:08:08.445390975Z" level=info msg="StartContainer for \"4ab6e39ffc1a98083ca11debd4a60ef76f1d22d068acab1c7774ed51ec901d42\" returns successfully" Sep 4 18:08:11.667151 kubelet[2664]: I0904 18:08:11.667094 2664 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5d56685c77-zb6cr" podStartSLOduration=4.648407993 podStartE2EDuration="7.666886232s" podCreationTimestamp="2024-09-04 18:08:04 +0000 UTC" firstStartedPulling="2024-09-04 18:08:05.321922473 +0000 UTC m=+13.057707319" lastFinishedPulling="2024-09-04 18:08:08.340400702 +0000 UTC m=+16.076185558" observedRunningTime="2024-09-04 18:08:08.66523707 +0000 UTC m=+16.401021966" watchObservedRunningTime="2024-09-04 18:08:11.666886232 +0000 UTC m=+19.402671088" Sep 4 18:08:11.668495 kubelet[2664]: I0904 18:08:11.667928 2664 topology_manager.go:215] "Topology Admit Handler" podUID="98fecc0e-e415-4353-9138-4f174ad352a6" podNamespace="calico-system" podName="calico-typha-69c86786c6-6tr7g" Sep 4 18:08:11.687140 systemd[1]: Created slice kubepods-besteffort-pod98fecc0e_e415_4353_9138_4f174ad352a6.slice - libcontainer container kubepods-besteffort-pod98fecc0e_e415_4353_9138_4f174ad352a6.slice. 
Sep 4 18:08:11.742040 kubelet[2664]: I0904 18:08:11.741831 2664 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwgnn\" (UniqueName: \"kubernetes.io/projected/98fecc0e-e415-4353-9138-4f174ad352a6-kube-api-access-lwgnn\") pod \"calico-typha-69c86786c6-6tr7g\" (UID: \"98fecc0e-e415-4353-9138-4f174ad352a6\") " pod="calico-system/calico-typha-69c86786c6-6tr7g" Sep 4 18:08:11.742040 kubelet[2664]: I0904 18:08:11.741893 2664 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/98fecc0e-e415-4353-9138-4f174ad352a6-typha-certs\") pod \"calico-typha-69c86786c6-6tr7g\" (UID: \"98fecc0e-e415-4353-9138-4f174ad352a6\") " pod="calico-system/calico-typha-69c86786c6-6tr7g" Sep 4 18:08:11.742040 kubelet[2664]: I0904 18:08:11.741923 2664 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/98fecc0e-e415-4353-9138-4f174ad352a6-tigera-ca-bundle\") pod \"calico-typha-69c86786c6-6tr7g\" (UID: \"98fecc0e-e415-4353-9138-4f174ad352a6\") " pod="calico-system/calico-typha-69c86786c6-6tr7g" Sep 4 18:08:11.805999 kubelet[2664]: I0904 18:08:11.805422 2664 topology_manager.go:215] "Topology Admit Handler" podUID="892927c9-009d-44e6-a726-d841ed277a99" podNamespace="calico-system" podName="calico-node-xccrm" Sep 4 18:08:11.828474 systemd[1]: Created slice kubepods-besteffort-pod892927c9_009d_44e6_a726_d841ed277a99.slice - libcontainer container kubepods-besteffort-pod892927c9_009d_44e6_a726_d841ed277a99.slice. 
Sep 4 18:08:11.944798 kubelet[2664]: I0904 18:08:11.942117 2664 topology_manager.go:215] "Topology Admit Handler" podUID="cd6b6d64-badf-4e22-9ca4-6086c67f1ef2" podNamespace="calico-system" podName="csi-node-driver-7r577" Sep 4 18:08:11.944798 kubelet[2664]: E0904 18:08:11.942437 2664 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7r577" podUID="cd6b6d64-badf-4e22-9ca4-6086c67f1ef2" Sep 4 18:08:11.945373 kubelet[2664]: I0904 18:08:11.945324 2664 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/892927c9-009d-44e6-a726-d841ed277a99-cni-log-dir\") pod \"calico-node-xccrm\" (UID: \"892927c9-009d-44e6-a726-d841ed277a99\") " pod="calico-system/calico-node-xccrm" Sep 4 18:08:11.945528 kubelet[2664]: I0904 18:08:11.945450 2664 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/892927c9-009d-44e6-a726-d841ed277a99-var-run-calico\") pod \"calico-node-xccrm\" (UID: \"892927c9-009d-44e6-a726-d841ed277a99\") " pod="calico-system/calico-node-xccrm" Sep 4 18:08:11.945756 kubelet[2664]: I0904 18:08:11.945714 2664 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/892927c9-009d-44e6-a726-d841ed277a99-tigera-ca-bundle\") pod \"calico-node-xccrm\" (UID: \"892927c9-009d-44e6-a726-d841ed277a99\") " pod="calico-system/calico-node-xccrm" Sep 4 18:08:11.945836 kubelet[2664]: I0904 18:08:11.945818 2664 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: 
\"kubernetes.io/host-path/892927c9-009d-44e6-a726-d841ed277a99-cni-net-dir\") pod \"calico-node-xccrm\" (UID: \"892927c9-009d-44e6-a726-d841ed277a99\") " pod="calico-system/calico-node-xccrm" Sep 4 18:08:11.946019 kubelet[2664]: I0904 18:08:11.945884 2664 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/892927c9-009d-44e6-a726-d841ed277a99-xtables-lock\") pod \"calico-node-xccrm\" (UID: \"892927c9-009d-44e6-a726-d841ed277a99\") " pod="calico-system/calico-node-xccrm" Sep 4 18:08:11.946019 kubelet[2664]: I0904 18:08:11.945957 2664 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4skmr\" (UniqueName: \"kubernetes.io/projected/892927c9-009d-44e6-a726-d841ed277a99-kube-api-access-4skmr\") pod \"calico-node-xccrm\" (UID: \"892927c9-009d-44e6-a726-d841ed277a99\") " pod="calico-system/calico-node-xccrm" Sep 4 18:08:11.946120 kubelet[2664]: I0904 18:08:11.946087 2664 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/892927c9-009d-44e6-a726-d841ed277a99-var-lib-calico\") pod \"calico-node-xccrm\" (UID: \"892927c9-009d-44e6-a726-d841ed277a99\") " pod="calico-system/calico-node-xccrm" Sep 4 18:08:11.946269 kubelet[2664]: I0904 18:08:11.946167 2664 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/892927c9-009d-44e6-a726-d841ed277a99-policysync\") pod \"calico-node-xccrm\" (UID: \"892927c9-009d-44e6-a726-d841ed277a99\") " pod="calico-system/calico-node-xccrm" Sep 4 18:08:11.946269 kubelet[2664]: I0904 18:08:11.946253 2664 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/892927c9-009d-44e6-a726-d841ed277a99-node-certs\") 
pod \"calico-node-xccrm\" (UID: \"892927c9-009d-44e6-a726-d841ed277a99\") " pod="calico-system/calico-node-xccrm" Sep 4 18:08:11.946408 kubelet[2664]: I0904 18:08:11.946374 2664 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/892927c9-009d-44e6-a726-d841ed277a99-flexvol-driver-host\") pod \"calico-node-xccrm\" (UID: \"892927c9-009d-44e6-a726-d841ed277a99\") " pod="calico-system/calico-node-xccrm" Sep 4 18:08:11.946519 kubelet[2664]: I0904 18:08:11.946449 2664 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/892927c9-009d-44e6-a726-d841ed277a99-lib-modules\") pod \"calico-node-xccrm\" (UID: \"892927c9-009d-44e6-a726-d841ed277a99\") " pod="calico-system/calico-node-xccrm" Sep 4 18:08:11.946519 kubelet[2664]: I0904 18:08:11.946511 2664 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/892927c9-009d-44e6-a726-d841ed277a99-cni-bin-dir\") pod \"calico-node-xccrm\" (UID: \"892927c9-009d-44e6-a726-d841ed277a99\") " pod="calico-system/calico-node-xccrm" Sep 4 18:08:11.995134 containerd[1450]: time="2024-09-04T18:08:11.995058288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-69c86786c6-6tr7g,Uid:98fecc0e-e415-4353-9138-4f174ad352a6,Namespace:calico-system,Attempt:0,}" Sep 4 18:08:12.048074 kubelet[2664]: I0904 18:08:12.047172 2664 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/cd6b6d64-badf-4e22-9ca4-6086c67f1ef2-varrun\") pod \"csi-node-driver-7r577\" (UID: \"cd6b6d64-badf-4e22-9ca4-6086c67f1ef2\") " pod="calico-system/csi-node-driver-7r577" Sep 4 18:08:12.048074 kubelet[2664]: I0904 18:08:12.047302 2664 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/cd6b6d64-badf-4e22-9ca4-6086c67f1ef2-socket-dir\") pod \"csi-node-driver-7r577\" (UID: \"cd6b6d64-badf-4e22-9ca4-6086c67f1ef2\") " pod="calico-system/csi-node-driver-7r577" Sep 4 18:08:12.048074 kubelet[2664]: I0904 18:08:12.047481 2664 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dghds\" (UniqueName: \"kubernetes.io/projected/cd6b6d64-badf-4e22-9ca4-6086c67f1ef2-kube-api-access-dghds\") pod \"csi-node-driver-7r577\" (UID: \"cd6b6d64-badf-4e22-9ca4-6086c67f1ef2\") " pod="calico-system/csi-node-driver-7r577" Sep 4 18:08:12.048074 kubelet[2664]: I0904 18:08:12.047684 2664 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/cd6b6d64-badf-4e22-9ca4-6086c67f1ef2-registration-dir\") pod \"csi-node-driver-7r577\" (UID: \"cd6b6d64-badf-4e22-9ca4-6086c67f1ef2\") " pod="calico-system/csi-node-driver-7r577" Sep 4 18:08:12.052911 kubelet[2664]: I0904 18:08:12.052280 2664 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cd6b6d64-badf-4e22-9ca4-6086c67f1ef2-kubelet-dir\") pod \"csi-node-driver-7r577\" (UID: \"cd6b6d64-badf-4e22-9ca4-6086c67f1ef2\") " pod="calico-system/csi-node-driver-7r577" Sep 4 18:08:12.083629 containerd[1450]: time="2024-09-04T18:08:12.076136613Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 18:08:12.083629 containerd[1450]: time="2024-09-04T18:08:12.082106672Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 18:08:12.083629 containerd[1450]: time="2024-09-04T18:08:12.082131088Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 18:08:12.083629 containerd[1450]: time="2024-09-04T18:08:12.082487727Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 18:08:12.090426 kubelet[2664]: E0904 18:08:12.090399 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 18:08:12.091728 kubelet[2664]: W0904 18:08:12.091261 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 18:08:12.091728 kubelet[2664]: E0904 18:08:12.091299 2664 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 18:08:12.121310 systemd[1]: Started cri-containerd-413a4361de56e5c8e8eb82020456ebd8d08e243054bbbd08846b5c83b05c9cb1.scope - libcontainer container 413a4361de56e5c8e8eb82020456ebd8d08e243054bbbd08846b5c83b05c9cb1. 
Sep 4 18:08:12.135959 containerd[1450]: time="2024-09-04T18:08:12.135811957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xccrm,Uid:892927c9-009d-44e6-a726-d841ed277a99,Namespace:calico-system,Attempt:0,}" Sep 4 18:08:12.153649 kubelet[2664]: E0904 18:08:12.153579 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 18:08:12.153649 kubelet[2664]: W0904 18:08:12.153635 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 18:08:12.153649 kubelet[2664]: E0904 18:08:12.153690 2664 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 18:08:12.155550 kubelet[2664]: E0904 18:08:12.155487 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 18:08:12.155550 kubelet[2664]: W0904 18:08:12.155510 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 18:08:12.155550 kubelet[2664]: E0904 18:08:12.155540 2664 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 18:08:12.156399 kubelet[2664]: E0904 18:08:12.156334 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 18:08:12.156399 kubelet[2664]: W0904 18:08:12.156351 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 18:08:12.156946 kubelet[2664]: E0904 18:08:12.156756 2664 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 18:08:12.157281 kubelet[2664]: E0904 18:08:12.157230 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 18:08:12.157281 kubelet[2664]: W0904 18:08:12.157247 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 18:08:12.158150 kubelet[2664]: E0904 18:08:12.157535 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 18:08:12.158150 kubelet[2664]: W0904 18:08:12.157549 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 18:08:12.158150 kubelet[2664]: E0904 18:08:12.157883 2664 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 18:08:12.158150 kubelet[2664]: E0904 18:08:12.157941 2664 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 18:08:12.159085 kubelet[2664]: E0904 18:08:12.158490 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 18:08:12.159085 kubelet[2664]: W0904 18:08:12.158564 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 18:08:12.159085 kubelet[2664]: E0904 18:08:12.158595 2664 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 18:08:12.159464 kubelet[2664]: E0904 18:08:12.159339 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 18:08:12.159464 kubelet[2664]: W0904 18:08:12.159416 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 18:08:12.160260 kubelet[2664]: E0904 18:08:12.160176 2664 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 18:08:12.160770 kubelet[2664]: E0904 18:08:12.160469 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 18:08:12.160770 kubelet[2664]: W0904 18:08:12.160482 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 18:08:12.160770 kubelet[2664]: E0904 18:08:12.160506 2664 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 18:08:12.161112 kubelet[2664]: E0904 18:08:12.161101 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 18:08:12.161280 kubelet[2664]: W0904 18:08:12.161152 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 18:08:12.161280 kubelet[2664]: E0904 18:08:12.161205 2664 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 18:08:12.162045 kubelet[2664]: E0904 18:08:12.161926 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 18:08:12.162045 kubelet[2664]: W0904 18:08:12.161941 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 18:08:12.162418 kubelet[2664]: E0904 18:08:12.162065 2664 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 18:08:12.162997 kubelet[2664]: E0904 18:08:12.162828 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 18:08:12.162997 kubelet[2664]: W0904 18:08:12.162841 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 18:08:12.162997 kubelet[2664]: E0904 18:08:12.162897 2664 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 18:08:12.165785 kubelet[2664]: E0904 18:08:12.163334 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 18:08:12.165785 kubelet[2664]: W0904 18:08:12.164523 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 18:08:12.165785 kubelet[2664]: E0904 18:08:12.164691 2664 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 18:08:12.166218 kubelet[2664]: E0904 18:08:12.166012 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 18:08:12.166218 kubelet[2664]: W0904 18:08:12.166024 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 18:08:12.166218 kubelet[2664]: E0904 18:08:12.166082 2664 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 18:08:12.166460 kubelet[2664]: E0904 18:08:12.166449 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 18:08:12.166615 kubelet[2664]: W0904 18:08:12.166535 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 18:08:12.166615 kubelet[2664]: E0904 18:08:12.166597 2664 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 18:08:12.168122 kubelet[2664]: E0904 18:08:12.167978 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 18:08:12.168122 kubelet[2664]: W0904 18:08:12.168012 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 18:08:12.168122 kubelet[2664]: E0904 18:08:12.168078 2664 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 18:08:12.169800 kubelet[2664]: E0904 18:08:12.168535 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 18:08:12.169800 kubelet[2664]: W0904 18:08:12.168548 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 18:08:12.170046 kubelet[2664]: E0904 18:08:12.170006 2664 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 18:08:12.171500 kubelet[2664]: E0904 18:08:12.171476 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 18:08:12.171614 kubelet[2664]: W0904 18:08:12.171599 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 18:08:12.173090 kubelet[2664]: E0904 18:08:12.171790 2664 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 18:08:12.173884 kubelet[2664]: E0904 18:08:12.173864 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 18:08:12.173973 kubelet[2664]: W0904 18:08:12.173959 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 18:08:12.175893 kubelet[2664]: E0904 18:08:12.174073 2664 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 18:08:12.176173 kubelet[2664]: E0904 18:08:12.176159 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 18:08:12.178861 kubelet[2664]: W0904 18:08:12.178695 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 18:08:12.178861 kubelet[2664]: E0904 18:08:12.178792 2664 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 18:08:12.179048 kubelet[2664]: E0904 18:08:12.179035 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 18:08:12.179116 kubelet[2664]: W0904 18:08:12.179104 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 18:08:12.179277 kubelet[2664]: E0904 18:08:12.179263 2664 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 18:08:12.181123 kubelet[2664]: E0904 18:08:12.180879 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 18:08:12.181123 kubelet[2664]: W0904 18:08:12.180892 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 18:08:12.181123 kubelet[2664]: E0904 18:08:12.181063 2664 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 18:08:12.181309 kubelet[2664]: E0904 18:08:12.181297 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 18:08:12.181374 kubelet[2664]: W0904 18:08:12.181363 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 18:08:12.181535 kubelet[2664]: E0904 18:08:12.181466 2664 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 18:08:12.182172 kubelet[2664]: E0904 18:08:12.181794 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 18:08:12.182172 kubelet[2664]: W0904 18:08:12.181808 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 18:08:12.182172 kubelet[2664]: E0904 18:08:12.181844 2664 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 18:08:12.182172 kubelet[2664]: E0904 18:08:12.182092 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 18:08:12.182172 kubelet[2664]: W0904 18:08:12.182104 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 18:08:12.182172 kubelet[2664]: E0904 18:08:12.182129 2664 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 18:08:12.182982 kubelet[2664]: E0904 18:08:12.182947 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 18:08:12.183156 kubelet[2664]: W0904 18:08:12.183125 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 18:08:12.183218 kubelet[2664]: E0904 18:08:12.183157 2664 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 18:08:12.203744 containerd[1450]: time="2024-09-04T18:08:12.203263560Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 18:08:12.203744 containerd[1450]: time="2024-09-04T18:08:12.203354260Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 18:08:12.203744 containerd[1450]: time="2024-09-04T18:08:12.203397551Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 18:08:12.203744 containerd[1450]: time="2024-09-04T18:08:12.203512446Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 18:08:12.206044 kubelet[2664]: E0904 18:08:12.205826 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 18:08:12.206044 kubelet[2664]: W0904 18:08:12.205850 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 18:08:12.206044 kubelet[2664]: E0904 18:08:12.205877 2664 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 18:08:12.241124 systemd[1]: Started cri-containerd-ae23639b8808a05e6b75b18adb0b97a889dd3f4d640ae43387800929b5896d36.scope - libcontainer container ae23639b8808a05e6b75b18adb0b97a889dd3f4d640ae43387800929b5896d36. 
Sep 4 18:08:12.251229 containerd[1450]: time="2024-09-04T18:08:12.250323910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-69c86786c6-6tr7g,Uid:98fecc0e-e415-4353-9138-4f174ad352a6,Namespace:calico-system,Attempt:0,} returns sandbox id \"413a4361de56e5c8e8eb82020456ebd8d08e243054bbbd08846b5c83b05c9cb1\"" Sep 4 18:08:12.255877 containerd[1450]: time="2024-09-04T18:08:12.255563219Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\"" Sep 4 18:08:12.295696 containerd[1450]: time="2024-09-04T18:08:12.295528139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xccrm,Uid:892927c9-009d-44e6-a726-d841ed277a99,Namespace:calico-system,Attempt:0,} returns sandbox id \"ae23639b8808a05e6b75b18adb0b97a889dd3f4d640ae43387800929b5896d36\"" Sep 4 18:08:13.553093 kubelet[2664]: E0904 18:08:13.551116 2664 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7r577" podUID="cd6b6d64-badf-4e22-9ca4-6086c67f1ef2" Sep 4 18:08:15.499491 containerd[1450]: time="2024-09-04T18:08:15.497106394Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 18:08:15.500414 containerd[1450]: time="2024-09-04T18:08:15.500331884Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.1: active requests=0, bytes read=29471335" Sep 4 18:08:15.500693 containerd[1450]: time="2024-09-04T18:08:15.500628400Z" level=info msg="ImageCreate event name:\"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 18:08:15.504280 containerd[1450]: time="2024-09-04T18:08:15.504202094Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 18:08:15.505274 containerd[1450]: time="2024-09-04T18:08:15.505229161Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.1\" with image id \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\", size \"30963728\" in 3.249622891s" Sep 4 18:08:15.505357 containerd[1450]: time="2024-09-04T18:08:15.505277101Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\" returns image reference \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\"" Sep 4 18:08:15.515708 containerd[1450]: time="2024-09-04T18:08:15.515359443Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\"" Sep 4 18:08:15.565360 kubelet[2664]: E0904 18:08:15.564824 2664 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7r577" podUID="cd6b6d64-badf-4e22-9ca4-6086c67f1ef2" Sep 4 18:08:15.583241 containerd[1450]: time="2024-09-04T18:08:15.583008751Z" level=info msg="CreateContainer within sandbox \"413a4361de56e5c8e8eb82020456ebd8d08e243054bbbd08846b5c83b05c9cb1\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 4 18:08:15.619363 containerd[1450]: time="2024-09-04T18:08:15.619137683Z" level=info msg="CreateContainer within sandbox \"413a4361de56e5c8e8eb82020456ebd8d08e243054bbbd08846b5c83b05c9cb1\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"78fcf1eee647ac7fbc60791b94ae6c7754d8528cd36795ac80586aa38c2a5277\"" Sep 
4 18:08:15.620946 containerd[1450]: time="2024-09-04T18:08:15.620843383Z" level=info msg="StartContainer for \"78fcf1eee647ac7fbc60791b94ae6c7754d8528cd36795ac80586aa38c2a5277\"" Sep 4 18:08:15.672421 systemd[1]: Started cri-containerd-78fcf1eee647ac7fbc60791b94ae6c7754d8528cd36795ac80586aa38c2a5277.scope - libcontainer container 78fcf1eee647ac7fbc60791b94ae6c7754d8528cd36795ac80586aa38c2a5277. Sep 4 18:08:15.776911 containerd[1450]: time="2024-09-04T18:08:15.775433037Z" level=info msg="StartContainer for \"78fcf1eee647ac7fbc60791b94ae6c7754d8528cd36795ac80586aa38c2a5277\" returns successfully" Sep 4 18:08:16.797575 kubelet[2664]: E0904 18:08:16.797543 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 18:08:16.798287 kubelet[2664]: W0904 18:08:16.798123 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 18:08:16.798287 kubelet[2664]: E0904 18:08:16.798162 2664 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 18:08:16.798628 kubelet[2664]: E0904 18:08:16.798447 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 18:08:16.798628 kubelet[2664]: W0904 18:08:16.798459 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 18:08:16.798628 kubelet[2664]: E0904 18:08:16.798487 2664 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 18:08:16.798857 kubelet[2664]: E0904 18:08:16.798845 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 18:08:16.799053 kubelet[2664]: W0904 18:08:16.798907 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 18:08:16.799053 kubelet[2664]: E0904 18:08:16.798933 2664 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 18:08:16.800132 kubelet[2664]: E0904 18:08:16.800004 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 18:08:16.800132 kubelet[2664]: W0904 18:08:16.800020 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 18:08:16.800132 kubelet[2664]: E0904 18:08:16.800036 2664 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 18:08:16.800342 kubelet[2664]: E0904 18:08:16.800329 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 18:08:16.800410 kubelet[2664]: W0904 18:08:16.800399 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 18:08:16.800536 kubelet[2664]: E0904 18:08:16.800518 2664 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 18:08:16.800973 kubelet[2664]: E0904 18:08:16.800852 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 18:08:16.800973 kubelet[2664]: W0904 18:08:16.800865 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 18:08:16.800973 kubelet[2664]: E0904 18:08:16.800879 2664 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 18:08:16.801296 kubelet[2664]: E0904 18:08:16.801188 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 18:08:16.801296 kubelet[2664]: W0904 18:08:16.801200 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 18:08:16.801296 kubelet[2664]: E0904 18:08:16.801215 2664 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 18:08:16.802015 kubelet[2664]: E0904 18:08:16.801873 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 18:08:16.802015 kubelet[2664]: W0904 18:08:16.801887 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 18:08:16.802015 kubelet[2664]: E0904 18:08:16.801901 2664 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 18:08:16.802230 kubelet[2664]: E0904 18:08:16.802217 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 18:08:16.802305 kubelet[2664]: W0904 18:08:16.802293 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 18:08:16.802369 kubelet[2664]: E0904 18:08:16.802361 2664 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 18:08:16.802666 kubelet[2664]: E0904 18:08:16.802636 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 18:08:16.802763 kubelet[2664]: W0904 18:08:16.802751 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 18:08:16.802910 kubelet[2664]: E0904 18:08:16.802819 2664 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 18:08:16.803222 kubelet[2664]: E0904 18:08:16.803029 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 18:08:16.803222 kubelet[2664]: W0904 18:08:16.803041 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 18:08:16.803222 kubelet[2664]: E0904 18:08:16.803059 2664 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 18:08:16.803593 kubelet[2664]: E0904 18:08:16.803578 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 18:08:16.803896 kubelet[2664]: W0904 18:08:16.803731 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 18:08:16.803896 kubelet[2664]: E0904 18:08:16.803754 2664 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 18:08:16.804109 kubelet[2664]: E0904 18:08:16.804096 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 18:08:16.804600 kubelet[2664]: W0904 18:08:16.804460 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 18:08:16.804600 kubelet[2664]: E0904 18:08:16.804484 2664 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 18:08:16.804783 kubelet[2664]: E0904 18:08:16.804770 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 18:08:16.804934 kubelet[2664]: W0904 18:08:16.804921 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 18:08:16.805006 kubelet[2664]: E0904 18:08:16.804997 2664 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 18:08:16.805308 kubelet[2664]: E0904 18:08:16.805288 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 18:08:16.805635 kubelet[2664]: W0904 18:08:16.805619 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 18:08:16.805865 kubelet[2664]: E0904 18:08:16.805781 2664 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 18:08:16.899882 kubelet[2664]: E0904 18:08:16.899502 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 18:08:16.899882 kubelet[2664]: W0904 18:08:16.899537 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 18:08:16.899882 kubelet[2664]: E0904 18:08:16.899572 2664 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 18:08:16.900283 kubelet[2664]: E0904 18:08:16.900253 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 18:08:16.900283 kubelet[2664]: W0904 18:08:16.900270 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 18:08:16.900366 kubelet[2664]: E0904 18:08:16.900301 2664 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 18:08:16.901013 kubelet[2664]: E0904 18:08:16.900646 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 18:08:16.901013 kubelet[2664]: W0904 18:08:16.900687 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 18:08:16.901013 kubelet[2664]: E0904 18:08:16.900702 2664 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 18:08:16.901476 kubelet[2664]: E0904 18:08:16.901296 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 18:08:16.901476 kubelet[2664]: W0904 18:08:16.901313 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 18:08:16.901476 kubelet[2664]: E0904 18:08:16.901344 2664 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 18:08:16.901830 kubelet[2664]: E0904 18:08:16.901776 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 18:08:16.901830 kubelet[2664]: W0904 18:08:16.901786 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 18:08:16.901830 kubelet[2664]: E0904 18:08:16.901807 2664 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 18:08:16.902134 kubelet[2664]: E0904 18:08:16.902113 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 18:08:16.902134 kubelet[2664]: W0904 18:08:16.902130 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 18:08:16.902378 kubelet[2664]: E0904 18:08:16.902154 2664 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 18:08:16.902444 kubelet[2664]: E0904 18:08:16.902428 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 18:08:16.902444 kubelet[2664]: W0904 18:08:16.902440 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 18:08:16.902513 kubelet[2664]: E0904 18:08:16.902455 2664 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 18:08:16.902766 kubelet[2664]: E0904 18:08:16.902747 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 18:08:16.902766 kubelet[2664]: W0904 18:08:16.902761 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 18:08:16.903011 kubelet[2664]: E0904 18:08:16.902829 2664 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 18:08:16.903210 kubelet[2664]: E0904 18:08:16.903189 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 18:08:16.903210 kubelet[2664]: W0904 18:08:16.903205 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 18:08:16.903517 kubelet[2664]: E0904 18:08:16.903489 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 18:08:16.903517 kubelet[2664]: W0904 18:08:16.903506 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 18:08:16.903517 kubelet[2664]: E0904 18:08:16.903519 2664 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 18:08:16.904983 kubelet[2664]: E0904 18:08:16.904949 2664 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 18:08:16.905222 kubelet[2664]: E0904 18:08:16.905204 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 18:08:16.905222 kubelet[2664]: W0904 18:08:16.905220 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 18:08:16.905306 kubelet[2664]: E0904 18:08:16.905240 2664 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 18:08:16.905989 kubelet[2664]: E0904 18:08:16.905769 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 18:08:16.905989 kubelet[2664]: W0904 18:08:16.905781 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 18:08:16.905989 kubelet[2664]: E0904 18:08:16.905795 2664 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 18:08:16.906745 kubelet[2664]: E0904 18:08:16.906228 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 18:08:16.906745 kubelet[2664]: W0904 18:08:16.906239 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 18:08:16.906745 kubelet[2664]: E0904 18:08:16.906252 2664 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 18:08:16.906745 kubelet[2664]: E0904 18:08:16.906525 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 18:08:16.906745 kubelet[2664]: W0904 18:08:16.906533 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 18:08:16.906745 kubelet[2664]: E0904 18:08:16.906547 2664 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 18:08:16.906920 kubelet[2664]: E0904 18:08:16.906761 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 18:08:16.906920 kubelet[2664]: W0904 18:08:16.906770 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 18:08:16.906920 kubelet[2664]: E0904 18:08:16.906788 2664 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 18:08:16.906997 kubelet[2664]: E0904 18:08:16.906963 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 18:08:16.906997 kubelet[2664]: W0904 18:08:16.906972 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 18:08:16.906997 kubelet[2664]: E0904 18:08:16.906984 2664 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 18:08:16.907726 kubelet[2664]: E0904 18:08:16.907323 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 18:08:16.907726 kubelet[2664]: W0904 18:08:16.907337 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 18:08:16.907726 kubelet[2664]: E0904 18:08:16.907361 2664 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 18:08:16.907726 kubelet[2664]: E0904 18:08:16.907558 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 18:08:16.907726 kubelet[2664]: W0904 18:08:16.907567 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 18:08:16.907726 kubelet[2664]: E0904 18:08:16.907579 2664 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 18:08:17.357357 containerd[1450]: time="2024-09-04T18:08:17.356741893Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 18:08:17.358256 containerd[1450]: time="2024-09-04T18:08:17.358193802Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1: active requests=0, bytes read=5141007" Sep 4 18:08:17.359511 containerd[1450]: time="2024-09-04T18:08:17.359485869Z" level=info msg="ImageCreate event name:\"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 18:08:17.362423 containerd[1450]: time="2024-09-04T18:08:17.362397280Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 18:08:17.363142 containerd[1450]: time="2024-09-04T18:08:17.363113366Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" with image id \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\", size \"6633368\" in 1.847696826s" Sep 4 18:08:17.363243 containerd[1450]: time="2024-09-04T18:08:17.363208815Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" returns image reference \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\"" Sep 4 18:08:17.368104 containerd[1450]: time="2024-09-04T18:08:17.367558989Z" level=info msg="CreateContainer within sandbox \"ae23639b8808a05e6b75b18adb0b97a889dd3f4d640ae43387800929b5896d36\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 4 18:08:17.396061 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount163507731.mount: Deactivated successfully. Sep 4 18:08:17.398062 containerd[1450]: time="2024-09-04T18:08:17.397589956Z" level=info msg="CreateContainer within sandbox \"ae23639b8808a05e6b75b18adb0b97a889dd3f4d640ae43387800929b5896d36\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"77c2e503f8faefef097e5d50dfef6c10bb826748a4e348b0707f0234c5bfa129\"" Sep 4 18:08:17.399430 containerd[1450]: time="2024-09-04T18:08:17.399399667Z" level=info msg="StartContainer for \"77c2e503f8faefef097e5d50dfef6c10bb826748a4e348b0707f0234c5bfa129\"" Sep 4 18:08:17.452116 systemd[1]: Started cri-containerd-77c2e503f8faefef097e5d50dfef6c10bb826748a4e348b0707f0234c5bfa129.scope - libcontainer container 77c2e503f8faefef097e5d50dfef6c10bb826748a4e348b0707f0234c5bfa129. Sep 4 18:08:17.495710 containerd[1450]: time="2024-09-04T18:08:17.495453932Z" level=info msg="StartContainer for \"77c2e503f8faefef097e5d50dfef6c10bb826748a4e348b0707f0234c5bfa129\" returns successfully" Sep 4 18:08:17.518418 systemd[1]: cri-containerd-77c2e503f8faefef097e5d50dfef6c10bb826748a4e348b0707f0234c5bfa129.scope: Deactivated successfully. Sep 4 18:08:17.550672 kubelet[2664]: E0904 18:08:17.550348 2664 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7r577" podUID="cd6b6d64-badf-4e22-9ca4-6086c67f1ef2" Sep 4 18:08:17.578491 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-77c2e503f8faefef097e5d50dfef6c10bb826748a4e348b0707f0234c5bfa129-rootfs.mount: Deactivated successfully. 
Sep 4 18:08:17.941306 kubelet[2664]: I0904 18:08:17.733462 2664 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 18:08:17.942179 containerd[1450]: time="2024-09-04T18:08:17.737451202Z" level=info msg="StopContainer for \"77c2e503f8faefef097e5d50dfef6c10bb826748a4e348b0707f0234c5bfa129\" with timeout 5 (s)" Sep 4 18:08:17.978435 kubelet[2664]: I0904 18:08:17.977923 2664 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-69c86786c6-6tr7g" podStartSLOduration=3.723357813 podStartE2EDuration="6.977849427s" podCreationTimestamp="2024-09-04 18:08:11 +0000 UTC" firstStartedPulling="2024-09-04 18:08:12.254842537 +0000 UTC m=+19.990627393" lastFinishedPulling="2024-09-04 18:08:15.509334141 +0000 UTC m=+23.245119007" observedRunningTime="2024-09-04 18:08:16.778048868 +0000 UTC m=+24.513833774" watchObservedRunningTime="2024-09-04 18:08:17.977849427 +0000 UTC m=+25.713634283" Sep 4 18:08:18.274620 containerd[1450]: time="2024-09-04T18:08:18.274045429Z" level=info msg="Stop container \"77c2e503f8faefef097e5d50dfef6c10bb826748a4e348b0707f0234c5bfa129\" with signal terminated" Sep 4 18:08:18.275282 containerd[1450]: time="2024-09-04T18:08:18.275149493Z" level=info msg="shim disconnected" id=77c2e503f8faefef097e5d50dfef6c10bb826748a4e348b0707f0234c5bfa129 namespace=k8s.io Sep 4 18:08:18.275830 containerd[1450]: time="2024-09-04T18:08:18.275470767Z" level=warning msg="cleaning up after shim disconnected" id=77c2e503f8faefef097e5d50dfef6c10bb826748a4e348b0707f0234c5bfa129 namespace=k8s.io Sep 4 18:08:18.275830 containerd[1450]: time="2024-09-04T18:08:18.275553212Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 18:08:18.319052 containerd[1450]: time="2024-09-04T18:08:18.318929722Z" level=info msg="StopContainer for \"77c2e503f8faefef097e5d50dfef6c10bb826748a4e348b0707f0234c5bfa129\" returns successfully" Sep 4 18:08:18.321365 containerd[1450]: time="2024-09-04T18:08:18.320899793Z" 
level=info msg="StopPodSandbox for \"ae23639b8808a05e6b75b18adb0b97a889dd3f4d640ae43387800929b5896d36\"" Sep 4 18:08:18.321365 containerd[1450]: time="2024-09-04T18:08:18.320990103Z" level=info msg="Container to stop \"77c2e503f8faefef097e5d50dfef6c10bb826748a4e348b0707f0234c5bfa129\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 18:08:18.338096 systemd[1]: cri-containerd-ae23639b8808a05e6b75b18adb0b97a889dd3f4d640ae43387800929b5896d36.scope: Deactivated successfully. Sep 4 18:08:18.372726 containerd[1450]: time="2024-09-04T18:08:18.372414855Z" level=info msg="shim disconnected" id=ae23639b8808a05e6b75b18adb0b97a889dd3f4d640ae43387800929b5896d36 namespace=k8s.io Sep 4 18:08:18.372726 containerd[1450]: time="2024-09-04T18:08:18.372484345Z" level=warning msg="cleaning up after shim disconnected" id=ae23639b8808a05e6b75b18adb0b97a889dd3f4d640ae43387800929b5896d36 namespace=k8s.io Sep 4 18:08:18.372726 containerd[1450]: time="2024-09-04T18:08:18.372494995Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 18:08:18.389931 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ae23639b8808a05e6b75b18adb0b97a889dd3f4d640ae43387800929b5896d36-rootfs.mount: Deactivated successfully. Sep 4 18:08:18.390060 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ae23639b8808a05e6b75b18adb0b97a889dd3f4d640ae43387800929b5896d36-shm.mount: Deactivated successfully. 
Sep 4 18:08:18.392914 containerd[1450]: time="2024-09-04T18:08:18.392869577Z" level=info msg="TearDown network for sandbox \"ae23639b8808a05e6b75b18adb0b97a889dd3f4d640ae43387800929b5896d36\" successfully" Sep 4 18:08:18.392914 containerd[1450]: time="2024-09-04T18:08:18.392912066Z" level=info msg="StopPodSandbox for \"ae23639b8808a05e6b75b18adb0b97a889dd3f4d640ae43387800929b5896d36\" returns successfully" Sep 4 18:08:18.457229 kubelet[2664]: I0904 18:08:18.456367 2664 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4skmr\" (UniqueName: \"kubernetes.io/projected/892927c9-009d-44e6-a726-d841ed277a99-kube-api-access-4skmr\") pod \"892927c9-009d-44e6-a726-d841ed277a99\" (UID: \"892927c9-009d-44e6-a726-d841ed277a99\") " Sep 4 18:08:18.457229 kubelet[2664]: I0904 18:08:18.456467 2664 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/892927c9-009d-44e6-a726-d841ed277a99-var-run-calico\") pod \"892927c9-009d-44e6-a726-d841ed277a99\" (UID: \"892927c9-009d-44e6-a726-d841ed277a99\") " Sep 4 18:08:18.457229 kubelet[2664]: I0904 18:08:18.456527 2664 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/892927c9-009d-44e6-a726-d841ed277a99-tigera-ca-bundle\") pod \"892927c9-009d-44e6-a726-d841ed277a99\" (UID: \"892927c9-009d-44e6-a726-d841ed277a99\") " Sep 4 18:08:18.457229 kubelet[2664]: I0904 18:08:18.456577 2664 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/892927c9-009d-44e6-a726-d841ed277a99-flexvol-driver-host\") pod \"892927c9-009d-44e6-a726-d841ed277a99\" (UID: \"892927c9-009d-44e6-a726-d841ed277a99\") " Sep 4 18:08:18.457229 kubelet[2664]: I0904 18:08:18.456627 2664 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" 
(UniqueName: \"kubernetes.io/host-path/892927c9-009d-44e6-a726-d841ed277a99-cni-net-dir\") pod \"892927c9-009d-44e6-a726-d841ed277a99\" (UID: \"892927c9-009d-44e6-a726-d841ed277a99\") " Sep 4 18:08:18.457229 kubelet[2664]: I0904 18:08:18.456720 2664 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/892927c9-009d-44e6-a726-d841ed277a99-policysync\") pod \"892927c9-009d-44e6-a726-d841ed277a99\" (UID: \"892927c9-009d-44e6-a726-d841ed277a99\") " Sep 4 18:08:18.457582 kubelet[2664]: I0904 18:08:18.456780 2664 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/892927c9-009d-44e6-a726-d841ed277a99-node-certs\") pod \"892927c9-009d-44e6-a726-d841ed277a99\" (UID: \"892927c9-009d-44e6-a726-d841ed277a99\") " Sep 4 18:08:18.457582 kubelet[2664]: I0904 18:08:18.456832 2664 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/892927c9-009d-44e6-a726-d841ed277a99-var-lib-calico\") pod \"892927c9-009d-44e6-a726-d841ed277a99\" (UID: \"892927c9-009d-44e6-a726-d841ed277a99\") " Sep 4 18:08:18.457582 kubelet[2664]: I0904 18:08:18.456878 2664 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/892927c9-009d-44e6-a726-d841ed277a99-cni-bin-dir\") pod \"892927c9-009d-44e6-a726-d841ed277a99\" (UID: \"892927c9-009d-44e6-a726-d841ed277a99\") " Sep 4 18:08:18.457582 kubelet[2664]: I0904 18:08:18.456931 2664 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/892927c9-009d-44e6-a726-d841ed277a99-cni-log-dir\") pod \"892927c9-009d-44e6-a726-d841ed277a99\" (UID: \"892927c9-009d-44e6-a726-d841ed277a99\") " Sep 4 18:08:18.457582 kubelet[2664]: I0904 18:08:18.456976 2664 reconciler_common.go:172] 
"operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/892927c9-009d-44e6-a726-d841ed277a99-xtables-lock\") pod \"892927c9-009d-44e6-a726-d841ed277a99\" (UID: \"892927c9-009d-44e6-a726-d841ed277a99\") " Sep 4 18:08:18.457582 kubelet[2664]: I0904 18:08:18.457030 2664 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/892927c9-009d-44e6-a726-d841ed277a99-lib-modules\") pod \"892927c9-009d-44e6-a726-d841ed277a99\" (UID: \"892927c9-009d-44e6-a726-d841ed277a99\") " Sep 4 18:08:18.457814 kubelet[2664]: I0904 18:08:18.457143 2664 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/892927c9-009d-44e6-a726-d841ed277a99-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "892927c9-009d-44e6-a726-d841ed277a99" (UID: "892927c9-009d-44e6-a726-d841ed277a99"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 18:08:18.457814 kubelet[2664]: I0904 18:08:18.457240 2664 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/892927c9-009d-44e6-a726-d841ed277a99-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "892927c9-009d-44e6-a726-d841ed277a99" (UID: "892927c9-009d-44e6-a726-d841ed277a99"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 18:08:18.458413 kubelet[2664]: I0904 18:08:18.458021 2664 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/892927c9-009d-44e6-a726-d841ed277a99-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "892927c9-009d-44e6-a726-d841ed277a99" (UID: "892927c9-009d-44e6-a726-d841ed277a99"). InnerVolumeSpecName "flexvol-driver-host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 18:08:18.458413 kubelet[2664]: I0904 18:08:18.458094 2664 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/892927c9-009d-44e6-a726-d841ed277a99-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "892927c9-009d-44e6-a726-d841ed277a99" (UID: "892927c9-009d-44e6-a726-d841ed277a99"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 18:08:18.458413 kubelet[2664]: I0904 18:08:18.458121 2664 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/892927c9-009d-44e6-a726-d841ed277a99-policysync" (OuterVolumeSpecName: "policysync") pod "892927c9-009d-44e6-a726-d841ed277a99" (UID: "892927c9-009d-44e6-a726-d841ed277a99"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 18:08:18.464989 kubelet[2664]: I0904 18:08:18.464913 2664 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/892927c9-009d-44e6-a726-d841ed277a99-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "892927c9-009d-44e6-a726-d841ed277a99" (UID: "892927c9-009d-44e6-a726-d841ed277a99"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 4 18:08:18.465141 kubelet[2664]: I0904 18:08:18.465050 2664 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/892927c9-009d-44e6-a726-d841ed277a99-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "892927c9-009d-44e6-a726-d841ed277a99" (UID: "892927c9-009d-44e6-a726-d841ed277a99"). InnerVolumeSpecName "cni-bin-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 18:08:18.465176 kubelet[2664]: I0904 18:08:18.465106 2664 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/892927c9-009d-44e6-a726-d841ed277a99-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "892927c9-009d-44e6-a726-d841ed277a99" (UID: "892927c9-009d-44e6-a726-d841ed277a99"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 18:08:18.465486 kubelet[2664]: I0904 18:08:18.465200 2664 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/892927c9-009d-44e6-a726-d841ed277a99-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "892927c9-009d-44e6-a726-d841ed277a99" (UID: "892927c9-009d-44e6-a726-d841ed277a99"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 18:08:18.465486 kubelet[2664]: I0904 18:08:18.465261 2664 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/892927c9-009d-44e6-a726-d841ed277a99-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "892927c9-009d-44e6-a726-d841ed277a99" (UID: "892927c9-009d-44e6-a726-d841ed277a99"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 18:08:18.466858 systemd[1]: var-lib-kubelet-pods-892927c9\x2d009d\x2d44e6\x2da726\x2dd841ed277a99-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4skmr.mount: Deactivated successfully. Sep 4 18:08:18.469647 kubelet[2664]: I0904 18:08:18.469494 2664 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/892927c9-009d-44e6-a726-d841ed277a99-kube-api-access-4skmr" (OuterVolumeSpecName: "kube-api-access-4skmr") pod "892927c9-009d-44e6-a726-d841ed277a99" (UID: "892927c9-009d-44e6-a726-d841ed277a99"). InnerVolumeSpecName "kube-api-access-4skmr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 4 18:08:18.473569 kubelet[2664]: I0904 18:08:18.472007 2664 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/892927c9-009d-44e6-a726-d841ed277a99-node-certs" (OuterVolumeSpecName: "node-certs") pod "892927c9-009d-44e6-a726-d841ed277a99" (UID: "892927c9-009d-44e6-a726-d841ed277a99"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 4 18:08:18.473360 systemd[1]: var-lib-kubelet-pods-892927c9\x2d009d\x2d44e6\x2da726\x2dd841ed277a99-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. Sep 4 18:08:18.557570 kubelet[2664]: I0904 18:08:18.557418 2664 reconciler_common.go:300] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/892927c9-009d-44e6-a726-d841ed277a99-var-lib-calico\") on node \"ci-4054-1-0-c-4d101ae770.novalocal\" DevicePath \"\"" Sep 4 18:08:18.558249 kubelet[2664]: I0904 18:08:18.557897 2664 reconciler_common.go:300] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/892927c9-009d-44e6-a726-d841ed277a99-cni-bin-dir\") on node \"ci-4054-1-0-c-4d101ae770.novalocal\" DevicePath \"\"" Sep 4 18:08:18.558249 kubelet[2664]: I0904 18:08:18.557949 2664 reconciler_common.go:300] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/892927c9-009d-44e6-a726-d841ed277a99-cni-log-dir\") on node \"ci-4054-1-0-c-4d101ae770.novalocal\" DevicePath \"\"" Sep 4 18:08:18.558249 kubelet[2664]: I0904 18:08:18.557985 2664 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/892927c9-009d-44e6-a726-d841ed277a99-xtables-lock\") on node \"ci-4054-1-0-c-4d101ae770.novalocal\" DevicePath \"\"" Sep 4 18:08:18.558249 kubelet[2664]: I0904 18:08:18.558017 2664 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/892927c9-009d-44e6-a726-d841ed277a99-lib-modules\") on node \"ci-4054-1-0-c-4d101ae770.novalocal\" DevicePath \"\"" Sep 4 18:08:18.558249 kubelet[2664]: I0904 18:08:18.558052 2664 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-4skmr\" (UniqueName: \"kubernetes.io/projected/892927c9-009d-44e6-a726-d841ed277a99-kube-api-access-4skmr\") on node \"ci-4054-1-0-c-4d101ae770.novalocal\" DevicePath \"\"" Sep 4 18:08:18.558249 kubelet[2664]: I0904 18:08:18.558080 2664 reconciler_common.go:300] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/892927c9-009d-44e6-a726-d841ed277a99-flexvol-driver-host\") on node \"ci-4054-1-0-c-4d101ae770.novalocal\" DevicePath \"\"" Sep 4 18:08:18.558249 kubelet[2664]: I0904 18:08:18.558108 2664 reconciler_common.go:300] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/892927c9-009d-44e6-a726-d841ed277a99-var-run-calico\") on node \"ci-4054-1-0-c-4d101ae770.novalocal\" DevicePath \"\"" Sep 4 18:08:18.558799 kubelet[2664]: I0904 18:08:18.558138 2664 reconciler_common.go:300] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/892927c9-009d-44e6-a726-d841ed277a99-tigera-ca-bundle\") on node \"ci-4054-1-0-c-4d101ae770.novalocal\" DevicePath \"\"" Sep 4 18:08:18.558799 kubelet[2664]: I0904 18:08:18.558167 2664 reconciler_common.go:300] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/892927c9-009d-44e6-a726-d841ed277a99-cni-net-dir\") on node \"ci-4054-1-0-c-4d101ae770.novalocal\" DevicePath \"\"" Sep 4 18:08:18.558799 kubelet[2664]: I0904 18:08:18.558194 2664 reconciler_common.go:300] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/892927c9-009d-44e6-a726-d841ed277a99-policysync\") on node \"ci-4054-1-0-c-4d101ae770.novalocal\" DevicePath \"\"" Sep 4 18:08:18.558799 kubelet[2664]: I0904 18:08:18.558220 2664 
reconciler_common.go:300] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/892927c9-009d-44e6-a726-d841ed277a99-node-certs\") on node \"ci-4054-1-0-c-4d101ae770.novalocal\" DevicePath \"\"" Sep 4 18:08:18.566811 systemd[1]: Removed slice kubepods-besteffort-pod892927c9_009d_44e6_a726_d841ed277a99.slice - libcontainer container kubepods-besteffort-pod892927c9_009d_44e6_a726_d841ed277a99.slice. Sep 4 18:08:18.741096 kubelet[2664]: I0904 18:08:18.738158 2664 scope.go:117] "RemoveContainer" containerID="77c2e503f8faefef097e5d50dfef6c10bb826748a4e348b0707f0234c5bfa129" Sep 4 18:08:18.747052 containerd[1450]: time="2024-09-04T18:08:18.747008996Z" level=info msg="RemoveContainer for \"77c2e503f8faefef097e5d50dfef6c10bb826748a4e348b0707f0234c5bfa129\"" Sep 4 18:08:18.754320 containerd[1450]: time="2024-09-04T18:08:18.754237959Z" level=info msg="RemoveContainer for \"77c2e503f8faefef097e5d50dfef6c10bb826748a4e348b0707f0234c5bfa129\" returns successfully" Sep 4 18:08:18.754862 kubelet[2664]: I0904 18:08:18.754812 2664 scope.go:117] "RemoveContainer" containerID="77c2e503f8faefef097e5d50dfef6c10bb826748a4e348b0707f0234c5bfa129" Sep 4 18:08:18.755316 containerd[1450]: time="2024-09-04T18:08:18.755238969Z" level=error msg="ContainerStatus for \"77c2e503f8faefef097e5d50dfef6c10bb826748a4e348b0707f0234c5bfa129\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"77c2e503f8faefef097e5d50dfef6c10bb826748a4e348b0707f0234c5bfa129\": not found" Sep 4 18:08:18.755574 kubelet[2664]: E0904 18:08:18.755529 2664 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"77c2e503f8faefef097e5d50dfef6c10bb826748a4e348b0707f0234c5bfa129\": not found" containerID="77c2e503f8faefef097e5d50dfef6c10bb826748a4e348b0707f0234c5bfa129" Sep 4 18:08:18.755753 kubelet[2664]: I0904 18:08:18.755648 2664 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"containerd","ID":"77c2e503f8faefef097e5d50dfef6c10bb826748a4e348b0707f0234c5bfa129"} err="failed to get container status \"77c2e503f8faefef097e5d50dfef6c10bb826748a4e348b0707f0234c5bfa129\": rpc error: code = NotFound desc = an error occurred when try to find container \"77c2e503f8faefef097e5d50dfef6c10bb826748a4e348b0707f0234c5bfa129\": not found" Sep 4 18:08:18.822201 kubelet[2664]: I0904 18:08:18.822053 2664 topology_manager.go:215] "Topology Admit Handler" podUID="7f816260-6b9a-4de3-9985-c06ac718d0d4" podNamespace="calico-system" podName="calico-node-9c6sv" Sep 4 18:08:18.822201 kubelet[2664]: E0904 18:08:18.822190 2664 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="892927c9-009d-44e6-a726-d841ed277a99" containerName="flexvol-driver" Sep 4 18:08:18.822998 kubelet[2664]: I0904 18:08:18.822254 2664 memory_manager.go:354] "RemoveStaleState removing state" podUID="892927c9-009d-44e6-a726-d841ed277a99" containerName="flexvol-driver" Sep 4 18:08:18.835896 systemd[1]: Created slice kubepods-besteffort-pod7f816260_6b9a_4de3_9985_c06ac718d0d4.slice - libcontainer container kubepods-besteffort-pod7f816260_6b9a_4de3_9985_c06ac718d0d4.slice. 
Sep 4 18:08:18.860560 kubelet[2664]: I0904 18:08:18.859964 2664 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f816260-6b9a-4de3-9985-c06ac718d0d4-tigera-ca-bundle\") pod \"calico-node-9c6sv\" (UID: \"7f816260-6b9a-4de3-9985-c06ac718d0d4\") " pod="calico-system/calico-node-9c6sv" Sep 4 18:08:18.860560 kubelet[2664]: I0904 18:08:18.860051 2664 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7f816260-6b9a-4de3-9985-c06ac718d0d4-lib-modules\") pod \"calico-node-9c6sv\" (UID: \"7f816260-6b9a-4de3-9985-c06ac718d0d4\") " pod="calico-system/calico-node-9c6sv" Sep 4 18:08:18.860560 kubelet[2664]: I0904 18:08:18.860080 2664 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7f816260-6b9a-4de3-9985-c06ac718d0d4-xtables-lock\") pod \"calico-node-9c6sv\" (UID: \"7f816260-6b9a-4de3-9985-c06ac718d0d4\") " pod="calico-system/calico-node-9c6sv" Sep 4 18:08:18.860560 kubelet[2664]: I0904 18:08:18.860107 2664 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/7f816260-6b9a-4de3-9985-c06ac718d0d4-cni-log-dir\") pod \"calico-node-9c6sv\" (UID: \"7f816260-6b9a-4de3-9985-c06ac718d0d4\") " pod="calico-system/calico-node-9c6sv" Sep 4 18:08:18.860560 kubelet[2664]: I0904 18:08:18.860137 2664 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/7f816260-6b9a-4de3-9985-c06ac718d0d4-node-certs\") pod \"calico-node-9c6sv\" (UID: \"7f816260-6b9a-4de3-9985-c06ac718d0d4\") " pod="calico-system/calico-node-9c6sv" Sep 4 18:08:18.860888 kubelet[2664]: I0904 18:08:18.860164 2664 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7f816260-6b9a-4de3-9985-c06ac718d0d4-var-lib-calico\") pod \"calico-node-9c6sv\" (UID: \"7f816260-6b9a-4de3-9985-c06ac718d0d4\") " pod="calico-system/calico-node-9c6sv" Sep 4 18:08:18.860888 kubelet[2664]: I0904 18:08:18.860192 2664 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/7f816260-6b9a-4de3-9985-c06ac718d0d4-flexvol-driver-host\") pod \"calico-node-9c6sv\" (UID: \"7f816260-6b9a-4de3-9985-c06ac718d0d4\") " pod="calico-system/calico-node-9c6sv" Sep 4 18:08:18.860888 kubelet[2664]: I0904 18:08:18.860223 2664 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjr87\" (UniqueName: \"kubernetes.io/projected/7f816260-6b9a-4de3-9985-c06ac718d0d4-kube-api-access-rjr87\") pod \"calico-node-9c6sv\" (UID: \"7f816260-6b9a-4de3-9985-c06ac718d0d4\") " pod="calico-system/calico-node-9c6sv" Sep 4 18:08:18.860888 kubelet[2664]: I0904 18:08:18.860249 2664 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/7f816260-6b9a-4de3-9985-c06ac718d0d4-policysync\") pod \"calico-node-9c6sv\" (UID: \"7f816260-6b9a-4de3-9985-c06ac718d0d4\") " pod="calico-system/calico-node-9c6sv" Sep 4 18:08:18.860888 kubelet[2664]: I0904 18:08:18.860277 2664 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/7f816260-6b9a-4de3-9985-c06ac718d0d4-cni-bin-dir\") pod \"calico-node-9c6sv\" (UID: \"7f816260-6b9a-4de3-9985-c06ac718d0d4\") " pod="calico-system/calico-node-9c6sv" Sep 4 18:08:18.861039 kubelet[2664]: I0904 18:08:18.860304 2664 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/7f816260-6b9a-4de3-9985-c06ac718d0d4-cni-net-dir\") pod \"calico-node-9c6sv\" (UID: \"7f816260-6b9a-4de3-9985-c06ac718d0d4\") " pod="calico-system/calico-node-9c6sv" Sep 4 18:08:18.861039 kubelet[2664]: I0904 18:08:18.860337 2664 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/7f816260-6b9a-4de3-9985-c06ac718d0d4-var-run-calico\") pod \"calico-node-9c6sv\" (UID: \"7f816260-6b9a-4de3-9985-c06ac718d0d4\") " pod="calico-system/calico-node-9c6sv" Sep 4 18:08:19.146729 containerd[1450]: time="2024-09-04T18:08:19.145945000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9c6sv,Uid:7f816260-6b9a-4de3-9985-c06ac718d0d4,Namespace:calico-system,Attempt:0,}" Sep 4 18:08:19.203172 containerd[1450]: time="2024-09-04T18:08:19.200723628Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 18:08:19.203172 containerd[1450]: time="2024-09-04T18:08:19.202495998Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 18:08:19.203172 containerd[1450]: time="2024-09-04T18:08:19.202538938Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 18:08:19.203172 containerd[1450]: time="2024-09-04T18:08:19.202842699Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 18:08:19.244048 systemd[1]: Started cri-containerd-5f83c452fc5fedba37b459574425833546c28997acec43b92ac6345143eb09f5.scope - libcontainer container 5f83c452fc5fedba37b459574425833546c28997acec43b92ac6345143eb09f5. 
Sep 4 18:08:19.280605 containerd[1450]: time="2024-09-04T18:08:19.280488670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9c6sv,Uid:7f816260-6b9a-4de3-9985-c06ac718d0d4,Namespace:calico-system,Attempt:0,} returns sandbox id \"5f83c452fc5fedba37b459574425833546c28997acec43b92ac6345143eb09f5\"" Sep 4 18:08:19.284251 containerd[1450]: time="2024-09-04T18:08:19.284116677Z" level=info msg="CreateContainer within sandbox \"5f83c452fc5fedba37b459574425833546c28997acec43b92ac6345143eb09f5\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 4 18:08:19.303163 containerd[1450]: time="2024-09-04T18:08:19.303113238Z" level=info msg="CreateContainer within sandbox \"5f83c452fc5fedba37b459574425833546c28997acec43b92ac6345143eb09f5\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6baa3652c55bda30091418075b56331dcf645b3fc1c6f9a596967f508a9a26f2\"" Sep 4 18:08:19.304333 containerd[1450]: time="2024-09-04T18:08:19.304251876Z" level=info msg="StartContainer for \"6baa3652c55bda30091418075b56331dcf645b3fc1c6f9a596967f508a9a26f2\"" Sep 4 18:08:19.341824 systemd[1]: Started cri-containerd-6baa3652c55bda30091418075b56331dcf645b3fc1c6f9a596967f508a9a26f2.scope - libcontainer container 6baa3652c55bda30091418075b56331dcf645b3fc1c6f9a596967f508a9a26f2. Sep 4 18:08:19.378890 containerd[1450]: time="2024-09-04T18:08:19.378589071Z" level=info msg="StartContainer for \"6baa3652c55bda30091418075b56331dcf645b3fc1c6f9a596967f508a9a26f2\" returns successfully" Sep 4 18:08:19.405085 systemd[1]: cri-containerd-6baa3652c55bda30091418075b56331dcf645b3fc1c6f9a596967f508a9a26f2.scope: Deactivated successfully. Sep 4 18:08:19.435893 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6baa3652c55bda30091418075b56331dcf645b3fc1c6f9a596967f508a9a26f2-rootfs.mount: Deactivated successfully. 
Sep 4 18:08:19.445128 containerd[1450]: time="2024-09-04T18:08:19.445060758Z" level=info msg="shim disconnected" id=6baa3652c55bda30091418075b56331dcf645b3fc1c6f9a596967f508a9a26f2 namespace=k8s.io Sep 4 18:08:19.445128 containerd[1450]: time="2024-09-04T18:08:19.445115230Z" level=warning msg="cleaning up after shim disconnected" id=6baa3652c55bda30091418075b56331dcf645b3fc1c6f9a596967f508a9a26f2 namespace=k8s.io Sep 4 18:08:19.445128 containerd[1450]: time="2024-09-04T18:08:19.445126612Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 18:08:19.550419 kubelet[2664]: E0904 18:08:19.550389 2664 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7r577" podUID="cd6b6d64-badf-4e22-9ca4-6086c67f1ef2" Sep 4 18:08:19.751197 containerd[1450]: time="2024-09-04T18:08:19.750491040Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\"" Sep 4 18:08:20.556539 kubelet[2664]: I0904 18:08:20.556422 2664 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="892927c9-009d-44e6-a726-d841ed277a99" path="/var/lib/kubelet/pods/892927c9-009d-44e6-a726-d841ed277a99/volumes" Sep 4 18:08:21.550585 kubelet[2664]: E0904 18:08:21.550529 2664 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7r577" podUID="cd6b6d64-badf-4e22-9ca4-6086c67f1ef2" Sep 4 18:08:23.550265 kubelet[2664]: E0904 18:08:23.549983 2664 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-7r577" podUID="cd6b6d64-badf-4e22-9ca4-6086c67f1ef2" Sep 4 18:08:25.386443 containerd[1450]: time="2024-09-04T18:08:25.386238034Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 18:08:25.388094 containerd[1450]: time="2024-09-04T18:08:25.387955820Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.1: active requests=0, bytes read=93083736" Sep 4 18:08:25.389380 containerd[1450]: time="2024-09-04T18:08:25.389267343Z" level=info msg="ImageCreate event name:\"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 18:08:25.392282 containerd[1450]: time="2024-09-04T18:08:25.392206094Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 18:08:25.394968 containerd[1450]: time="2024-09-04T18:08:25.394564974Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.1\" with image id \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\", size \"94576137\" in 5.64401841s" Sep 4 18:08:25.394968 containerd[1450]: time="2024-09-04T18:08:25.394643882Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\" returns image reference \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\"" Sep 4 18:08:25.400557 containerd[1450]: time="2024-09-04T18:08:25.400525961Z" level=info msg="CreateContainer within sandbox \"5f83c452fc5fedba37b459574425833546c28997acec43b92ac6345143eb09f5\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 4 18:08:25.437130 
containerd[1450]: time="2024-09-04T18:08:25.437059551Z" level=info msg="CreateContainer within sandbox \"5f83c452fc5fedba37b459574425833546c28997acec43b92ac6345143eb09f5\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e5d4c29d984741c47f5825b3b90343397d7808bcd75f6be527d8687a80acb47a\"" Sep 4 18:08:25.438266 containerd[1450]: time="2024-09-04T18:08:25.438137576Z" level=info msg="StartContainer for \"e5d4c29d984741c47f5825b3b90343397d7808bcd75f6be527d8687a80acb47a\"" Sep 4 18:08:25.551106 kubelet[2664]: E0904 18:08:25.550729 2664 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7r577" podUID="cd6b6d64-badf-4e22-9ca4-6086c67f1ef2" Sep 4 18:08:25.587847 systemd[1]: Started cri-containerd-e5d4c29d984741c47f5825b3b90343397d7808bcd75f6be527d8687a80acb47a.scope - libcontainer container e5d4c29d984741c47f5825b3b90343397d7808bcd75f6be527d8687a80acb47a. Sep 4 18:08:25.627788 containerd[1450]: time="2024-09-04T18:08:25.627629027Z" level=info msg="StartContainer for \"e5d4c29d984741c47f5825b3b90343397d7808bcd75f6be527d8687a80acb47a\" returns successfully" Sep 4 18:08:27.398818 systemd[1]: cri-containerd-e5d4c29d984741c47f5825b3b90343397d7808bcd75f6be527d8687a80acb47a.scope: Deactivated successfully. Sep 4 18:08:27.497268 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e5d4c29d984741c47f5825b3b90343397d7808bcd75f6be527d8687a80acb47a-rootfs.mount: Deactivated successfully. 
Sep 4 18:08:27.620299 kubelet[2664]: E0904 18:08:27.550452 2664 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7r577" podUID="cd6b6d64-badf-4e22-9ca4-6086c67f1ef2" Sep 4 18:08:27.630891 containerd[1450]: time="2024-09-04T18:08:27.630500366Z" level=info msg="shim disconnected" id=e5d4c29d984741c47f5825b3b90343397d7808bcd75f6be527d8687a80acb47a namespace=k8s.io Sep 4 18:08:27.630891 containerd[1450]: time="2024-09-04T18:08:27.630596597Z" level=warning msg="cleaning up after shim disconnected" id=e5d4c29d984741c47f5825b3b90343397d7808bcd75f6be527d8687a80acb47a namespace=k8s.io Sep 4 18:08:27.630891 containerd[1450]: time="2024-09-04T18:08:27.630613930Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 18:08:27.659751 containerd[1450]: time="2024-09-04T18:08:27.658055608Z" level=warning msg="cleanup warnings time=\"2024-09-04T18:08:27Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 4 18:08:27.711327 kubelet[2664]: I0904 18:08:27.711282 2664 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Sep 4 18:08:27.745039 kubelet[2664]: I0904 18:08:27.744974 2664 topology_manager.go:215] "Topology Admit Handler" podUID="a7bc586f-0539-46c3-a4a7-5297cb2ca3b1" podNamespace="kube-system" podName="coredns-76f75df574-pm852" Sep 4 18:08:27.754304 kubelet[2664]: I0904 18:08:27.753257 2664 topology_manager.go:215] "Topology Admit Handler" podUID="3b130609-1dd2-44b3-90db-48b3db3330ce" podNamespace="kube-system" podName="coredns-76f75df574-jm87r" Sep 4 18:08:27.762917 systemd[1]: Created slice kubepods-burstable-poda7bc586f_0539_46c3_a4a7_5297cb2ca3b1.slice - libcontainer container 
kubepods-burstable-poda7bc586f_0539_46c3_a4a7_5297cb2ca3b1.slice. Sep 4 18:08:27.778886 kubelet[2664]: I0904 18:08:27.778623 2664 topology_manager.go:215] "Topology Admit Handler" podUID="34cca147-c799-40d7-a541-5d0353aa3f8d" podNamespace="calico-system" podName="calico-kube-controllers-7785d6cbd7-wkdsv" Sep 4 18:08:27.783216 systemd[1]: Created slice kubepods-burstable-pod3b130609_1dd2_44b3_90db_48b3db3330ce.slice - libcontainer container kubepods-burstable-pod3b130609_1dd2_44b3_90db_48b3db3330ce.slice. Sep 4 18:08:27.789562 containerd[1450]: time="2024-09-04T18:08:27.786166951Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\"" Sep 4 18:08:27.809251 systemd[1]: Created slice kubepods-besteffort-pod34cca147_c799_40d7_a541_5d0353aa3f8d.slice - libcontainer container kubepods-besteffort-pod34cca147_c799_40d7_a541_5d0353aa3f8d.slice. Sep 4 18:08:27.931253 kubelet[2664]: I0904 18:08:27.931171 2664 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72cjt\" (UniqueName: \"kubernetes.io/projected/a7bc586f-0539-46c3-a4a7-5297cb2ca3b1-kube-api-access-72cjt\") pod \"coredns-76f75df574-pm852\" (UID: \"a7bc586f-0539-46c3-a4a7-5297cb2ca3b1\") " pod="kube-system/coredns-76f75df574-pm852" Sep 4 18:08:27.932147 kubelet[2664]: I0904 18:08:27.931312 2664 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b130609-1dd2-44b3-90db-48b3db3330ce-config-volume\") pod \"coredns-76f75df574-jm87r\" (UID: \"3b130609-1dd2-44b3-90db-48b3db3330ce\") " pod="kube-system/coredns-76f75df574-jm87r" Sep 4 18:08:27.932147 kubelet[2664]: I0904 18:08:27.931522 2664 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34cca147-c799-40d7-a541-5d0353aa3f8d-tigera-ca-bundle\") pod \"calico-kube-controllers-7785d6cbd7-wkdsv\" (UID: 
\"34cca147-c799-40d7-a541-5d0353aa3f8d\") " pod="calico-system/calico-kube-controllers-7785d6cbd7-wkdsv" Sep 4 18:08:27.932147 kubelet[2664]: I0904 18:08:27.931609 2664 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-277np\" (UniqueName: \"kubernetes.io/projected/3b130609-1dd2-44b3-90db-48b3db3330ce-kube-api-access-277np\") pod \"coredns-76f75df574-jm87r\" (UID: \"3b130609-1dd2-44b3-90db-48b3db3330ce\") " pod="kube-system/coredns-76f75df574-jm87r" Sep 4 18:08:27.932147 kubelet[2664]: I0904 18:08:27.931647 2664 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a7bc586f-0539-46c3-a4a7-5297cb2ca3b1-config-volume\") pod \"coredns-76f75df574-pm852\" (UID: \"a7bc586f-0539-46c3-a4a7-5297cb2ca3b1\") " pod="kube-system/coredns-76f75df574-pm852" Sep 4 18:08:27.932147 kubelet[2664]: I0904 18:08:27.931793 2664 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzdl7\" (UniqueName: \"kubernetes.io/projected/34cca147-c799-40d7-a541-5d0353aa3f8d-kube-api-access-lzdl7\") pod \"calico-kube-controllers-7785d6cbd7-wkdsv\" (UID: \"34cca147-c799-40d7-a541-5d0353aa3f8d\") " pod="calico-system/calico-kube-controllers-7785d6cbd7-wkdsv" Sep 4 18:08:28.072142 containerd[1450]: time="2024-09-04T18:08:28.072085101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-pm852,Uid:a7bc586f-0539-46c3-a4a7-5297cb2ca3b1,Namespace:kube-system,Attempt:0,}" Sep 4 18:08:28.095706 containerd[1450]: time="2024-09-04T18:08:28.095416388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-jm87r,Uid:3b130609-1dd2-44b3-90db-48b3db3330ce,Namespace:kube-system,Attempt:0,}" Sep 4 18:08:28.116453 containerd[1450]: time="2024-09-04T18:08:28.116368989Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-7785d6cbd7-wkdsv,Uid:34cca147-c799-40d7-a541-5d0353aa3f8d,Namespace:calico-system,Attempt:0,}" Sep 4 18:08:28.535913 containerd[1450]: time="2024-09-04T18:08:28.535754176Z" level=error msg="Failed to destroy network for sandbox \"5be49ce2fdd37f26d25b45a72b67fd584bc32a2bc7ae22535930dca65e4a42f8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 18:08:28.540452 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5be49ce2fdd37f26d25b45a72b67fd584bc32a2bc7ae22535930dca65e4a42f8-shm.mount: Deactivated successfully. Sep 4 18:08:28.544525 containerd[1450]: time="2024-09-04T18:08:28.544422034Z" level=error msg="Failed to destroy network for sandbox \"fb316156dd2462ed8f82bf2f36c041260ff7a11c0afdcf57f7c6b7d5ca5642bf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 18:08:28.548827 containerd[1450]: time="2024-09-04T18:08:28.548751465Z" level=error msg="encountered an error cleaning up failed sandbox \"5be49ce2fdd37f26d25b45a72b67fd584bc32a2bc7ae22535930dca65e4a42f8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 18:08:28.550807 containerd[1450]: time="2024-09-04T18:08:28.549057530Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-jm87r,Uid:3b130609-1dd2-44b3-90db-48b3db3330ce,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5be49ce2fdd37f26d25b45a72b67fd584bc32a2bc7ae22535930dca65e4a42f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 18:08:28.549752 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fb316156dd2462ed8f82bf2f36c041260ff7a11c0afdcf57f7c6b7d5ca5642bf-shm.mount: Deactivated successfully. Sep 4 18:08:28.551044 kubelet[2664]: E0904 18:08:28.549360 2664 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5be49ce2fdd37f26d25b45a72b67fd584bc32a2bc7ae22535930dca65e4a42f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 18:08:28.551044 kubelet[2664]: E0904 18:08:28.549426 2664 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5be49ce2fdd37f26d25b45a72b67fd584bc32a2bc7ae22535930dca65e4a42f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-jm87r" Sep 4 18:08:28.551044 kubelet[2664]: E0904 18:08:28.549455 2664 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5be49ce2fdd37f26d25b45a72b67fd584bc32a2bc7ae22535930dca65e4a42f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-jm87r" Sep 4 18:08:28.551167 kubelet[2664]: E0904 18:08:28.549536 2664 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-jm87r_kube-system(3b130609-1dd2-44b3-90db-48b3db3330ce)\" with CreatePodSandboxError: \"Failed to create 
sandbox for pod \\\"coredns-76f75df574-jm87r_kube-system(3b130609-1dd2-44b3-90db-48b3db3330ce)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5be49ce2fdd37f26d25b45a72b67fd584bc32a2bc7ae22535930dca65e4a42f8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-jm87r" podUID="3b130609-1dd2-44b3-90db-48b3db3330ce" Sep 4 18:08:28.551388 containerd[1450]: time="2024-09-04T18:08:28.551359093Z" level=error msg="Failed to destroy network for sandbox \"c8ffe77d2f8253aeae4bac52c7ca1e81a5043bea603b1c69fdbc3739bfb02850\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 18:08:28.552705 containerd[1450]: time="2024-09-04T18:08:28.551700754Z" level=error msg="encountered an error cleaning up failed sandbox \"fb316156dd2462ed8f82bf2f36c041260ff7a11c0afdcf57f7c6b7d5ca5642bf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 18:08:28.552705 containerd[1450]: time="2024-09-04T18:08:28.551797125Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-pm852,Uid:a7bc586f-0539-46c3-a4a7-5297cb2ca3b1,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fb316156dd2462ed8f82bf2f36c041260ff7a11c0afdcf57f7c6b7d5ca5642bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 18:08:28.553090 containerd[1450]: time="2024-09-04T18:08:28.553055949Z" level=error msg="encountered an 
error cleaning up failed sandbox \"c8ffe77d2f8253aeae4bac52c7ca1e81a5043bea603b1c69fdbc3739bfb02850\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 18:08:28.555693 kubelet[2664]: E0904 18:08:28.555061 2664 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb316156dd2462ed8f82bf2f36c041260ff7a11c0afdcf57f7c6b7d5ca5642bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 18:08:28.557443 containerd[1450]: time="2024-09-04T18:08:28.555804892Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7785d6cbd7-wkdsv,Uid:34cca147-c799-40d7-a541-5d0353aa3f8d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c8ffe77d2f8253aeae4bac52c7ca1e81a5043bea603b1c69fdbc3739bfb02850\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 18:08:28.556404 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c8ffe77d2f8253aeae4bac52c7ca1e81a5043bea603b1c69fdbc3739bfb02850-shm.mount: Deactivated successfully. 
Sep 4 18:08:28.557863 kubelet[2664]: E0904 18:08:28.555936 2664 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb316156dd2462ed8f82bf2f36c041260ff7a11c0afdcf57f7c6b7d5ca5642bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-pm852" Sep 4 18:08:28.557863 kubelet[2664]: E0904 18:08:28.556083 2664 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb316156dd2462ed8f82bf2f36c041260ff7a11c0afdcf57f7c6b7d5ca5642bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-pm852" Sep 4 18:08:28.557863 kubelet[2664]: E0904 18:08:28.556197 2664 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-pm852_kube-system(a7bc586f-0539-46c3-a4a7-5297cb2ca3b1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-pm852_kube-system(a7bc586f-0539-46c3-a4a7-5297cb2ca3b1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fb316156dd2462ed8f82bf2f36c041260ff7a11c0afdcf57f7c6b7d5ca5642bf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-pm852" podUID="a7bc586f-0539-46c3-a4a7-5297cb2ca3b1" Sep 4 18:08:28.559087 kubelet[2664]: E0904 18:08:28.558279 2664 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"c8ffe77d2f8253aeae4bac52c7ca1e81a5043bea603b1c69fdbc3739bfb02850\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 18:08:28.559087 kubelet[2664]: E0904 18:08:28.558326 2664 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8ffe77d2f8253aeae4bac52c7ca1e81a5043bea603b1c69fdbc3739bfb02850\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7785d6cbd7-wkdsv" Sep 4 18:08:28.559087 kubelet[2664]: E0904 18:08:28.558351 2664 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8ffe77d2f8253aeae4bac52c7ca1e81a5043bea603b1c69fdbc3739bfb02850\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7785d6cbd7-wkdsv" Sep 4 18:08:28.560234 kubelet[2664]: E0904 18:08:28.558405 2664 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7785d6cbd7-wkdsv_calico-system(34cca147-c799-40d7-a541-5d0353aa3f8d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7785d6cbd7-wkdsv_calico-system(34cca147-c799-40d7-a541-5d0353aa3f8d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c8ffe77d2f8253aeae4bac52c7ca1e81a5043bea603b1c69fdbc3739bfb02850\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7785d6cbd7-wkdsv" podUID="34cca147-c799-40d7-a541-5d0353aa3f8d" Sep 4 18:08:28.784225 kubelet[2664]: I0904 18:08:28.784058 2664 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8ffe77d2f8253aeae4bac52c7ca1e81a5043bea603b1c69fdbc3739bfb02850" Sep 4 18:08:28.786115 containerd[1450]: time="2024-09-04T18:08:28.785966933Z" level=info msg="StopPodSandbox for \"c8ffe77d2f8253aeae4bac52c7ca1e81a5043bea603b1c69fdbc3739bfb02850\"" Sep 4 18:08:28.789925 containerd[1450]: time="2024-09-04T18:08:28.789161483Z" level=info msg="Ensure that sandbox c8ffe77d2f8253aeae4bac52c7ca1e81a5043bea603b1c69fdbc3739bfb02850 in task-service has been cleanup successfully" Sep 4 18:08:28.804685 kubelet[2664]: I0904 18:08:28.804362 2664 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb316156dd2462ed8f82bf2f36c041260ff7a11c0afdcf57f7c6b7d5ca5642bf" Sep 4 18:08:28.831518 containerd[1450]: time="2024-09-04T18:08:28.830642527Z" level=info msg="StopPodSandbox for \"fb316156dd2462ed8f82bf2f36c041260ff7a11c0afdcf57f7c6b7d5ca5642bf\"" Sep 4 18:08:28.831518 containerd[1450]: time="2024-09-04T18:08:28.830998095Z" level=info msg="Ensure that sandbox fb316156dd2462ed8f82bf2f36c041260ff7a11c0afdcf57f7c6b7d5ca5642bf in task-service has been cleanup successfully" Sep 4 18:08:28.846648 containerd[1450]: time="2024-09-04T18:08:28.836173455Z" level=info msg="StopPodSandbox for \"5be49ce2fdd37f26d25b45a72b67fd584bc32a2bc7ae22535930dca65e4a42f8\"" Sep 4 18:08:28.846648 containerd[1450]: time="2024-09-04T18:08:28.836579869Z" level=info msg="Ensure that sandbox 5be49ce2fdd37f26d25b45a72b67fd584bc32a2bc7ae22535930dca65e4a42f8 in task-service has been cleanup successfully" Sep 4 18:08:28.846843 kubelet[2664]: I0904 18:08:28.834881 2664 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5be49ce2fdd37f26d25b45a72b67fd584bc32a2bc7ae22535930dca65e4a42f8" Sep 
4 18:08:28.890809 containerd[1450]: time="2024-09-04T18:08:28.890644005Z" level=error msg="StopPodSandbox for \"c8ffe77d2f8253aeae4bac52c7ca1e81a5043bea603b1c69fdbc3739bfb02850\" failed" error="failed to destroy network for sandbox \"c8ffe77d2f8253aeae4bac52c7ca1e81a5043bea603b1c69fdbc3739bfb02850\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 18:08:28.893699 kubelet[2664]: E0904 18:08:28.893026 2664 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c8ffe77d2f8253aeae4bac52c7ca1e81a5043bea603b1c69fdbc3739bfb02850\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c8ffe77d2f8253aeae4bac52c7ca1e81a5043bea603b1c69fdbc3739bfb02850" Sep 4 18:08:28.893699 kubelet[2664]: E0904 18:08:28.893098 2664 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c8ffe77d2f8253aeae4bac52c7ca1e81a5043bea603b1c69fdbc3739bfb02850"} Sep 4 18:08:28.893699 kubelet[2664]: E0904 18:08:28.893154 2664 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"34cca147-c799-40d7-a541-5d0353aa3f8d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c8ffe77d2f8253aeae4bac52c7ca1e81a5043bea603b1c69fdbc3739bfb02850\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 18:08:28.893699 kubelet[2664]: E0904 18:08:28.893198 2664 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"34cca147-c799-40d7-a541-5d0353aa3f8d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c8ffe77d2f8253aeae4bac52c7ca1e81a5043bea603b1c69fdbc3739bfb02850\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7785d6cbd7-wkdsv" podUID="34cca147-c799-40d7-a541-5d0353aa3f8d" Sep 4 18:08:28.914776 containerd[1450]: time="2024-09-04T18:08:28.914636785Z" level=error msg="StopPodSandbox for \"5be49ce2fdd37f26d25b45a72b67fd584bc32a2bc7ae22535930dca65e4a42f8\" failed" error="failed to destroy network for sandbox \"5be49ce2fdd37f26d25b45a72b67fd584bc32a2bc7ae22535930dca65e4a42f8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 18:08:28.915528 kubelet[2664]: E0904 18:08:28.915084 2664 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5be49ce2fdd37f26d25b45a72b67fd584bc32a2bc7ae22535930dca65e4a42f8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5be49ce2fdd37f26d25b45a72b67fd584bc32a2bc7ae22535930dca65e4a42f8" Sep 4 18:08:28.915528 kubelet[2664]: E0904 18:08:28.915142 2664 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5be49ce2fdd37f26d25b45a72b67fd584bc32a2bc7ae22535930dca65e4a42f8"} Sep 4 18:08:28.915528 kubelet[2664]: E0904 18:08:28.915199 2664 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3b130609-1dd2-44b3-90db-48b3db3330ce\" with KillPodSandboxError: \"rpc error: 
code = Unknown desc = failed to destroy network for sandbox \\\"5be49ce2fdd37f26d25b45a72b67fd584bc32a2bc7ae22535930dca65e4a42f8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 18:08:28.915528 kubelet[2664]: E0904 18:08:28.915248 2664 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3b130609-1dd2-44b3-90db-48b3db3330ce\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5be49ce2fdd37f26d25b45a72b67fd584bc32a2bc7ae22535930dca65e4a42f8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-jm87r" podUID="3b130609-1dd2-44b3-90db-48b3db3330ce" Sep 4 18:08:28.999145 kubelet[2664]: E0904 18:08:28.968276 2664 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fb316156dd2462ed8f82bf2f36c041260ff7a11c0afdcf57f7c6b7d5ca5642bf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fb316156dd2462ed8f82bf2f36c041260ff7a11c0afdcf57f7c6b7d5ca5642bf" Sep 4 18:08:28.999145 kubelet[2664]: E0904 18:08:28.968333 2664 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fb316156dd2462ed8f82bf2f36c041260ff7a11c0afdcf57f7c6b7d5ca5642bf"} Sep 4 18:08:28.999145 kubelet[2664]: E0904 18:08:28.968394 2664 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a7bc586f-0539-46c3-a4a7-5297cb2ca3b1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to 
destroy network for sandbox \\\"fb316156dd2462ed8f82bf2f36c041260ff7a11c0afdcf57f7c6b7d5ca5642bf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 18:08:28.999145 kubelet[2664]: E0904 18:08:28.968433 2664 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a7bc586f-0539-46c3-a4a7-5297cb2ca3b1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fb316156dd2462ed8f82bf2f36c041260ff7a11c0afdcf57f7c6b7d5ca5642bf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-pm852" podUID="a7bc586f-0539-46c3-a4a7-5297cb2ca3b1" Sep 4 18:08:28.999347 containerd[1450]: time="2024-09-04T18:08:28.967987933Z" level=error msg="StopPodSandbox for \"fb316156dd2462ed8f82bf2f36c041260ff7a11c0afdcf57f7c6b7d5ca5642bf\" failed" error="failed to destroy network for sandbox \"fb316156dd2462ed8f82bf2f36c041260ff7a11c0afdcf57f7c6b7d5ca5642bf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 18:08:29.566342 systemd[1]: Created slice kubepods-besteffort-podcd6b6d64_badf_4e22_9ca4_6086c67f1ef2.slice - libcontainer container kubepods-besteffort-podcd6b6d64_badf_4e22_9ca4_6086c67f1ef2.slice. 
Sep 4 18:08:29.574365 containerd[1450]: time="2024-09-04T18:08:29.574239398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7r577,Uid:cd6b6d64-badf-4e22-9ca4-6086c67f1ef2,Namespace:calico-system,Attempt:0,}" Sep 4 18:08:29.736133 containerd[1450]: time="2024-09-04T18:08:29.736032396Z" level=error msg="Failed to destroy network for sandbox \"991bfe5e3832bfa15ebaa43599e1810058343de6981695890994ba177d479285\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 18:08:29.739014 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-991bfe5e3832bfa15ebaa43599e1810058343de6981695890994ba177d479285-shm.mount: Deactivated successfully. Sep 4 18:08:29.739913 containerd[1450]: time="2024-09-04T18:08:29.739065834Z" level=error msg="encountered an error cleaning up failed sandbox \"991bfe5e3832bfa15ebaa43599e1810058343de6981695890994ba177d479285\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 18:08:29.739913 containerd[1450]: time="2024-09-04T18:08:29.739153388Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7r577,Uid:cd6b6d64-badf-4e22-9ca4-6086c67f1ef2,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"991bfe5e3832bfa15ebaa43599e1810058343de6981695890994ba177d479285\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 18:08:29.740218 kubelet[2664]: E0904 18:08:29.740162 2664 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"991bfe5e3832bfa15ebaa43599e1810058343de6981695890994ba177d479285\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 18:08:29.740288 kubelet[2664]: E0904 18:08:29.740260 2664 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"991bfe5e3832bfa15ebaa43599e1810058343de6981695890994ba177d479285\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7r577" Sep 4 18:08:29.740325 kubelet[2664]: E0904 18:08:29.740312 2664 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"991bfe5e3832bfa15ebaa43599e1810058343de6981695890994ba177d479285\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7r577" Sep 4 18:08:29.740720 kubelet[2664]: E0904 18:08:29.740429 2664 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7r577_calico-system(cd6b6d64-badf-4e22-9ca4-6086c67f1ef2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7r577_calico-system(cd6b6d64-badf-4e22-9ca4-6086c67f1ef2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"991bfe5e3832bfa15ebaa43599e1810058343de6981695890994ba177d479285\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7r577" 
podUID="cd6b6d64-badf-4e22-9ca4-6086c67f1ef2" Sep 4 18:08:29.841821 kubelet[2664]: I0904 18:08:29.841348 2664 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="991bfe5e3832bfa15ebaa43599e1810058343de6981695890994ba177d479285" Sep 4 18:08:29.845827 containerd[1450]: time="2024-09-04T18:08:29.845005659Z" level=info msg="StopPodSandbox for \"991bfe5e3832bfa15ebaa43599e1810058343de6981695890994ba177d479285\"" Sep 4 18:08:29.845827 containerd[1450]: time="2024-09-04T18:08:29.845354585Z" level=info msg="Ensure that sandbox 991bfe5e3832bfa15ebaa43599e1810058343de6981695890994ba177d479285 in task-service has been cleanup successfully" Sep 4 18:08:29.900592 containerd[1450]: time="2024-09-04T18:08:29.900483276Z" level=error msg="StopPodSandbox for \"991bfe5e3832bfa15ebaa43599e1810058343de6981695890994ba177d479285\" failed" error="failed to destroy network for sandbox \"991bfe5e3832bfa15ebaa43599e1810058343de6981695890994ba177d479285\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 18:08:29.901099 kubelet[2664]: E0904 18:08:29.901054 2664 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"991bfe5e3832bfa15ebaa43599e1810058343de6981695890994ba177d479285\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="991bfe5e3832bfa15ebaa43599e1810058343de6981695890994ba177d479285" Sep 4 18:08:29.901189 kubelet[2664]: E0904 18:08:29.901163 2664 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"991bfe5e3832bfa15ebaa43599e1810058343de6981695890994ba177d479285"} Sep 4 18:08:29.901380 kubelet[2664]: E0904 18:08:29.901350 2664 
kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cd6b6d64-badf-4e22-9ca4-6086c67f1ef2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"991bfe5e3832bfa15ebaa43599e1810058343de6981695890994ba177d479285\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 18:08:29.901479 kubelet[2664]: E0904 18:08:29.901453 2664 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cd6b6d64-badf-4e22-9ca4-6086c67f1ef2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"991bfe5e3832bfa15ebaa43599e1810058343de6981695890994ba177d479285\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7r577" podUID="cd6b6d64-badf-4e22-9ca4-6086c67f1ef2" Sep 4 18:08:32.352701 kubelet[2664]: I0904 18:08:32.352563 2664 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 18:08:37.261741 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2351846024.mount: Deactivated successfully. 
Sep 4 18:08:37.388446 containerd[1450]: time="2024-09-04T18:08:37.359366441Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.1: active requests=0, bytes read=117873564" Sep 4 18:08:37.389477 containerd[1450]: time="2024-09-04T18:08:37.389363001Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 18:08:37.391941 containerd[1450]: time="2024-09-04T18:08:37.391863615Z" level=info msg="ImageCreate event name:\"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 18:08:37.393537 containerd[1450]: time="2024-09-04T18:08:37.393397505Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 18:08:37.396271 containerd[1450]: time="2024-09-04T18:08:37.394986428Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.1\" with image id \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\", size \"117873426\" in 9.608715643s" Sep 4 18:08:37.396271 containerd[1450]: time="2024-09-04T18:08:37.395050518Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\" returns image reference \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\"" Sep 4 18:08:37.441359 containerd[1450]: time="2024-09-04T18:08:37.441294102Z" level=info msg="CreateContainer within sandbox \"5f83c452fc5fedba37b459574425833546c28997acec43b92ac6345143eb09f5\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 4 18:08:37.489970 containerd[1450]: time="2024-09-04T18:08:37.489818879Z" level=info msg="CreateContainer 
within sandbox \"5f83c452fc5fedba37b459574425833546c28997acec43b92ac6345143eb09f5\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"24d6ca5424c854da5544134fbcd01408689967003e04afe542ff8a1617a188e6\"" Sep 4 18:08:37.493241 containerd[1450]: time="2024-09-04T18:08:37.491649787Z" level=info msg="StartContainer for \"24d6ca5424c854da5544134fbcd01408689967003e04afe542ff8a1617a188e6\"" Sep 4 18:08:37.709088 systemd[1]: Started cri-containerd-24d6ca5424c854da5544134fbcd01408689967003e04afe542ff8a1617a188e6.scope - libcontainer container 24d6ca5424c854da5544134fbcd01408689967003e04afe542ff8a1617a188e6. Sep 4 18:08:37.769878 containerd[1450]: time="2024-09-04T18:08:37.769552938Z" level=info msg="StartContainer for \"24d6ca5424c854da5544134fbcd01408689967003e04afe542ff8a1617a188e6\" returns successfully" Sep 4 18:08:37.887215 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 4 18:08:37.888226 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. 
Sep 4 18:08:40.553370 containerd[1450]: time="2024-09-04T18:08:40.553182313Z" level=info msg="StopPodSandbox for \"991bfe5e3832bfa15ebaa43599e1810058343de6981695890994ba177d479285\"" Sep 4 18:08:40.754934 kernel: bpftool[3951]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 4 18:08:40.902464 kubelet[2664]: I0904 18:08:40.902327 2664 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-9c6sv" podStartSLOduration=5.202116473 podStartE2EDuration="22.8476174s" podCreationTimestamp="2024-09-04 18:08:18 +0000 UTC" firstStartedPulling="2024-09-04 18:08:19.749946567 +0000 UTC m=+27.485731423" lastFinishedPulling="2024-09-04 18:08:37.395447474 +0000 UTC m=+45.131232350" observedRunningTime="2024-09-04 18:08:38.003931221 +0000 UTC m=+45.739716077" watchObservedRunningTime="2024-09-04 18:08:40.8476174 +0000 UTC m=+48.583402246" Sep 4 18:08:41.136769 systemd-networkd[1346]: vxlan.calico: Link UP Sep 4 18:08:41.136779 systemd-networkd[1346]: vxlan.calico: Gained carrier Sep 4 18:08:41.420866 containerd[1450]: 2024-09-04 18:08:40.846 [INFO][3938] k8s.go 608: Cleaning up netns ContainerID="991bfe5e3832bfa15ebaa43599e1810058343de6981695890994ba177d479285" Sep 4 18:08:41.420866 containerd[1450]: 2024-09-04 18:08:40.847 [INFO][3938] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="991bfe5e3832bfa15ebaa43599e1810058343de6981695890994ba177d479285" iface="eth0" netns="/var/run/netns/cni-9a1d18a7-af7f-e74f-2951-f2908ffe6909" Sep 4 18:08:41.420866 containerd[1450]: 2024-09-04 18:08:40.848 [INFO][3938] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="991bfe5e3832bfa15ebaa43599e1810058343de6981695890994ba177d479285" iface="eth0" netns="/var/run/netns/cni-9a1d18a7-af7f-e74f-2951-f2908ffe6909" Sep 4 18:08:41.420866 containerd[1450]: 2024-09-04 18:08:40.850 [INFO][3938] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="991bfe5e3832bfa15ebaa43599e1810058343de6981695890994ba177d479285" iface="eth0" netns="/var/run/netns/cni-9a1d18a7-af7f-e74f-2951-f2908ffe6909" Sep 4 18:08:41.420866 containerd[1450]: 2024-09-04 18:08:40.850 [INFO][3938] k8s.go 615: Releasing IP address(es) ContainerID="991bfe5e3832bfa15ebaa43599e1810058343de6981695890994ba177d479285" Sep 4 18:08:41.420866 containerd[1450]: 2024-09-04 18:08:40.850 [INFO][3938] utils.go 188: Calico CNI releasing IP address ContainerID="991bfe5e3832bfa15ebaa43599e1810058343de6981695890994ba177d479285" Sep 4 18:08:41.420866 containerd[1450]: 2024-09-04 18:08:41.367 [INFO][3952] ipam_plugin.go 417: Releasing address using handleID ContainerID="991bfe5e3832bfa15ebaa43599e1810058343de6981695890994ba177d479285" HandleID="k8s-pod-network.991bfe5e3832bfa15ebaa43599e1810058343de6981695890994ba177d479285" Workload="ci--4054--1--0--c--4d101ae770.novalocal-k8s-csi--node--driver--7r577-eth0" Sep 4 18:08:41.420866 containerd[1450]: 2024-09-04 18:08:41.369 [INFO][3952] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 18:08:41.420866 containerd[1450]: 2024-09-04 18:08:41.373 [INFO][3952] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 18:08:41.420866 containerd[1450]: 2024-09-04 18:08:41.399 [WARNING][3952] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="991bfe5e3832bfa15ebaa43599e1810058343de6981695890994ba177d479285" HandleID="k8s-pod-network.991bfe5e3832bfa15ebaa43599e1810058343de6981695890994ba177d479285" Workload="ci--4054--1--0--c--4d101ae770.novalocal-k8s-csi--node--driver--7r577-eth0" Sep 4 18:08:41.420866 containerd[1450]: 2024-09-04 18:08:41.400 [INFO][3952] ipam_plugin.go 445: Releasing address using workloadID ContainerID="991bfe5e3832bfa15ebaa43599e1810058343de6981695890994ba177d479285" HandleID="k8s-pod-network.991bfe5e3832bfa15ebaa43599e1810058343de6981695890994ba177d479285" Workload="ci--4054--1--0--c--4d101ae770.novalocal-k8s-csi--node--driver--7r577-eth0" Sep 4 18:08:41.420866 containerd[1450]: 2024-09-04 18:08:41.406 [INFO][3952] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 18:08:41.420866 containerd[1450]: 2024-09-04 18:08:41.413 [INFO][3938] k8s.go 621: Teardown processing complete. ContainerID="991bfe5e3832bfa15ebaa43599e1810058343de6981695890994ba177d479285" Sep 4 18:08:41.426937 systemd[1]: run-netns-cni\x2d9a1d18a7\x2daf7f\x2de74f\x2d2951\x2df2908ffe6909.mount: Deactivated successfully. 
Sep 4 18:08:41.443467 containerd[1450]: time="2024-09-04T18:08:41.443260068Z" level=info msg="TearDown network for sandbox \"991bfe5e3832bfa15ebaa43599e1810058343de6981695890994ba177d479285\" successfully" Sep 4 18:08:41.443467 containerd[1450]: time="2024-09-04T18:08:41.443457770Z" level=info msg="StopPodSandbox for \"991bfe5e3832bfa15ebaa43599e1810058343de6981695890994ba177d479285\" returns successfully" Sep 4 18:08:41.453705 containerd[1450]: time="2024-09-04T18:08:41.453448766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7r577,Uid:cd6b6d64-badf-4e22-9ca4-6086c67f1ef2,Namespace:calico-system,Attempt:1,}" Sep 4 18:08:41.554835 containerd[1450]: time="2024-09-04T18:08:41.554768807Z" level=info msg="StopPodSandbox for \"5be49ce2fdd37f26d25b45a72b67fd584bc32a2bc7ae22535930dca65e4a42f8\"" Sep 4 18:08:41.568610 containerd[1450]: time="2024-09-04T18:08:41.568272648Z" level=info msg="StopPodSandbox for \"fb316156dd2462ed8f82bf2f36c041260ff7a11c0afdcf57f7c6b7d5ca5642bf\"" Sep 4 18:08:41.569547 containerd[1450]: time="2024-09-04T18:08:41.569479444Z" level=info msg="StopPodSandbox for \"c8ffe77d2f8253aeae4bac52c7ca1e81a5043bea603b1c69fdbc3739bfb02850\"" Sep 4 18:08:41.796683 containerd[1450]: 2024-09-04 18:08:41.692 [INFO][4069] k8s.go 608: Cleaning up netns ContainerID="5be49ce2fdd37f26d25b45a72b67fd584bc32a2bc7ae22535930dca65e4a42f8" Sep 4 18:08:41.796683 containerd[1450]: 2024-09-04 18:08:41.693 [INFO][4069] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="5be49ce2fdd37f26d25b45a72b67fd584bc32a2bc7ae22535930dca65e4a42f8" iface="eth0" netns="/var/run/netns/cni-d8338531-7411-939e-7f4a-10cb3ea7b71d" Sep 4 18:08:41.796683 containerd[1450]: 2024-09-04 18:08:41.694 [INFO][4069] dataplane_linux.go 541: Entered netns, deleting veth. 
ContainerID="5be49ce2fdd37f26d25b45a72b67fd584bc32a2bc7ae22535930dca65e4a42f8" iface="eth0" netns="/var/run/netns/cni-d8338531-7411-939e-7f4a-10cb3ea7b71d" Sep 4 18:08:41.796683 containerd[1450]: 2024-09-04 18:08:41.695 [INFO][4069] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="5be49ce2fdd37f26d25b45a72b67fd584bc32a2bc7ae22535930dca65e4a42f8" iface="eth0" netns="/var/run/netns/cni-d8338531-7411-939e-7f4a-10cb3ea7b71d" Sep 4 18:08:41.796683 containerd[1450]: 2024-09-04 18:08:41.695 [INFO][4069] k8s.go 615: Releasing IP address(es) ContainerID="5be49ce2fdd37f26d25b45a72b67fd584bc32a2bc7ae22535930dca65e4a42f8" Sep 4 18:08:41.796683 containerd[1450]: 2024-09-04 18:08:41.695 [INFO][4069] utils.go 188: Calico CNI releasing IP address ContainerID="5be49ce2fdd37f26d25b45a72b67fd584bc32a2bc7ae22535930dca65e4a42f8" Sep 4 18:08:41.796683 containerd[1450]: 2024-09-04 18:08:41.771 [INFO][4093] ipam_plugin.go 417: Releasing address using handleID ContainerID="5be49ce2fdd37f26d25b45a72b67fd584bc32a2bc7ae22535930dca65e4a42f8" HandleID="k8s-pod-network.5be49ce2fdd37f26d25b45a72b67fd584bc32a2bc7ae22535930dca65e4a42f8" Workload="ci--4054--1--0--c--4d101ae770.novalocal-k8s-coredns--76f75df574--jm87r-eth0" Sep 4 18:08:41.796683 containerd[1450]: 2024-09-04 18:08:41.771 [INFO][4093] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 18:08:41.796683 containerd[1450]: 2024-09-04 18:08:41.771 [INFO][4093] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 18:08:41.796683 containerd[1450]: 2024-09-04 18:08:41.785 [WARNING][4093] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5be49ce2fdd37f26d25b45a72b67fd584bc32a2bc7ae22535930dca65e4a42f8" HandleID="k8s-pod-network.5be49ce2fdd37f26d25b45a72b67fd584bc32a2bc7ae22535930dca65e4a42f8" Workload="ci--4054--1--0--c--4d101ae770.novalocal-k8s-coredns--76f75df574--jm87r-eth0" Sep 4 18:08:41.796683 containerd[1450]: 2024-09-04 18:08:41.785 [INFO][4093] ipam_plugin.go 445: Releasing address using workloadID ContainerID="5be49ce2fdd37f26d25b45a72b67fd584bc32a2bc7ae22535930dca65e4a42f8" HandleID="k8s-pod-network.5be49ce2fdd37f26d25b45a72b67fd584bc32a2bc7ae22535930dca65e4a42f8" Workload="ci--4054--1--0--c--4d101ae770.novalocal-k8s-coredns--76f75df574--jm87r-eth0" Sep 4 18:08:41.796683 containerd[1450]: 2024-09-04 18:08:41.788 [INFO][4093] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 18:08:41.796683 containerd[1450]: 2024-09-04 18:08:41.794 [INFO][4069] k8s.go 621: Teardown processing complete. ContainerID="5be49ce2fdd37f26d25b45a72b67fd584bc32a2bc7ae22535930dca65e4a42f8" Sep 4 18:08:41.802597 containerd[1450]: time="2024-09-04T18:08:41.802019810Z" level=info msg="TearDown network for sandbox \"5be49ce2fdd37f26d25b45a72b67fd584bc32a2bc7ae22535930dca65e4a42f8\" successfully" Sep 4 18:08:41.802597 containerd[1450]: time="2024-09-04T18:08:41.802350571Z" level=info msg="StopPodSandbox for \"5be49ce2fdd37f26d25b45a72b67fd584bc32a2bc7ae22535930dca65e4a42f8\" returns successfully" Sep 4 18:08:41.804885 containerd[1450]: time="2024-09-04T18:08:41.803497434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-jm87r,Uid:3b130609-1dd2-44b3-90db-48b3db3330ce,Namespace:kube-system,Attempt:1,}" Sep 4 18:08:41.804192 systemd[1]: run-netns-cni\x2dd8338531\x2d7411\x2d939e\x2d7f4a\x2d10cb3ea7b71d.mount: Deactivated successfully. 
Sep 4 18:08:41.854322 containerd[1450]: 2024-09-04 18:08:41.745 [INFO][4061] k8s.go 608: Cleaning up netns ContainerID="fb316156dd2462ed8f82bf2f36c041260ff7a11c0afdcf57f7c6b7d5ca5642bf" Sep 4 18:08:41.854322 containerd[1450]: 2024-09-04 18:08:41.746 [INFO][4061] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="fb316156dd2462ed8f82bf2f36c041260ff7a11c0afdcf57f7c6b7d5ca5642bf" iface="eth0" netns="/var/run/netns/cni-ee77a0a3-6f84-9f98-6c3b-d65e89e2a144" Sep 4 18:08:41.854322 containerd[1450]: 2024-09-04 18:08:41.746 [INFO][4061] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="fb316156dd2462ed8f82bf2f36c041260ff7a11c0afdcf57f7c6b7d5ca5642bf" iface="eth0" netns="/var/run/netns/cni-ee77a0a3-6f84-9f98-6c3b-d65e89e2a144" Sep 4 18:08:41.854322 containerd[1450]: 2024-09-04 18:08:41.747 [INFO][4061] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="fb316156dd2462ed8f82bf2f36c041260ff7a11c0afdcf57f7c6b7d5ca5642bf" iface="eth0" netns="/var/run/netns/cni-ee77a0a3-6f84-9f98-6c3b-d65e89e2a144" Sep 4 18:08:41.854322 containerd[1450]: 2024-09-04 18:08:41.747 [INFO][4061] k8s.go 615: Releasing IP address(es) ContainerID="fb316156dd2462ed8f82bf2f36c041260ff7a11c0afdcf57f7c6b7d5ca5642bf" Sep 4 18:08:41.854322 containerd[1450]: 2024-09-04 18:08:41.747 [INFO][4061] utils.go 188: Calico CNI releasing IP address ContainerID="fb316156dd2462ed8f82bf2f36c041260ff7a11c0afdcf57f7c6b7d5ca5642bf" Sep 4 18:08:41.854322 containerd[1450]: 2024-09-04 18:08:41.815 [INFO][4104] ipam_plugin.go 417: Releasing address using handleID ContainerID="fb316156dd2462ed8f82bf2f36c041260ff7a11c0afdcf57f7c6b7d5ca5642bf" HandleID="k8s-pod-network.fb316156dd2462ed8f82bf2f36c041260ff7a11c0afdcf57f7c6b7d5ca5642bf" Workload="ci--4054--1--0--c--4d101ae770.novalocal-k8s-coredns--76f75df574--pm852-eth0" Sep 4 18:08:41.854322 containerd[1450]: 2024-09-04 18:08:41.820 [INFO][4104] ipam_plugin.go 358: About to acquire host-wide IPAM lock. 
Sep 4 18:08:41.854322 containerd[1450]: 2024-09-04 18:08:41.820 [INFO][4104] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 18:08:41.854322 containerd[1450]: 2024-09-04 18:08:41.832 [WARNING][4104] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="fb316156dd2462ed8f82bf2f36c041260ff7a11c0afdcf57f7c6b7d5ca5642bf" HandleID="k8s-pod-network.fb316156dd2462ed8f82bf2f36c041260ff7a11c0afdcf57f7c6b7d5ca5642bf" Workload="ci--4054--1--0--c--4d101ae770.novalocal-k8s-coredns--76f75df574--pm852-eth0" Sep 4 18:08:41.854322 containerd[1450]: 2024-09-04 18:08:41.832 [INFO][4104] ipam_plugin.go 445: Releasing address using workloadID ContainerID="fb316156dd2462ed8f82bf2f36c041260ff7a11c0afdcf57f7c6b7d5ca5642bf" HandleID="k8s-pod-network.fb316156dd2462ed8f82bf2f36c041260ff7a11c0afdcf57f7c6b7d5ca5642bf" Workload="ci--4054--1--0--c--4d101ae770.novalocal-k8s-coredns--76f75df574--pm852-eth0" Sep 4 18:08:41.854322 containerd[1450]: 2024-09-04 18:08:41.836 [INFO][4104] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 18:08:41.854322 containerd[1450]: 2024-09-04 18:08:41.848 [INFO][4061] k8s.go 621: Teardown processing complete. 
ContainerID="fb316156dd2462ed8f82bf2f36c041260ff7a11c0afdcf57f7c6b7d5ca5642bf" Sep 4 18:08:41.855625 containerd[1450]: time="2024-09-04T18:08:41.855457741Z" level=info msg="TearDown network for sandbox \"fb316156dd2462ed8f82bf2f36c041260ff7a11c0afdcf57f7c6b7d5ca5642bf\" successfully" Sep 4 18:08:41.855625 containerd[1450]: time="2024-09-04T18:08:41.855497856Z" level=info msg="StopPodSandbox for \"fb316156dd2462ed8f82bf2f36c041260ff7a11c0afdcf57f7c6b7d5ca5642bf\" returns successfully" Sep 4 18:08:41.862527 containerd[1450]: time="2024-09-04T18:08:41.862438506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-pm852,Uid:a7bc586f-0539-46c3-a4a7-5297cb2ca3b1,Namespace:kube-system,Attempt:1,}" Sep 4 18:08:41.868565 containerd[1450]: 2024-09-04 18:08:41.765 [INFO][4070] k8s.go 608: Cleaning up netns ContainerID="c8ffe77d2f8253aeae4bac52c7ca1e81a5043bea603b1c69fdbc3739bfb02850" Sep 4 18:08:41.868565 containerd[1450]: 2024-09-04 18:08:41.766 [INFO][4070] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="c8ffe77d2f8253aeae4bac52c7ca1e81a5043bea603b1c69fdbc3739bfb02850" iface="eth0" netns="/var/run/netns/cni-493d5e5f-de4e-7097-6905-dcb5f42e4b9f" Sep 4 18:08:41.868565 containerd[1450]: 2024-09-04 18:08:41.767 [INFO][4070] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="c8ffe77d2f8253aeae4bac52c7ca1e81a5043bea603b1c69fdbc3739bfb02850" iface="eth0" netns="/var/run/netns/cni-493d5e5f-de4e-7097-6905-dcb5f42e4b9f" Sep 4 18:08:41.868565 containerd[1450]: 2024-09-04 18:08:41.767 [INFO][4070] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="c8ffe77d2f8253aeae4bac52c7ca1e81a5043bea603b1c69fdbc3739bfb02850" iface="eth0" netns="/var/run/netns/cni-493d5e5f-de4e-7097-6905-dcb5f42e4b9f" Sep 4 18:08:41.868565 containerd[1450]: 2024-09-04 18:08:41.767 [INFO][4070] k8s.go 615: Releasing IP address(es) ContainerID="c8ffe77d2f8253aeae4bac52c7ca1e81a5043bea603b1c69fdbc3739bfb02850" Sep 4 18:08:41.868565 containerd[1450]: 2024-09-04 18:08:41.767 [INFO][4070] utils.go 188: Calico CNI releasing IP address ContainerID="c8ffe77d2f8253aeae4bac52c7ca1e81a5043bea603b1c69fdbc3739bfb02850" Sep 4 18:08:41.868565 containerd[1450]: 2024-09-04 18:08:41.825 [INFO][4110] ipam_plugin.go 417: Releasing address using handleID ContainerID="c8ffe77d2f8253aeae4bac52c7ca1e81a5043bea603b1c69fdbc3739bfb02850" HandleID="k8s-pod-network.c8ffe77d2f8253aeae4bac52c7ca1e81a5043bea603b1c69fdbc3739bfb02850" Workload="ci--4054--1--0--c--4d101ae770.novalocal-k8s-calico--kube--controllers--7785d6cbd7--wkdsv-eth0" Sep 4 18:08:41.868565 containerd[1450]: 2024-09-04 18:08:41.825 [INFO][4110] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 18:08:41.868565 containerd[1450]: 2024-09-04 18:08:41.836 [INFO][4110] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 18:08:41.868565 containerd[1450]: 2024-09-04 18:08:41.850 [WARNING][4110] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c8ffe77d2f8253aeae4bac52c7ca1e81a5043bea603b1c69fdbc3739bfb02850" HandleID="k8s-pod-network.c8ffe77d2f8253aeae4bac52c7ca1e81a5043bea603b1c69fdbc3739bfb02850" Workload="ci--4054--1--0--c--4d101ae770.novalocal-k8s-calico--kube--controllers--7785d6cbd7--wkdsv-eth0" Sep 4 18:08:41.868565 containerd[1450]: 2024-09-04 18:08:41.850 [INFO][4110] ipam_plugin.go 445: Releasing address using workloadID ContainerID="c8ffe77d2f8253aeae4bac52c7ca1e81a5043bea603b1c69fdbc3739bfb02850" HandleID="k8s-pod-network.c8ffe77d2f8253aeae4bac52c7ca1e81a5043bea603b1c69fdbc3739bfb02850" Workload="ci--4054--1--0--c--4d101ae770.novalocal-k8s-calico--kube--controllers--7785d6cbd7--wkdsv-eth0" Sep 4 18:08:41.868565 containerd[1450]: 2024-09-04 18:08:41.853 [INFO][4110] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 18:08:41.868565 containerd[1450]: 2024-09-04 18:08:41.862 [INFO][4070] k8s.go 621: Teardown processing complete. ContainerID="c8ffe77d2f8253aeae4bac52c7ca1e81a5043bea603b1c69fdbc3739bfb02850" Sep 4 18:08:41.870890 containerd[1450]: time="2024-09-04T18:08:41.869272395Z" level=info msg="TearDown network for sandbox \"c8ffe77d2f8253aeae4bac52c7ca1e81a5043bea603b1c69fdbc3739bfb02850\" successfully" Sep 4 18:08:41.870890 containerd[1450]: time="2024-09-04T18:08:41.869328120Z" level=info msg="StopPodSandbox for \"c8ffe77d2f8253aeae4bac52c7ca1e81a5043bea603b1c69fdbc3739bfb02850\" returns successfully" Sep 4 18:08:41.871221 containerd[1450]: time="2024-09-04T18:08:41.870999478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7785d6cbd7-wkdsv,Uid:34cca147-c799-40d7-a541-5d0353aa3f8d,Namespace:calico-system,Attempt:1,}" Sep 4 18:08:42.021052 systemd-networkd[1346]: cali8ac2c5576f2: Link UP Sep 4 18:08:42.021998 systemd-networkd[1346]: cali8ac2c5576f2: Gained carrier Sep 4 18:08:42.077644 containerd[1450]: 2024-09-04 18:08:41.750 [INFO][4079] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint 
projectcalico.org/v3} {ci--4054--1--0--c--4d101ae770.novalocal-k8s-csi--node--driver--7r577-eth0 csi-node-driver- calico-system cd6b6d64-badf-4e22-9ca4-6086c67f1ef2 717 0 2024-09-04 18:08:11 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78cd84fb8c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ci-4054-1-0-c-4d101ae770.novalocal csi-node-driver-7r577 eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali8ac2c5576f2 [] []}} ContainerID="a4b5ce06179967b9d76649d07088b9162f135faa9b04dfcf5b6112c224a6b54a" Namespace="calico-system" Pod="csi-node-driver-7r577" WorkloadEndpoint="ci--4054--1--0--c--4d101ae770.novalocal-k8s-csi--node--driver--7r577-" Sep 4 18:08:42.077644 containerd[1450]: 2024-09-04 18:08:41.751 [INFO][4079] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a4b5ce06179967b9d76649d07088b9162f135faa9b04dfcf5b6112c224a6b54a" Namespace="calico-system" Pod="csi-node-driver-7r577" WorkloadEndpoint="ci--4054--1--0--c--4d101ae770.novalocal-k8s-csi--node--driver--7r577-eth0" Sep 4 18:08:42.077644 containerd[1450]: 2024-09-04 18:08:41.878 [INFO][4115] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a4b5ce06179967b9d76649d07088b9162f135faa9b04dfcf5b6112c224a6b54a" HandleID="k8s-pod-network.a4b5ce06179967b9d76649d07088b9162f135faa9b04dfcf5b6112c224a6b54a" Workload="ci--4054--1--0--c--4d101ae770.novalocal-k8s-csi--node--driver--7r577-eth0" Sep 4 18:08:42.077644 containerd[1450]: 2024-09-04 18:08:41.919 [INFO][4115] ipam_plugin.go 270: Auto assigning IP ContainerID="a4b5ce06179967b9d76649d07088b9162f135faa9b04dfcf5b6112c224a6b54a" HandleID="k8s-pod-network.a4b5ce06179967b9d76649d07088b9162f135faa9b04dfcf5b6112c224a6b54a" Workload="ci--4054--1--0--c--4d101ae770.novalocal-k8s-csi--node--driver--7r577-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00035e450), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4054-1-0-c-4d101ae770.novalocal", "pod":"csi-node-driver-7r577", "timestamp":"2024-09-04 18:08:41.878674838 +0000 UTC"}, Hostname:"ci-4054-1-0-c-4d101ae770.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 18:08:42.077644 containerd[1450]: 2024-09-04 18:08:41.920 [INFO][4115] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 18:08:42.077644 containerd[1450]: 2024-09-04 18:08:41.920 [INFO][4115] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 18:08:42.077644 containerd[1450]: 2024-09-04 18:08:41.920 [INFO][4115] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4054-1-0-c-4d101ae770.novalocal' Sep 4 18:08:42.077644 containerd[1450]: 2024-09-04 18:08:41.923 [INFO][4115] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a4b5ce06179967b9d76649d07088b9162f135faa9b04dfcf5b6112c224a6b54a" host="ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:08:42.077644 containerd[1450]: 2024-09-04 18:08:41.960 [INFO][4115] ipam.go 372: Looking up existing affinities for host host="ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:08:42.077644 containerd[1450]: 2024-09-04 18:08:41.974 [INFO][4115] ipam.go 489: Trying affinity for 192.168.60.128/26 host="ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:08:42.077644 containerd[1450]: 2024-09-04 18:08:41.979 [INFO][4115] ipam.go 155: Attempting to load block cidr=192.168.60.128/26 host="ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:08:42.077644 containerd[1450]: 2024-09-04 18:08:41.986 [INFO][4115] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.60.128/26 host="ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:08:42.077644 containerd[1450]: 2024-09-04 18:08:41.986 
[INFO][4115] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.60.128/26 handle="k8s-pod-network.a4b5ce06179967b9d76649d07088b9162f135faa9b04dfcf5b6112c224a6b54a" host="ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:08:42.077644 containerd[1450]: 2024-09-04 18:08:41.991 [INFO][4115] ipam.go 1685: Creating new handle: k8s-pod-network.a4b5ce06179967b9d76649d07088b9162f135faa9b04dfcf5b6112c224a6b54a Sep 4 18:08:42.077644 containerd[1450]: 2024-09-04 18:08:41.997 [INFO][4115] ipam.go 1203: Writing block in order to claim IPs block=192.168.60.128/26 handle="k8s-pod-network.a4b5ce06179967b9d76649d07088b9162f135faa9b04dfcf5b6112c224a6b54a" host="ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:08:42.077644 containerd[1450]: 2024-09-04 18:08:42.008 [INFO][4115] ipam.go 1216: Successfully claimed IPs: [192.168.60.129/26] block=192.168.60.128/26 handle="k8s-pod-network.a4b5ce06179967b9d76649d07088b9162f135faa9b04dfcf5b6112c224a6b54a" host="ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:08:42.077644 containerd[1450]: 2024-09-04 18:08:42.008 [INFO][4115] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.60.129/26] handle="k8s-pod-network.a4b5ce06179967b9d76649d07088b9162f135faa9b04dfcf5b6112c224a6b54a" host="ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:08:42.077644 containerd[1450]: 2024-09-04 18:08:42.008 [INFO][4115] ipam_plugin.go 379: Released host-wide IPAM lock. 
Sep 4 18:08:42.077644 containerd[1450]: 2024-09-04 18:08:42.008 [INFO][4115] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.60.129/26] IPv6=[] ContainerID="a4b5ce06179967b9d76649d07088b9162f135faa9b04dfcf5b6112c224a6b54a" HandleID="k8s-pod-network.a4b5ce06179967b9d76649d07088b9162f135faa9b04dfcf5b6112c224a6b54a" Workload="ci--4054--1--0--c--4d101ae770.novalocal-k8s-csi--node--driver--7r577-eth0" Sep 4 18:08:42.079951 containerd[1450]: 2024-09-04 18:08:42.013 [INFO][4079] k8s.go 386: Populated endpoint ContainerID="a4b5ce06179967b9d76649d07088b9162f135faa9b04dfcf5b6112c224a6b54a" Namespace="calico-system" Pod="csi-node-driver-7r577" WorkloadEndpoint="ci--4054--1--0--c--4d101ae770.novalocal-k8s-csi--node--driver--7r577-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054--1--0--c--4d101ae770.novalocal-k8s-csi--node--driver--7r577-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cd6b6d64-badf-4e22-9ca4-6086c67f1ef2", ResourceVersion:"717", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 18, 8, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054-1-0-c-4d101ae770.novalocal", ContainerID:"", Pod:"csi-node-driver-7r577", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.60.129/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali8ac2c5576f2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 18:08:42.079951 containerd[1450]: 2024-09-04 18:08:42.014 [INFO][4079] k8s.go 387: Calico CNI using IPs: [192.168.60.129/32] ContainerID="a4b5ce06179967b9d76649d07088b9162f135faa9b04dfcf5b6112c224a6b54a" Namespace="calico-system" Pod="csi-node-driver-7r577" WorkloadEndpoint="ci--4054--1--0--c--4d101ae770.novalocal-k8s-csi--node--driver--7r577-eth0" Sep 4 18:08:42.079951 containerd[1450]: 2024-09-04 18:08:42.017 [INFO][4079] dataplane_linux.go 68: Setting the host side veth name to cali8ac2c5576f2 ContainerID="a4b5ce06179967b9d76649d07088b9162f135faa9b04dfcf5b6112c224a6b54a" Namespace="calico-system" Pod="csi-node-driver-7r577" WorkloadEndpoint="ci--4054--1--0--c--4d101ae770.novalocal-k8s-csi--node--driver--7r577-eth0" Sep 4 18:08:42.079951 containerd[1450]: 2024-09-04 18:08:42.022 [INFO][4079] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="a4b5ce06179967b9d76649d07088b9162f135faa9b04dfcf5b6112c224a6b54a" Namespace="calico-system" Pod="csi-node-driver-7r577" WorkloadEndpoint="ci--4054--1--0--c--4d101ae770.novalocal-k8s-csi--node--driver--7r577-eth0" Sep 4 18:08:42.079951 containerd[1450]: 2024-09-04 18:08:42.025 [INFO][4079] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a4b5ce06179967b9d76649d07088b9162f135faa9b04dfcf5b6112c224a6b54a" Namespace="calico-system" Pod="csi-node-driver-7r577" WorkloadEndpoint="ci--4054--1--0--c--4d101ae770.novalocal-k8s-csi--node--driver--7r577-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054--1--0--c--4d101ae770.novalocal-k8s-csi--node--driver--7r577-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", 
SelfLink:"", UID:"cd6b6d64-badf-4e22-9ca4-6086c67f1ef2", ResourceVersion:"717", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 18, 8, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054-1-0-c-4d101ae770.novalocal", ContainerID:"a4b5ce06179967b9d76649d07088b9162f135faa9b04dfcf5b6112c224a6b54a", Pod:"csi-node-driver-7r577", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.60.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali8ac2c5576f2", MAC:"92:00:a2:ca:fd:bc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 18:08:42.079951 containerd[1450]: 2024-09-04 18:08:42.047 [INFO][4079] k8s.go 500: Wrote updated endpoint to datastore ContainerID="a4b5ce06179967b9d76649d07088b9162f135faa9b04dfcf5b6112c224a6b54a" Namespace="calico-system" Pod="csi-node-driver-7r577" WorkloadEndpoint="ci--4054--1--0--c--4d101ae770.novalocal-k8s-csi--node--driver--7r577-eth0" Sep 4 18:08:42.229333 systemd-networkd[1346]: cali1bdd260a71d: Link UP Sep 4 18:08:42.230030 systemd-networkd[1346]: cali1bdd260a71d: Gained carrier Sep 4 18:08:42.238186 containerd[1450]: time="2024-09-04T18:08:42.236714206Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 18:08:42.238186 containerd[1450]: time="2024-09-04T18:08:42.236840654Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 18:08:42.238186 containerd[1450]: time="2024-09-04T18:08:42.238040977Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 18:08:42.243737 containerd[1450]: time="2024-09-04T18:08:42.241579279Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 18:08:42.287466 containerd[1450]: 2024-09-04 18:08:41.923 [INFO][4125] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4054--1--0--c--4d101ae770.novalocal-k8s-coredns--76f75df574--jm87r-eth0 coredns-76f75df574- kube-system 3b130609-1dd2-44b3-90db-48b3db3330ce 724 0 2024-09-04 18:08:04 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4054-1-0-c-4d101ae770.novalocal coredns-76f75df574-jm87r eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1bdd260a71d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="12292800ed16318570cfb4a1ed1329121b3a10d654363c38bd2a2bd516b89efd" Namespace="kube-system" Pod="coredns-76f75df574-jm87r" WorkloadEndpoint="ci--4054--1--0--c--4d101ae770.novalocal-k8s-coredns--76f75df574--jm87r-" Sep 4 18:08:42.287466 containerd[1450]: 2024-09-04 18:08:41.925 [INFO][4125] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="12292800ed16318570cfb4a1ed1329121b3a10d654363c38bd2a2bd516b89efd" Namespace="kube-system" Pod="coredns-76f75df574-jm87r" WorkloadEndpoint="ci--4054--1--0--c--4d101ae770.novalocal-k8s-coredns--76f75df574--jm87r-eth0" 
Sep 4 18:08:42.287466 containerd[1450]: 2024-09-04 18:08:42.051 [INFO][4138] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="12292800ed16318570cfb4a1ed1329121b3a10d654363c38bd2a2bd516b89efd" HandleID="k8s-pod-network.12292800ed16318570cfb4a1ed1329121b3a10d654363c38bd2a2bd516b89efd" Workload="ci--4054--1--0--c--4d101ae770.novalocal-k8s-coredns--76f75df574--jm87r-eth0" Sep 4 18:08:42.287466 containerd[1450]: 2024-09-04 18:08:42.083 [INFO][4138] ipam_plugin.go 270: Auto assigning IP ContainerID="12292800ed16318570cfb4a1ed1329121b3a10d654363c38bd2a2bd516b89efd" HandleID="k8s-pod-network.12292800ed16318570cfb4a1ed1329121b3a10d654363c38bd2a2bd516b89efd" Workload="ci--4054--1--0--c--4d101ae770.novalocal-k8s-coredns--76f75df574--jm87r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318660), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4054-1-0-c-4d101ae770.novalocal", "pod":"coredns-76f75df574-jm87r", "timestamp":"2024-09-04 18:08:42.051478415 +0000 UTC"}, Hostname:"ci-4054-1-0-c-4d101ae770.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 18:08:42.287466 containerd[1450]: 2024-09-04 18:08:42.086 [INFO][4138] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 18:08:42.287466 containerd[1450]: 2024-09-04 18:08:42.086 [INFO][4138] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 18:08:42.287466 containerd[1450]: 2024-09-04 18:08:42.086 [INFO][4138] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4054-1-0-c-4d101ae770.novalocal' Sep 4 18:08:42.287466 containerd[1450]: 2024-09-04 18:08:42.094 [INFO][4138] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.12292800ed16318570cfb4a1ed1329121b3a10d654363c38bd2a2bd516b89efd" host="ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:08:42.287466 containerd[1450]: 2024-09-04 18:08:42.108 [INFO][4138] ipam.go 372: Looking up existing affinities for host host="ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:08:42.287466 containerd[1450]: 2024-09-04 18:08:42.131 [INFO][4138] ipam.go 489: Trying affinity for 192.168.60.128/26 host="ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:08:42.287466 containerd[1450]: 2024-09-04 18:08:42.138 [INFO][4138] ipam.go 155: Attempting to load block cidr=192.168.60.128/26 host="ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:08:42.287466 containerd[1450]: 2024-09-04 18:08:42.149 [INFO][4138] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.60.128/26 host="ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:08:42.287466 containerd[1450]: 2024-09-04 18:08:42.152 [INFO][4138] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.60.128/26 handle="k8s-pod-network.12292800ed16318570cfb4a1ed1329121b3a10d654363c38bd2a2bd516b89efd" host="ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:08:42.287466 containerd[1450]: 2024-09-04 18:08:42.158 [INFO][4138] ipam.go 1685: Creating new handle: k8s-pod-network.12292800ed16318570cfb4a1ed1329121b3a10d654363c38bd2a2bd516b89efd Sep 4 18:08:42.287466 containerd[1450]: 2024-09-04 18:08:42.186 [INFO][4138] ipam.go 1203: Writing block in order to claim IPs block=192.168.60.128/26 handle="k8s-pod-network.12292800ed16318570cfb4a1ed1329121b3a10d654363c38bd2a2bd516b89efd" host="ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:08:42.287466 containerd[1450]: 2024-09-04 18:08:42.213 [INFO][4138] 
ipam.go 1216: Successfully claimed IPs: [192.168.60.130/26] block=192.168.60.128/26 handle="k8s-pod-network.12292800ed16318570cfb4a1ed1329121b3a10d654363c38bd2a2bd516b89efd" host="ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:08:42.287466 containerd[1450]: 2024-09-04 18:08:42.213 [INFO][4138] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.60.130/26] handle="k8s-pod-network.12292800ed16318570cfb4a1ed1329121b3a10d654363c38bd2a2bd516b89efd" host="ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:08:42.287466 containerd[1450]: 2024-09-04 18:08:42.214 [INFO][4138] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 18:08:42.287466 containerd[1450]: 2024-09-04 18:08:42.214 [INFO][4138] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.60.130/26] IPv6=[] ContainerID="12292800ed16318570cfb4a1ed1329121b3a10d654363c38bd2a2bd516b89efd" HandleID="k8s-pod-network.12292800ed16318570cfb4a1ed1329121b3a10d654363c38bd2a2bd516b89efd" Workload="ci--4054--1--0--c--4d101ae770.novalocal-k8s-coredns--76f75df574--jm87r-eth0" Sep 4 18:08:42.289171 containerd[1450]: 2024-09-04 18:08:42.221 [INFO][4125] k8s.go 386: Populated endpoint ContainerID="12292800ed16318570cfb4a1ed1329121b3a10d654363c38bd2a2bd516b89efd" Namespace="kube-system" Pod="coredns-76f75df574-jm87r" WorkloadEndpoint="ci--4054--1--0--c--4d101ae770.novalocal-k8s-coredns--76f75df574--jm87r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054--1--0--c--4d101ae770.novalocal-k8s-coredns--76f75df574--jm87r-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"3b130609-1dd2-44b3-90db-48b3db3330ce", ResourceVersion:"724", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 18, 8, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", 
"projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054-1-0-c-4d101ae770.novalocal", ContainerID:"", Pod:"coredns-76f75df574-jm87r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.60.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1bdd260a71d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 18:08:42.289171 containerd[1450]: 2024-09-04 18:08:42.222 [INFO][4125] k8s.go 387: Calico CNI using IPs: [192.168.60.130/32] ContainerID="12292800ed16318570cfb4a1ed1329121b3a10d654363c38bd2a2bd516b89efd" Namespace="kube-system" Pod="coredns-76f75df574-jm87r" WorkloadEndpoint="ci--4054--1--0--c--4d101ae770.novalocal-k8s-coredns--76f75df574--jm87r-eth0" Sep 4 18:08:42.289171 containerd[1450]: 2024-09-04 18:08:42.222 [INFO][4125] dataplane_linux.go 68: Setting the host side veth name to cali1bdd260a71d ContainerID="12292800ed16318570cfb4a1ed1329121b3a10d654363c38bd2a2bd516b89efd" Namespace="kube-system" Pod="coredns-76f75df574-jm87r" WorkloadEndpoint="ci--4054--1--0--c--4d101ae770.novalocal-k8s-coredns--76f75df574--jm87r-eth0" Sep 4 18:08:42.289171 containerd[1450]: 2024-09-04 18:08:42.230 
[INFO][4125] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="12292800ed16318570cfb4a1ed1329121b3a10d654363c38bd2a2bd516b89efd" Namespace="kube-system" Pod="coredns-76f75df574-jm87r" WorkloadEndpoint="ci--4054--1--0--c--4d101ae770.novalocal-k8s-coredns--76f75df574--jm87r-eth0" Sep 4 18:08:42.289171 containerd[1450]: 2024-09-04 18:08:42.233 [INFO][4125] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="12292800ed16318570cfb4a1ed1329121b3a10d654363c38bd2a2bd516b89efd" Namespace="kube-system" Pod="coredns-76f75df574-jm87r" WorkloadEndpoint="ci--4054--1--0--c--4d101ae770.novalocal-k8s-coredns--76f75df574--jm87r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054--1--0--c--4d101ae770.novalocal-k8s-coredns--76f75df574--jm87r-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"3b130609-1dd2-44b3-90db-48b3db3330ce", ResourceVersion:"724", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 18, 8, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054-1-0-c-4d101ae770.novalocal", ContainerID:"12292800ed16318570cfb4a1ed1329121b3a10d654363c38bd2a2bd516b89efd", Pod:"coredns-76f75df574-jm87r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.60.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"cali1bdd260a71d", MAC:"4e:a0:f6:b8:7b:03", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 18:08:42.289171 containerd[1450]: 2024-09-04 18:08:42.267 [INFO][4125] k8s.go 500: Wrote updated endpoint to datastore ContainerID="12292800ed16318570cfb4a1ed1329121b3a10d654363c38bd2a2bd516b89efd" Namespace="kube-system" Pod="coredns-76f75df574-jm87r" WorkloadEndpoint="ci--4054--1--0--c--4d101ae770.novalocal-k8s-coredns--76f75df574--jm87r-eth0" Sep 4 18:08:42.319947 systemd[1]: Started cri-containerd-a4b5ce06179967b9d76649d07088b9162f135faa9b04dfcf5b6112c224a6b54a.scope - libcontainer container a4b5ce06179967b9d76649d07088b9162f135faa9b04dfcf5b6112c224a6b54a. Sep 4 18:08:42.345229 systemd-networkd[1346]: calib52a2d84a54: Link UP Sep 4 18:08:42.346814 systemd-networkd[1346]: calib52a2d84a54: Gained carrier Sep 4 18:08:42.396567 containerd[1450]: time="2024-09-04T18:08:42.394193351Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 18:08:42.396567 containerd[1450]: time="2024-09-04T18:08:42.394259134Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 18:08:42.396567 containerd[1450]: time="2024-09-04T18:08:42.394280324Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 18:08:42.396567 containerd[1450]: time="2024-09-04T18:08:42.394401130Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 18:08:42.406332 containerd[1450]: 2024-09-04 18:08:42.096 [INFO][4142] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4054--1--0--c--4d101ae770.novalocal-k8s-calico--kube--controllers--7785d6cbd7--wkdsv-eth0 calico-kube-controllers-7785d6cbd7- calico-system 34cca147-c799-40d7-a541-5d0353aa3f8d 726 0 2024-09-04 18:08:11 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7785d6cbd7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4054-1-0-c-4d101ae770.novalocal calico-kube-controllers-7785d6cbd7-wkdsv eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calib52a2d84a54 [] []}} ContainerID="4d779a51d8a8325ea4df9a3bf898b1440fdbd16dadeb869aa527186fc1d49477" Namespace="calico-system" Pod="calico-kube-controllers-7785d6cbd7-wkdsv" WorkloadEndpoint="ci--4054--1--0--c--4d101ae770.novalocal-k8s-calico--kube--controllers--7785d6cbd7--wkdsv-" Sep 4 18:08:42.406332 containerd[1450]: 2024-09-04 18:08:42.097 [INFO][4142] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4d779a51d8a8325ea4df9a3bf898b1440fdbd16dadeb869aa527186fc1d49477" Namespace="calico-system" Pod="calico-kube-controllers-7785d6cbd7-wkdsv" WorkloadEndpoint="ci--4054--1--0--c--4d101ae770.novalocal-k8s-calico--kube--controllers--7785d6cbd7--wkdsv-eth0" Sep 4 18:08:42.406332 containerd[1450]: 2024-09-04 18:08:42.197 [INFO][4177] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4d779a51d8a8325ea4df9a3bf898b1440fdbd16dadeb869aa527186fc1d49477" 
HandleID="k8s-pod-network.4d779a51d8a8325ea4df9a3bf898b1440fdbd16dadeb869aa527186fc1d49477" Workload="ci--4054--1--0--c--4d101ae770.novalocal-k8s-calico--kube--controllers--7785d6cbd7--wkdsv-eth0" Sep 4 18:08:42.406332 containerd[1450]: 2024-09-04 18:08:42.225 [INFO][4177] ipam_plugin.go 270: Auto assigning IP ContainerID="4d779a51d8a8325ea4df9a3bf898b1440fdbd16dadeb869aa527186fc1d49477" HandleID="k8s-pod-network.4d779a51d8a8325ea4df9a3bf898b1440fdbd16dadeb869aa527186fc1d49477" Workload="ci--4054--1--0--c--4d101ae770.novalocal-k8s-calico--kube--controllers--7785d6cbd7--wkdsv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00061b730), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4054-1-0-c-4d101ae770.novalocal", "pod":"calico-kube-controllers-7785d6cbd7-wkdsv", "timestamp":"2024-09-04 18:08:42.197160482 +0000 UTC"}, Hostname:"ci-4054-1-0-c-4d101ae770.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 18:08:42.406332 containerd[1450]: 2024-09-04 18:08:42.225 [INFO][4177] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 18:08:42.406332 containerd[1450]: 2024-09-04 18:08:42.225 [INFO][4177] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 18:08:42.406332 containerd[1450]: 2024-09-04 18:08:42.225 [INFO][4177] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4054-1-0-c-4d101ae770.novalocal' Sep 4 18:08:42.406332 containerd[1450]: 2024-09-04 18:08:42.240 [INFO][4177] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4d779a51d8a8325ea4df9a3bf898b1440fdbd16dadeb869aa527186fc1d49477" host="ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:08:42.406332 containerd[1450]: 2024-09-04 18:08:42.254 [INFO][4177] ipam.go 372: Looking up existing affinities for host host="ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:08:42.406332 containerd[1450]: 2024-09-04 18:08:42.277 [INFO][4177] ipam.go 489: Trying affinity for 192.168.60.128/26 host="ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:08:42.406332 containerd[1450]: 2024-09-04 18:08:42.289 [INFO][4177] ipam.go 155: Attempting to load block cidr=192.168.60.128/26 host="ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:08:42.406332 containerd[1450]: 2024-09-04 18:08:42.299 [INFO][4177] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.60.128/26 host="ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:08:42.406332 containerd[1450]: 2024-09-04 18:08:42.300 [INFO][4177] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.60.128/26 handle="k8s-pod-network.4d779a51d8a8325ea4df9a3bf898b1440fdbd16dadeb869aa527186fc1d49477" host="ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:08:42.406332 containerd[1450]: 2024-09-04 18:08:42.306 [INFO][4177] ipam.go 1685: Creating new handle: k8s-pod-network.4d779a51d8a8325ea4df9a3bf898b1440fdbd16dadeb869aa527186fc1d49477 Sep 4 18:08:42.406332 containerd[1450]: 2024-09-04 18:08:42.318 [INFO][4177] ipam.go 1203: Writing block in order to claim IPs block=192.168.60.128/26 handle="k8s-pod-network.4d779a51d8a8325ea4df9a3bf898b1440fdbd16dadeb869aa527186fc1d49477" host="ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:08:42.406332 containerd[1450]: 2024-09-04 18:08:42.333 [INFO][4177] 
ipam.go 1216: Successfully claimed IPs: [192.168.60.131/26] block=192.168.60.128/26 handle="k8s-pod-network.4d779a51d8a8325ea4df9a3bf898b1440fdbd16dadeb869aa527186fc1d49477" host="ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:08:42.406332 containerd[1450]: 2024-09-04 18:08:42.333 [INFO][4177] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.60.131/26] handle="k8s-pod-network.4d779a51d8a8325ea4df9a3bf898b1440fdbd16dadeb869aa527186fc1d49477" host="ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:08:42.406332 containerd[1450]: 2024-09-04 18:08:42.333 [INFO][4177] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 18:08:42.406332 containerd[1450]: 2024-09-04 18:08:42.333 [INFO][4177] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.60.131/26] IPv6=[] ContainerID="4d779a51d8a8325ea4df9a3bf898b1440fdbd16dadeb869aa527186fc1d49477" HandleID="k8s-pod-network.4d779a51d8a8325ea4df9a3bf898b1440fdbd16dadeb869aa527186fc1d49477" Workload="ci--4054--1--0--c--4d101ae770.novalocal-k8s-calico--kube--controllers--7785d6cbd7--wkdsv-eth0" Sep 4 18:08:42.407525 containerd[1450]: 2024-09-04 18:08:42.340 [INFO][4142] k8s.go 386: Populated endpoint ContainerID="4d779a51d8a8325ea4df9a3bf898b1440fdbd16dadeb869aa527186fc1d49477" Namespace="calico-system" Pod="calico-kube-controllers-7785d6cbd7-wkdsv" WorkloadEndpoint="ci--4054--1--0--c--4d101ae770.novalocal-k8s-calico--kube--controllers--7785d6cbd7--wkdsv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054--1--0--c--4d101ae770.novalocal-k8s-calico--kube--controllers--7785d6cbd7--wkdsv-eth0", GenerateName:"calico-kube-controllers-7785d6cbd7-", Namespace:"calico-system", SelfLink:"", UID:"34cca147-c799-40d7-a541-5d0353aa3f8d", ResourceVersion:"726", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 18, 8, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7785d6cbd7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054-1-0-c-4d101ae770.novalocal", ContainerID:"", Pod:"calico-kube-controllers-7785d6cbd7-wkdsv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.60.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib52a2d84a54", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 18:08:42.407525 containerd[1450]: 2024-09-04 18:08:42.340 [INFO][4142] k8s.go 387: Calico CNI using IPs: [192.168.60.131/32] ContainerID="4d779a51d8a8325ea4df9a3bf898b1440fdbd16dadeb869aa527186fc1d49477" Namespace="calico-system" Pod="calico-kube-controllers-7785d6cbd7-wkdsv" WorkloadEndpoint="ci--4054--1--0--c--4d101ae770.novalocal-k8s-calico--kube--controllers--7785d6cbd7--wkdsv-eth0" Sep 4 18:08:42.407525 containerd[1450]: 2024-09-04 18:08:42.340 [INFO][4142] dataplane_linux.go 68: Setting the host side veth name to calib52a2d84a54 ContainerID="4d779a51d8a8325ea4df9a3bf898b1440fdbd16dadeb869aa527186fc1d49477" Namespace="calico-system" Pod="calico-kube-controllers-7785d6cbd7-wkdsv" WorkloadEndpoint="ci--4054--1--0--c--4d101ae770.novalocal-k8s-calico--kube--controllers--7785d6cbd7--wkdsv-eth0" Sep 4 18:08:42.407525 containerd[1450]: 2024-09-04 18:08:42.348 [INFO][4142] dataplane_linux.go 479: Disabling IPv4 forwarding 
ContainerID="4d779a51d8a8325ea4df9a3bf898b1440fdbd16dadeb869aa527186fc1d49477" Namespace="calico-system" Pod="calico-kube-controllers-7785d6cbd7-wkdsv" WorkloadEndpoint="ci--4054--1--0--c--4d101ae770.novalocal-k8s-calico--kube--controllers--7785d6cbd7--wkdsv-eth0" Sep 4 18:08:42.407525 containerd[1450]: 2024-09-04 18:08:42.384 [INFO][4142] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4d779a51d8a8325ea4df9a3bf898b1440fdbd16dadeb869aa527186fc1d49477" Namespace="calico-system" Pod="calico-kube-controllers-7785d6cbd7-wkdsv" WorkloadEndpoint="ci--4054--1--0--c--4d101ae770.novalocal-k8s-calico--kube--controllers--7785d6cbd7--wkdsv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054--1--0--c--4d101ae770.novalocal-k8s-calico--kube--controllers--7785d6cbd7--wkdsv-eth0", GenerateName:"calico-kube-controllers-7785d6cbd7-", Namespace:"calico-system", SelfLink:"", UID:"34cca147-c799-40d7-a541-5d0353aa3f8d", ResourceVersion:"726", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 18, 8, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7785d6cbd7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054-1-0-c-4d101ae770.novalocal", ContainerID:"4d779a51d8a8325ea4df9a3bf898b1440fdbd16dadeb869aa527186fc1d49477", Pod:"calico-kube-controllers-7785d6cbd7-wkdsv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", 
IPNetworks:[]string{"192.168.60.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib52a2d84a54", MAC:"3a:7e:38:1f:6c:df", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 18:08:42.407525 containerd[1450]: 2024-09-04 18:08:42.401 [INFO][4142] k8s.go 500: Wrote updated endpoint to datastore ContainerID="4d779a51d8a8325ea4df9a3bf898b1440fdbd16dadeb869aa527186fc1d49477" Namespace="calico-system" Pod="calico-kube-controllers-7785d6cbd7-wkdsv" WorkloadEndpoint="ci--4054--1--0--c--4d101ae770.novalocal-k8s-calico--kube--controllers--7785d6cbd7--wkdsv-eth0" Sep 4 18:08:42.448291 systemd[1]: run-netns-cni\x2d493d5e5f\x2dde4e\x2d7097\x2d6905\x2ddcb5f42e4b9f.mount: Deactivated successfully. Sep 4 18:08:42.448398 systemd[1]: run-netns-cni\x2dee77a0a3\x2d6f84\x2d9f98\x2d6c3b\x2dd65e89e2a144.mount: Deactivated successfully. Sep 4 18:08:42.491852 systemd[1]: Started cri-containerd-12292800ed16318570cfb4a1ed1329121b3a10d654363c38bd2a2bd516b89efd.scope - libcontainer container 12292800ed16318570cfb4a1ed1329121b3a10d654363c38bd2a2bd516b89efd. 
Sep 4 18:08:42.581120 containerd[1450]: time="2024-09-04T18:08:42.580249682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7r577,Uid:cd6b6d64-badf-4e22-9ca4-6086c67f1ef2,Namespace:calico-system,Attempt:1,} returns sandbox id \"a4b5ce06179967b9d76649d07088b9162f135faa9b04dfcf5b6112c224a6b54a\"" Sep 4 18:08:42.598268 systemd-networkd[1346]: cali7928cb9bce0: Link UP Sep 4 18:08:42.598577 systemd-networkd[1346]: cali7928cb9bce0: Gained carrier Sep 4 18:08:42.638744 containerd[1450]: 2024-09-04 18:08:42.194 [INFO][4154] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4054--1--0--c--4d101ae770.novalocal-k8s-coredns--76f75df574--pm852-eth0 coredns-76f75df574- kube-system a7bc586f-0539-46c3-a4a7-5297cb2ca3b1 725 0 2024-09-04 18:08:04 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4054-1-0-c-4d101ae770.novalocal coredns-76f75df574-pm852 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7928cb9bce0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="717766c115383bae73ca9762e95d63f0a3b3e890d8af3e467fd3cf2ab31525b0" Namespace="kube-system" Pod="coredns-76f75df574-pm852" WorkloadEndpoint="ci--4054--1--0--c--4d101ae770.novalocal-k8s-coredns--76f75df574--pm852-" Sep 4 18:08:42.638744 containerd[1450]: 2024-09-04 18:08:42.194 [INFO][4154] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="717766c115383bae73ca9762e95d63f0a3b3e890d8af3e467fd3cf2ab31525b0" Namespace="kube-system" Pod="coredns-76f75df574-pm852" WorkloadEndpoint="ci--4054--1--0--c--4d101ae770.novalocal-k8s-coredns--76f75df574--pm852-eth0" Sep 4 18:08:42.638744 containerd[1450]: 2024-09-04 18:08:42.392 [INFO][4198] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="717766c115383bae73ca9762e95d63f0a3b3e890d8af3e467fd3cf2ab31525b0" HandleID="k8s-pod-network.717766c115383bae73ca9762e95d63f0a3b3e890d8af3e467fd3cf2ab31525b0" Workload="ci--4054--1--0--c--4d101ae770.novalocal-k8s-coredns--76f75df574--pm852-eth0" Sep 4 18:08:42.638744 containerd[1450]: 2024-09-04 18:08:42.433 [INFO][4198] ipam_plugin.go 270: Auto assigning IP ContainerID="717766c115383bae73ca9762e95d63f0a3b3e890d8af3e467fd3cf2ab31525b0" HandleID="k8s-pod-network.717766c115383bae73ca9762e95d63f0a3b3e890d8af3e467fd3cf2ab31525b0" Workload="ci--4054--1--0--c--4d101ae770.novalocal-k8s-coredns--76f75df574--pm852-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002edc50), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4054-1-0-c-4d101ae770.novalocal", "pod":"coredns-76f75df574-pm852", "timestamp":"2024-09-04 18:08:42.392576886 +0000 UTC"}, Hostname:"ci-4054-1-0-c-4d101ae770.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 18:08:42.638744 containerd[1450]: 2024-09-04 18:08:42.436 [INFO][4198] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 18:08:42.638744 containerd[1450]: 2024-09-04 18:08:42.440 [INFO][4198] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 18:08:42.638744 containerd[1450]: 2024-09-04 18:08:42.441 [INFO][4198] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4054-1-0-c-4d101ae770.novalocal' Sep 4 18:08:42.638744 containerd[1450]: 2024-09-04 18:08:42.459 [INFO][4198] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.717766c115383bae73ca9762e95d63f0a3b3e890d8af3e467fd3cf2ab31525b0" host="ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:08:42.638744 containerd[1450]: 2024-09-04 18:08:42.498 [INFO][4198] ipam.go 372: Looking up existing affinities for host host="ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:08:42.638744 containerd[1450]: 2024-09-04 18:08:42.513 [INFO][4198] ipam.go 489: Trying affinity for 192.168.60.128/26 host="ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:08:42.638744 containerd[1450]: 2024-09-04 18:08:42.521 [INFO][4198] ipam.go 155: Attempting to load block cidr=192.168.60.128/26 host="ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:08:42.638744 containerd[1450]: 2024-09-04 18:08:42.524 [INFO][4198] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.60.128/26 host="ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:08:42.638744 containerd[1450]: 2024-09-04 18:08:42.524 [INFO][4198] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.60.128/26 handle="k8s-pod-network.717766c115383bae73ca9762e95d63f0a3b3e890d8af3e467fd3cf2ab31525b0" host="ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:08:42.638744 containerd[1450]: 2024-09-04 18:08:42.527 [INFO][4198] ipam.go 1685: Creating new handle: k8s-pod-network.717766c115383bae73ca9762e95d63f0a3b3e890d8af3e467fd3cf2ab31525b0 Sep 4 18:08:42.638744 containerd[1450]: 2024-09-04 18:08:42.536 [INFO][4198] ipam.go 1203: Writing block in order to claim IPs block=192.168.60.128/26 handle="k8s-pod-network.717766c115383bae73ca9762e95d63f0a3b3e890d8af3e467fd3cf2ab31525b0" host="ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:08:42.638744 containerd[1450]: 2024-09-04 18:08:42.548 [INFO][4198] 
ipam.go 1216: Successfully claimed IPs: [192.168.60.132/26] block=192.168.60.128/26 handle="k8s-pod-network.717766c115383bae73ca9762e95d63f0a3b3e890d8af3e467fd3cf2ab31525b0" host="ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:08:42.638744 containerd[1450]: 2024-09-04 18:08:42.548 [INFO][4198] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.60.132/26] handle="k8s-pod-network.717766c115383bae73ca9762e95d63f0a3b3e890d8af3e467fd3cf2ab31525b0" host="ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:08:42.638744 containerd[1450]: 2024-09-04 18:08:42.548 [INFO][4198] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 18:08:42.638744 containerd[1450]: 2024-09-04 18:08:42.549 [INFO][4198] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.60.132/26] IPv6=[] ContainerID="717766c115383bae73ca9762e95d63f0a3b3e890d8af3e467fd3cf2ab31525b0" HandleID="k8s-pod-network.717766c115383bae73ca9762e95d63f0a3b3e890d8af3e467fd3cf2ab31525b0" Workload="ci--4054--1--0--c--4d101ae770.novalocal-k8s-coredns--76f75df574--pm852-eth0" Sep 4 18:08:42.640362 containerd[1450]: 2024-09-04 18:08:42.586 [INFO][4154] k8s.go 386: Populated endpoint ContainerID="717766c115383bae73ca9762e95d63f0a3b3e890d8af3e467fd3cf2ab31525b0" Namespace="kube-system" Pod="coredns-76f75df574-pm852" WorkloadEndpoint="ci--4054--1--0--c--4d101ae770.novalocal-k8s-coredns--76f75df574--pm852-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054--1--0--c--4d101ae770.novalocal-k8s-coredns--76f75df574--pm852-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"a7bc586f-0539-46c3-a4a7-5297cb2ca3b1", ResourceVersion:"725", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 18, 8, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", 
"projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054-1-0-c-4d101ae770.novalocal", ContainerID:"", Pod:"coredns-76f75df574-pm852", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.60.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7928cb9bce0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 18:08:42.640362 containerd[1450]: 2024-09-04 18:08:42.586 [INFO][4154] k8s.go 387: Calico CNI using IPs: [192.168.60.132/32] ContainerID="717766c115383bae73ca9762e95d63f0a3b3e890d8af3e467fd3cf2ab31525b0" Namespace="kube-system" Pod="coredns-76f75df574-pm852" WorkloadEndpoint="ci--4054--1--0--c--4d101ae770.novalocal-k8s-coredns--76f75df574--pm852-eth0" Sep 4 18:08:42.640362 containerd[1450]: 2024-09-04 18:08:42.586 [INFO][4154] dataplane_linux.go 68: Setting the host side veth name to cali7928cb9bce0 ContainerID="717766c115383bae73ca9762e95d63f0a3b3e890d8af3e467fd3cf2ab31525b0" Namespace="kube-system" Pod="coredns-76f75df574-pm852" WorkloadEndpoint="ci--4054--1--0--c--4d101ae770.novalocal-k8s-coredns--76f75df574--pm852-eth0" Sep 4 18:08:42.640362 containerd[1450]: 2024-09-04 18:08:42.604 
[INFO][4154] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="717766c115383bae73ca9762e95d63f0a3b3e890d8af3e467fd3cf2ab31525b0" Namespace="kube-system" Pod="coredns-76f75df574-pm852" WorkloadEndpoint="ci--4054--1--0--c--4d101ae770.novalocal-k8s-coredns--76f75df574--pm852-eth0" Sep 4 18:08:42.640362 containerd[1450]: 2024-09-04 18:08:42.607 [INFO][4154] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="717766c115383bae73ca9762e95d63f0a3b3e890d8af3e467fd3cf2ab31525b0" Namespace="kube-system" Pod="coredns-76f75df574-pm852" WorkloadEndpoint="ci--4054--1--0--c--4d101ae770.novalocal-k8s-coredns--76f75df574--pm852-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054--1--0--c--4d101ae770.novalocal-k8s-coredns--76f75df574--pm852-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"a7bc586f-0539-46c3-a4a7-5297cb2ca3b1", ResourceVersion:"725", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 18, 8, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054-1-0-c-4d101ae770.novalocal", ContainerID:"717766c115383bae73ca9762e95d63f0a3b3e890d8af3e467fd3cf2ab31525b0", Pod:"coredns-76f75df574-pm852", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.60.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"cali7928cb9bce0", MAC:"52:56:6f:d2:2a:9d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 18:08:42.640362 containerd[1450]: 2024-09-04 18:08:42.625 [INFO][4154] k8s.go 500: Wrote updated endpoint to datastore ContainerID="717766c115383bae73ca9762e95d63f0a3b3e890d8af3e467fd3cf2ab31525b0" Namespace="kube-system" Pod="coredns-76f75df574-pm852" WorkloadEndpoint="ci--4054--1--0--c--4d101ae770.novalocal-k8s-coredns--76f75df574--pm852-eth0" Sep 4 18:08:42.655118 containerd[1450]: time="2024-09-04T18:08:42.653938026Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\"" Sep 4 18:08:42.708416 containerd[1450]: time="2024-09-04T18:08:42.707928923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-jm87r,Uid:3b130609-1dd2-44b3-90db-48b3db3330ce,Namespace:kube-system,Attempt:1,} returns sandbox id \"12292800ed16318570cfb4a1ed1329121b3a10d654363c38bd2a2bd516b89efd\"" Sep 4 18:08:42.710108 containerd[1450]: time="2024-09-04T18:08:42.709750803Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 18:08:42.710108 containerd[1450]: time="2024-09-04T18:08:42.709808271Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 18:08:42.710108 containerd[1450]: time="2024-09-04T18:08:42.709834470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 18:08:42.711375 containerd[1450]: time="2024-09-04T18:08:42.709945939Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 18:08:42.731536 containerd[1450]: time="2024-09-04T18:08:42.727729565Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 18:08:42.731536 containerd[1450]: time="2024-09-04T18:08:42.729830829Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 18:08:42.731536 containerd[1450]: time="2024-09-04T18:08:42.729850466Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 18:08:42.731536 containerd[1450]: time="2024-09-04T18:08:42.730053337Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 18:08:42.772240 containerd[1450]: time="2024-09-04T18:08:42.770817102Z" level=info msg="CreateContainer within sandbox \"12292800ed16318570cfb4a1ed1329121b3a10d654363c38bd2a2bd516b89efd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 18:08:42.818840 systemd[1]: Started cri-containerd-4d779a51d8a8325ea4df9a3bf898b1440fdbd16dadeb869aa527186fc1d49477.scope - libcontainer container 4d779a51d8a8325ea4df9a3bf898b1440fdbd16dadeb869aa527186fc1d49477. Sep 4 18:08:42.821927 systemd[1]: Started cri-containerd-717766c115383bae73ca9762e95d63f0a3b3e890d8af3e467fd3cf2ab31525b0.scope - libcontainer container 717766c115383bae73ca9762e95d63f0a3b3e890d8af3e467fd3cf2ab31525b0. 
Sep 4 18:08:42.852583 containerd[1450]: time="2024-09-04T18:08:42.852449840Z" level=info msg="CreateContainer within sandbox \"12292800ed16318570cfb4a1ed1329121b3a10d654363c38bd2a2bd516b89efd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ac651c95bfcef53678a8433217b24d05713c4f101efc47b732f4e721e67c4777\"" Sep 4 18:08:42.855796 containerd[1450]: time="2024-09-04T18:08:42.854553900Z" level=info msg="StartContainer for \"ac651c95bfcef53678a8433217b24d05713c4f101efc47b732f4e721e67c4777\"" Sep 4 18:08:42.903850 systemd[1]: Started cri-containerd-ac651c95bfcef53678a8433217b24d05713c4f101efc47b732f4e721e67c4777.scope - libcontainer container ac651c95bfcef53678a8433217b24d05713c4f101efc47b732f4e721e67c4777. Sep 4 18:08:42.930062 containerd[1450]: time="2024-09-04T18:08:42.929997159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-pm852,Uid:a7bc586f-0539-46c3-a4a7-5297cb2ca3b1,Namespace:kube-system,Attempt:1,} returns sandbox id \"717766c115383bae73ca9762e95d63f0a3b3e890d8af3e467fd3cf2ab31525b0\"" Sep 4 18:08:42.963884 systemd-networkd[1346]: vxlan.calico: Gained IPv6LL Sep 4 18:08:42.981380 containerd[1450]: time="2024-09-04T18:08:42.981322862Z" level=info msg="CreateContainer within sandbox \"717766c115383bae73ca9762e95d63f0a3b3e890d8af3e467fd3cf2ab31525b0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 18:08:43.012392 containerd[1450]: time="2024-09-04T18:08:43.012353446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7785d6cbd7-wkdsv,Uid:34cca147-c799-40d7-a541-5d0353aa3f8d,Namespace:calico-system,Attempt:1,} returns sandbox id \"4d779a51d8a8325ea4df9a3bf898b1440fdbd16dadeb869aa527186fc1d49477\"" Sep 4 18:08:43.039159 containerd[1450]: time="2024-09-04T18:08:43.039109653Z" level=info msg="StartContainer for \"ac651c95bfcef53678a8433217b24d05713c4f101efc47b732f4e721e67c4777\" returns successfully" Sep 4 18:08:43.083374 containerd[1450]: 
time="2024-09-04T18:08:43.083117907Z" level=info msg="CreateContainer within sandbox \"717766c115383bae73ca9762e95d63f0a3b3e890d8af3e467fd3cf2ab31525b0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c732ce699756286f512eab4f1fa31a457021468fff6feff5158d6046dee1daa2\"" Sep 4 18:08:43.084676 containerd[1450]: time="2024-09-04T18:08:43.084625387Z" level=info msg="StartContainer for \"c732ce699756286f512eab4f1fa31a457021468fff6feff5158d6046dee1daa2\"" Sep 4 18:08:43.124841 systemd[1]: Started cri-containerd-c732ce699756286f512eab4f1fa31a457021468fff6feff5158d6046dee1daa2.scope - libcontainer container c732ce699756286f512eab4f1fa31a457021468fff6feff5158d6046dee1daa2. Sep 4 18:08:43.177702 containerd[1450]: time="2024-09-04T18:08:43.177589574Z" level=info msg="StartContainer for \"c732ce699756286f512eab4f1fa31a457021468fff6feff5158d6046dee1daa2\" returns successfully" Sep 4 18:08:43.476064 systemd-networkd[1346]: calib52a2d84a54: Gained IPv6LL Sep 4 18:08:43.540217 systemd-networkd[1346]: cali1bdd260a71d: Gained IPv6LL Sep 4 18:08:43.796856 systemd-networkd[1346]: cali7928cb9bce0: Gained IPv6LL Sep 4 18:08:43.988853 systemd-networkd[1346]: cali8ac2c5576f2: Gained IPv6LL Sep 4 18:08:44.094599 kubelet[2664]: I0904 18:08:44.092994 2664 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-jm87r" podStartSLOduration=40.092935279 podStartE2EDuration="40.092935279s" podCreationTimestamp="2024-09-04 18:08:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 18:08:44.077428891 +0000 UTC m=+51.813213747" watchObservedRunningTime="2024-09-04 18:08:44.092935279 +0000 UTC m=+51.828720135" Sep 4 18:08:44.115337 kubelet[2664]: I0904 18:08:44.115262 2664 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-pm852" podStartSLOduration=40.115211957 
podStartE2EDuration="40.115211957s" podCreationTimestamp="2024-09-04 18:08:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 18:08:44.093887337 +0000 UTC m=+51.829672193" watchObservedRunningTime="2024-09-04 18:08:44.115211957 +0000 UTC m=+51.850996803" Sep 4 18:08:44.878160 containerd[1450]: time="2024-09-04T18:08:44.877760505Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 18:08:44.880264 containerd[1450]: time="2024-09-04T18:08:44.880052157Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.1: active requests=0, bytes read=7642081" Sep 4 18:08:44.882516 containerd[1450]: time="2024-09-04T18:08:44.882254091Z" level=info msg="ImageCreate event name:\"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 18:08:44.886699 containerd[1450]: time="2024-09-04T18:08:44.886611391Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 18:08:44.888560 containerd[1450]: time="2024-09-04T18:08:44.888409977Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.1\" with image id \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\", size \"9134482\" in 2.234396448s" Sep 4 18:08:44.888560 containerd[1450]: time="2024-09-04T18:08:44.888449010Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\" returns image reference \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\"" Sep 4 18:08:44.889816 
containerd[1450]: time="2024-09-04T18:08:44.889794997Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\"" Sep 4 18:08:44.895040 containerd[1450]: time="2024-09-04T18:08:44.894902385Z" level=info msg="CreateContainer within sandbox \"a4b5ce06179967b9d76649d07088b9162f135faa9b04dfcf5b6112c224a6b54a\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 4 18:08:44.958719 containerd[1450]: time="2024-09-04T18:08:44.958641997Z" level=info msg="CreateContainer within sandbox \"a4b5ce06179967b9d76649d07088b9162f135faa9b04dfcf5b6112c224a6b54a\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"9ed2f9ea1d9e7295e8bf3ceb25c889409e8a7c3c4e5fba3e217059fcfd5ed550\"" Sep 4 18:08:44.967081 containerd[1450]: time="2024-09-04T18:08:44.967032107Z" level=info msg="StartContainer for \"9ed2f9ea1d9e7295e8bf3ceb25c889409e8a7c3c4e5fba3e217059fcfd5ed550\"" Sep 4 18:08:45.032716 systemd[1]: Started cri-containerd-9ed2f9ea1d9e7295e8bf3ceb25c889409e8a7c3c4e5fba3e217059fcfd5ed550.scope - libcontainer container 9ed2f9ea1d9e7295e8bf3ceb25c889409e8a7c3c4e5fba3e217059fcfd5ed550. Sep 4 18:08:45.092686 containerd[1450]: time="2024-09-04T18:08:45.090611286Z" level=info msg="StartContainer for \"9ed2f9ea1d9e7295e8bf3ceb25c889409e8a7c3c4e5fba3e217059fcfd5ed550\" returns successfully" Sep 4 18:08:46.834567 kubelet[2664]: I0904 18:08:46.833924 2664 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 18:08:47.253311 systemd[1]: run-containerd-runc-k8s.io-24d6ca5424c854da5544134fbcd01408689967003e04afe542ff8a1617a188e6-runc.dLtBCQ.mount: Deactivated successfully. 
Sep 4 18:08:48.548774 containerd[1450]: time="2024-09-04T18:08:48.548725134Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 18:08:48.550573 containerd[1450]: time="2024-09-04T18:08:48.550243505Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.1: active requests=0, bytes read=33507125" Sep 4 18:08:48.553853 containerd[1450]: time="2024-09-04T18:08:48.553595506Z" level=info msg="ImageCreate event name:\"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 18:08:48.556699 containerd[1450]: time="2024-09-04T18:08:48.556640011Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 18:08:48.558379 containerd[1450]: time="2024-09-04T18:08:48.558343168Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" with image id \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\", size \"34999494\" in 3.668437293s" Sep 4 18:08:48.558439 containerd[1450]: time="2024-09-04T18:08:48.558380478Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" returns image reference \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\"" Sep 4 18:08:48.572783 containerd[1450]: time="2024-09-04T18:08:48.572740393Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\"" Sep 4 18:08:48.583916 containerd[1450]: time="2024-09-04T18:08:48.583685296Z" level=info msg="CreateContainer within sandbox 
\"4d779a51d8a8325ea4df9a3bf898b1440fdbd16dadeb869aa527186fc1d49477\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 4 18:08:48.636639 containerd[1450]: time="2024-09-04T18:08:48.636590952Z" level=info msg="CreateContainer within sandbox \"4d779a51d8a8325ea4df9a3bf898b1440fdbd16dadeb869aa527186fc1d49477\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"87e9a43dc46fd449571bcbc1d8362878df686b1434562aa1779b6bf99e396a4f\"" Sep 4 18:08:48.637435 containerd[1450]: time="2024-09-04T18:08:48.637240692Z" level=info msg="StartContainer for \"87e9a43dc46fd449571bcbc1d8362878df686b1434562aa1779b6bf99e396a4f\"" Sep 4 18:08:48.668809 systemd[1]: Started cri-containerd-87e9a43dc46fd449571bcbc1d8362878df686b1434562aa1779b6bf99e396a4f.scope - libcontainer container 87e9a43dc46fd449571bcbc1d8362878df686b1434562aa1779b6bf99e396a4f. Sep 4 18:08:48.713920 containerd[1450]: time="2024-09-04T18:08:48.713882402Z" level=info msg="StartContainer for \"87e9a43dc46fd449571bcbc1d8362878df686b1434562aa1779b6bf99e396a4f\" returns successfully" Sep 4 18:08:49.223571 kubelet[2664]: I0904 18:08:49.223525 2664 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7785d6cbd7-wkdsv" podStartSLOduration=32.68557719 podStartE2EDuration="38.223459474s" podCreationTimestamp="2024-09-04 18:08:11 +0000 UTC" firstStartedPulling="2024-09-04 18:08:43.020813877 +0000 UTC m=+50.756598723" lastFinishedPulling="2024-09-04 18:08:48.558696161 +0000 UTC m=+56.294481007" observedRunningTime="2024-09-04 18:08:49.108155888 +0000 UTC m=+56.843940734" watchObservedRunningTime="2024-09-04 18:08:49.223459474 +0000 UTC m=+56.959244320" Sep 4 18:08:50.740955 containerd[1450]: time="2024-09-04T18:08:50.740818286Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 18:08:50.743314 
containerd[1450]: time="2024-09-04T18:08:50.743147889Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1: active requests=0, bytes read=12907822" Sep 4 18:08:50.744374 containerd[1450]: time="2024-09-04T18:08:50.744343012Z" level=info msg="ImageCreate event name:\"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 18:08:50.746924 containerd[1450]: time="2024-09-04T18:08:50.746881557Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 18:08:50.748074 containerd[1450]: time="2024-09-04T18:08:50.748040332Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" with image id \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\", size \"14400175\" in 2.175121605s" Sep 4 18:08:50.748142 containerd[1450]: time="2024-09-04T18:08:50.748075237Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" returns image reference \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\"" Sep 4 18:08:50.750993 containerd[1450]: time="2024-09-04T18:08:50.750910860Z" level=info msg="CreateContainer within sandbox \"a4b5ce06179967b9d76649d07088b9162f135faa9b04dfcf5b6112c224a6b54a\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 4 18:08:50.786279 containerd[1450]: time="2024-09-04T18:08:50.786205316Z" level=info msg="CreateContainer within sandbox \"a4b5ce06179967b9d76649d07088b9162f135faa9b04dfcf5b6112c224a6b54a\" for 
&ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"b8416ec74ac234760340517cb7a7abfcb52d14e7a4a339b2beb63ef390d5dcee\"" Sep 4 18:08:50.787687 containerd[1450]: time="2024-09-04T18:08:50.787146784Z" level=info msg="StartContainer for \"b8416ec74ac234760340517cb7a7abfcb52d14e7a4a339b2beb63ef390d5dcee\"" Sep 4 18:08:50.858764 systemd[1]: Started cri-containerd-b8416ec74ac234760340517cb7a7abfcb52d14e7a4a339b2beb63ef390d5dcee.scope - libcontainer container b8416ec74ac234760340517cb7a7abfcb52d14e7a4a339b2beb63ef390d5dcee. Sep 4 18:08:50.902156 containerd[1450]: time="2024-09-04T18:08:50.902082476Z" level=info msg="StartContainer for \"b8416ec74ac234760340517cb7a7abfcb52d14e7a4a339b2beb63ef390d5dcee\" returns successfully" Sep 4 18:08:51.131858 kubelet[2664]: I0904 18:08:51.131290 2664 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-7r577" podStartSLOduration=31.988643491 podStartE2EDuration="40.13111004s" podCreationTimestamp="2024-09-04 18:08:11 +0000 UTC" firstStartedPulling="2024-09-04 18:08:42.606113055 +0000 UTC m=+50.341897901" lastFinishedPulling="2024-09-04 18:08:50.748579594 +0000 UTC m=+58.484364450" observedRunningTime="2024-09-04 18:08:51.113095989 +0000 UTC m=+58.848880835" watchObservedRunningTime="2024-09-04 18:08:51.13111004 +0000 UTC m=+58.866894896" Sep 4 18:08:51.889157 kubelet[2664]: I0904 18:08:51.889038 2664 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 4 18:08:51.909694 kubelet[2664]: I0904 18:08:51.909125 2664 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 4 18:08:51.971133 kubelet[2664]: I0904 18:08:51.970960 2664 topology_manager.go:215] "Topology Admit Handler" podUID="470631f0-1806-4742-9519-047c0edc7c53" 
podNamespace="calico-apiserver" podName="calico-apiserver-856d45844c-9fcpt" Sep 4 18:08:51.990588 kubelet[2664]: I0904 18:08:51.990538 2664 topology_manager.go:215] "Topology Admit Handler" podUID="778b8fbc-b656-4e94-8a56-5e7fcabc7750" podNamespace="calico-apiserver" podName="calico-apiserver-856d45844c-5k86d" Sep 4 18:08:52.071631 kubelet[2664]: I0904 18:08:52.070981 2664 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/470631f0-1806-4742-9519-047c0edc7c53-calico-apiserver-certs\") pod \"calico-apiserver-856d45844c-9fcpt\" (UID: \"470631f0-1806-4742-9519-047c0edc7c53\") " pod="calico-apiserver/calico-apiserver-856d45844c-9fcpt" Sep 4 18:08:52.071631 kubelet[2664]: I0904 18:08:52.071050 2664 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/778b8fbc-b656-4e94-8a56-5e7fcabc7750-calico-apiserver-certs\") pod \"calico-apiserver-856d45844c-5k86d\" (UID: \"778b8fbc-b656-4e94-8a56-5e7fcabc7750\") " pod="calico-apiserver/calico-apiserver-856d45844c-5k86d" Sep 4 18:08:52.071631 kubelet[2664]: I0904 18:08:52.071081 2664 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtjxd\" (UniqueName: \"kubernetes.io/projected/470631f0-1806-4742-9519-047c0edc7c53-kube-api-access-wtjxd\") pod \"calico-apiserver-856d45844c-9fcpt\" (UID: \"470631f0-1806-4742-9519-047c0edc7c53\") " pod="calico-apiserver/calico-apiserver-856d45844c-9fcpt" Sep 4 18:08:52.071631 kubelet[2664]: I0904 18:08:52.071126 2664 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhrk8\" (UniqueName: \"kubernetes.io/projected/778b8fbc-b656-4e94-8a56-5e7fcabc7750-kube-api-access-bhrk8\") pod \"calico-apiserver-856d45844c-5k86d\" (UID: \"778b8fbc-b656-4e94-8a56-5e7fcabc7750\") " 
pod="calico-apiserver/calico-apiserver-856d45844c-5k86d" Sep 4 18:08:52.084939 systemd[1]: Created slice kubepods-besteffort-pod778b8fbc_b656_4e94_8a56_5e7fcabc7750.slice - libcontainer container kubepods-besteffort-pod778b8fbc_b656_4e94_8a56_5e7fcabc7750.slice. Sep 4 18:08:52.111060 systemd[1]: Created slice kubepods-besteffort-pod470631f0_1806_4742_9519_047c0edc7c53.slice - libcontainer container kubepods-besteffort-pod470631f0_1806_4742_9519_047c0edc7c53.slice. Sep 4 18:08:52.172955 kubelet[2664]: E0904 18:08:52.172813 2664 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Sep 4 18:08:52.173405 kubelet[2664]: E0904 18:08:52.173359 2664 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Sep 4 18:08:52.178706 kubelet[2664]: E0904 18:08:52.178692 2664 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/470631f0-1806-4742-9519-047c0edc7c53-calico-apiserver-certs podName:470631f0-1806-4742-9519-047c0edc7c53 nodeName:}" failed. No retries permitted until 2024-09-04 18:08:52.672864205 +0000 UTC m=+60.408649051 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/470631f0-1806-4742-9519-047c0edc7c53-calico-apiserver-certs") pod "calico-apiserver-856d45844c-9fcpt" (UID: "470631f0-1806-4742-9519-047c0edc7c53") : secret "calico-apiserver-certs" not found Sep 4 18:08:52.178866 kubelet[2664]: E0904 18:08:52.178855 2664 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/778b8fbc-b656-4e94-8a56-5e7fcabc7750-calico-apiserver-certs podName:778b8fbc-b656-4e94-8a56-5e7fcabc7750 nodeName:}" failed. No retries permitted until 2024-09-04 18:08:52.678838258 +0000 UTC m=+60.414623114 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/778b8fbc-b656-4e94-8a56-5e7fcabc7750-calico-apiserver-certs") pod "calico-apiserver-856d45844c-5k86d" (UID: "778b8fbc-b656-4e94-8a56-5e7fcabc7750") : secret "calico-apiserver-certs" not found Sep 4 18:08:52.569354 containerd[1450]: time="2024-09-04T18:08:52.568056926Z" level=info msg="StopPodSandbox for \"5be49ce2fdd37f26d25b45a72b67fd584bc32a2bc7ae22535930dca65e4a42f8\"" Sep 4 18:08:52.674696 containerd[1450]: 2024-09-04 18:08:52.627 [WARNING][4703] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5be49ce2fdd37f26d25b45a72b67fd584bc32a2bc7ae22535930dca65e4a42f8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054--1--0--c--4d101ae770.novalocal-k8s-coredns--76f75df574--jm87r-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"3b130609-1dd2-44b3-90db-48b3db3330ce", ResourceVersion:"753", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 18, 8, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054-1-0-c-4d101ae770.novalocal", ContainerID:"12292800ed16318570cfb4a1ed1329121b3a10d654363c38bd2a2bd516b89efd", Pod:"coredns-76f75df574-jm87r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.60.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1bdd260a71d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 18:08:52.674696 containerd[1450]: 2024-09-04 18:08:52.627 [INFO][4703] k8s.go 608: Cleaning up netns ContainerID="5be49ce2fdd37f26d25b45a72b67fd584bc32a2bc7ae22535930dca65e4a42f8" Sep 4 18:08:52.674696 containerd[1450]: 2024-09-04 18:08:52.627 [INFO][4703] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="5be49ce2fdd37f26d25b45a72b67fd584bc32a2bc7ae22535930dca65e4a42f8" iface="eth0" netns="" Sep 4 18:08:52.674696 containerd[1450]: 2024-09-04 18:08:52.627 [INFO][4703] k8s.go 615: Releasing IP address(es) ContainerID="5be49ce2fdd37f26d25b45a72b67fd584bc32a2bc7ae22535930dca65e4a42f8" Sep 4 18:08:52.674696 containerd[1450]: 2024-09-04 18:08:52.627 [INFO][4703] utils.go 188: Calico CNI releasing IP address ContainerID="5be49ce2fdd37f26d25b45a72b67fd584bc32a2bc7ae22535930dca65e4a42f8" Sep 4 18:08:52.674696 containerd[1450]: 2024-09-04 18:08:52.662 [INFO][4709] ipam_plugin.go 417: Releasing address using handleID ContainerID="5be49ce2fdd37f26d25b45a72b67fd584bc32a2bc7ae22535930dca65e4a42f8" HandleID="k8s-pod-network.5be49ce2fdd37f26d25b45a72b67fd584bc32a2bc7ae22535930dca65e4a42f8" Workload="ci--4054--1--0--c--4d101ae770.novalocal-k8s-coredns--76f75df574--jm87r-eth0" Sep 4 18:08:52.674696 containerd[1450]: 2024-09-04 18:08:52.662 [INFO][4709] ipam_plugin.go 358: About to acquire host-wide IPAM lock. 
Sep 4 18:08:52.674696 containerd[1450]: 2024-09-04 18:08:52.662 [INFO][4709] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 18:08:52.674696 containerd[1450]: 2024-09-04 18:08:52.669 [WARNING][4709] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="5be49ce2fdd37f26d25b45a72b67fd584bc32a2bc7ae22535930dca65e4a42f8" HandleID="k8s-pod-network.5be49ce2fdd37f26d25b45a72b67fd584bc32a2bc7ae22535930dca65e4a42f8" Workload="ci--4054--1--0--c--4d101ae770.novalocal-k8s-coredns--76f75df574--jm87r-eth0" Sep 4 18:08:52.674696 containerd[1450]: 2024-09-04 18:08:52.669 [INFO][4709] ipam_plugin.go 445: Releasing address using workloadID ContainerID="5be49ce2fdd37f26d25b45a72b67fd584bc32a2bc7ae22535930dca65e4a42f8" HandleID="k8s-pod-network.5be49ce2fdd37f26d25b45a72b67fd584bc32a2bc7ae22535930dca65e4a42f8" Workload="ci--4054--1--0--c--4d101ae770.novalocal-k8s-coredns--76f75df574--jm87r-eth0" Sep 4 18:08:52.674696 containerd[1450]: 2024-09-04 18:08:52.671 [INFO][4709] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 18:08:52.674696 containerd[1450]: 2024-09-04 18:08:52.672 [INFO][4703] k8s.go 621: Teardown processing complete. 
ContainerID="5be49ce2fdd37f26d25b45a72b67fd584bc32a2bc7ae22535930dca65e4a42f8" Sep 4 18:08:52.675191 containerd[1450]: time="2024-09-04T18:08:52.674736623Z" level=info msg="TearDown network for sandbox \"5be49ce2fdd37f26d25b45a72b67fd584bc32a2bc7ae22535930dca65e4a42f8\" successfully" Sep 4 18:08:52.675191 containerd[1450]: time="2024-09-04T18:08:52.674762443Z" level=info msg="StopPodSandbox for \"5be49ce2fdd37f26d25b45a72b67fd584bc32a2bc7ae22535930dca65e4a42f8\" returns successfully" Sep 4 18:08:52.682878 containerd[1450]: time="2024-09-04T18:08:52.682844191Z" level=info msg="RemovePodSandbox for \"5be49ce2fdd37f26d25b45a72b67fd584bc32a2bc7ae22535930dca65e4a42f8\"" Sep 4 18:08:52.682980 containerd[1450]: time="2024-09-04T18:08:52.682889837Z" level=info msg="Forcibly stopping sandbox \"5be49ce2fdd37f26d25b45a72b67fd584bc32a2bc7ae22535930dca65e4a42f8\"" Sep 4 18:08:52.708227 containerd[1450]: time="2024-09-04T18:08:52.707813025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-856d45844c-5k86d,Uid:778b8fbc-b656-4e94-8a56-5e7fcabc7750,Namespace:calico-apiserver,Attempt:0,}" Sep 4 18:08:52.721602 containerd[1450]: time="2024-09-04T18:08:52.720750407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-856d45844c-9fcpt,Uid:470631f0-1806-4742-9519-047c0edc7c53,Namespace:calico-apiserver,Attempt:0,}" Sep 4 18:08:52.832553 containerd[1450]: 2024-09-04 18:08:52.789 [WARNING][4729] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5be49ce2fdd37f26d25b45a72b67fd584bc32a2bc7ae22535930dca65e4a42f8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054--1--0--c--4d101ae770.novalocal-k8s-coredns--76f75df574--jm87r-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"3b130609-1dd2-44b3-90db-48b3db3330ce", ResourceVersion:"753", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 18, 8, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054-1-0-c-4d101ae770.novalocal", ContainerID:"12292800ed16318570cfb4a1ed1329121b3a10d654363c38bd2a2bd516b89efd", Pod:"coredns-76f75df574-jm87r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.60.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1bdd260a71d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 18:08:52.832553 containerd[1450]: 2024-09-04 18:08:52.789 
[INFO][4729] k8s.go 608: Cleaning up netns ContainerID="5be49ce2fdd37f26d25b45a72b67fd584bc32a2bc7ae22535930dca65e4a42f8" Sep 4 18:08:52.832553 containerd[1450]: 2024-09-04 18:08:52.789 [INFO][4729] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="5be49ce2fdd37f26d25b45a72b67fd584bc32a2bc7ae22535930dca65e4a42f8" iface="eth0" netns="" Sep 4 18:08:52.832553 containerd[1450]: 2024-09-04 18:08:52.789 [INFO][4729] k8s.go 615: Releasing IP address(es) ContainerID="5be49ce2fdd37f26d25b45a72b67fd584bc32a2bc7ae22535930dca65e4a42f8" Sep 4 18:08:52.832553 containerd[1450]: 2024-09-04 18:08:52.789 [INFO][4729] utils.go 188: Calico CNI releasing IP address ContainerID="5be49ce2fdd37f26d25b45a72b67fd584bc32a2bc7ae22535930dca65e4a42f8" Sep 4 18:08:52.832553 containerd[1450]: 2024-09-04 18:08:52.817 [INFO][4752] ipam_plugin.go 417: Releasing address using handleID ContainerID="5be49ce2fdd37f26d25b45a72b67fd584bc32a2bc7ae22535930dca65e4a42f8" HandleID="k8s-pod-network.5be49ce2fdd37f26d25b45a72b67fd584bc32a2bc7ae22535930dca65e4a42f8" Workload="ci--4054--1--0--c--4d101ae770.novalocal-k8s-coredns--76f75df574--jm87r-eth0" Sep 4 18:08:52.832553 containerd[1450]: 2024-09-04 18:08:52.817 [INFO][4752] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 18:08:52.832553 containerd[1450]: 2024-09-04 18:08:52.817 [INFO][4752] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 18:08:52.832553 containerd[1450]: 2024-09-04 18:08:52.824 [WARNING][4752] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5be49ce2fdd37f26d25b45a72b67fd584bc32a2bc7ae22535930dca65e4a42f8" HandleID="k8s-pod-network.5be49ce2fdd37f26d25b45a72b67fd584bc32a2bc7ae22535930dca65e4a42f8" Workload="ci--4054--1--0--c--4d101ae770.novalocal-k8s-coredns--76f75df574--jm87r-eth0" Sep 4 18:08:52.832553 containerd[1450]: 2024-09-04 18:08:52.824 [INFO][4752] ipam_plugin.go 445: Releasing address using workloadID ContainerID="5be49ce2fdd37f26d25b45a72b67fd584bc32a2bc7ae22535930dca65e4a42f8" HandleID="k8s-pod-network.5be49ce2fdd37f26d25b45a72b67fd584bc32a2bc7ae22535930dca65e4a42f8" Workload="ci--4054--1--0--c--4d101ae770.novalocal-k8s-coredns--76f75df574--jm87r-eth0" Sep 4 18:08:52.832553 containerd[1450]: 2024-09-04 18:08:52.827 [INFO][4752] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 18:08:52.832553 containerd[1450]: 2024-09-04 18:08:52.830 [INFO][4729] k8s.go 621: Teardown processing complete. ContainerID="5be49ce2fdd37f26d25b45a72b67fd584bc32a2bc7ae22535930dca65e4a42f8" Sep 4 18:08:52.834002 containerd[1450]: time="2024-09-04T18:08:52.832529787Z" level=info msg="TearDown network for sandbox \"5be49ce2fdd37f26d25b45a72b67fd584bc32a2bc7ae22535930dca65e4a42f8\" successfully" Sep 4 18:08:52.914987 containerd[1450]: time="2024-09-04T18:08:52.914820679Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5be49ce2fdd37f26d25b45a72b67fd584bc32a2bc7ae22535930dca65e4a42f8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 18:08:52.915468 containerd[1450]: time="2024-09-04T18:08:52.915447406Z" level=info msg="RemovePodSandbox \"5be49ce2fdd37f26d25b45a72b67fd584bc32a2bc7ae22535930dca65e4a42f8\" returns successfully" Sep 4 18:08:52.921470 containerd[1450]: time="2024-09-04T18:08:52.921444733Z" level=info msg="StopPodSandbox for \"fb316156dd2462ed8f82bf2f36c041260ff7a11c0afdcf57f7c6b7d5ca5642bf\"" Sep 4 18:08:53.009209 systemd-networkd[1346]: cali7b634f248de: Link UP Sep 4 18:08:53.010478 systemd-networkd[1346]: cali7b634f248de: Gained carrier Sep 4 18:08:53.050812 containerd[1450]: 2024-09-04 18:08:52.849 [INFO][4736] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4054--1--0--c--4d101ae770.novalocal-k8s-calico--apiserver--856d45844c--5k86d-eth0 calico-apiserver-856d45844c- calico-apiserver 778b8fbc-b656-4e94-8a56-5e7fcabc7750 853 0 2024-09-04 18:08:51 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:856d45844c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4054-1-0-c-4d101ae770.novalocal calico-apiserver-856d45844c-5k86d eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7b634f248de [] []}} ContainerID="18a8f19e84446f61f96687cece39d37d59b9f766b54cc2433541816c5e0df6db" Namespace="calico-apiserver" Pod="calico-apiserver-856d45844c-5k86d" WorkloadEndpoint="ci--4054--1--0--c--4d101ae770.novalocal-k8s-calico--apiserver--856d45844c--5k86d-" Sep 4 18:08:53.050812 containerd[1450]: 2024-09-04 18:08:52.849 [INFO][4736] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="18a8f19e84446f61f96687cece39d37d59b9f766b54cc2433541816c5e0df6db" Namespace="calico-apiserver" Pod="calico-apiserver-856d45844c-5k86d" WorkloadEndpoint="ci--4054--1--0--c--4d101ae770.novalocal-k8s-calico--apiserver--856d45844c--5k86d-eth0" Sep 4 
18:08:53.050812 containerd[1450]: 2024-09-04 18:08:52.914 [INFO][4767] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="18a8f19e84446f61f96687cece39d37d59b9f766b54cc2433541816c5e0df6db" HandleID="k8s-pod-network.18a8f19e84446f61f96687cece39d37d59b9f766b54cc2433541816c5e0df6db" Workload="ci--4054--1--0--c--4d101ae770.novalocal-k8s-calico--apiserver--856d45844c--5k86d-eth0" Sep 4 18:08:53.050812 containerd[1450]: 2024-09-04 18:08:52.938 [INFO][4767] ipam_plugin.go 270: Auto assigning IP ContainerID="18a8f19e84446f61f96687cece39d37d59b9f766b54cc2433541816c5e0df6db" HandleID="k8s-pod-network.18a8f19e84446f61f96687cece39d37d59b9f766b54cc2433541816c5e0df6db" Workload="ci--4054--1--0--c--4d101ae770.novalocal-k8s-calico--apiserver--856d45844c--5k86d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ba9b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4054-1-0-c-4d101ae770.novalocal", "pod":"calico-apiserver-856d45844c-5k86d", "timestamp":"2024-09-04 18:08:52.914171982 +0000 UTC"}, Hostname:"ci-4054-1-0-c-4d101ae770.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 18:08:53.050812 containerd[1450]: 2024-09-04 18:08:52.938 [INFO][4767] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 18:08:53.050812 containerd[1450]: 2024-09-04 18:08:52.938 [INFO][4767] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 18:08:53.050812 containerd[1450]: 2024-09-04 18:08:52.938 [INFO][4767] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4054-1-0-c-4d101ae770.novalocal' Sep 4 18:08:53.050812 containerd[1450]: 2024-09-04 18:08:52.942 [INFO][4767] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.18a8f19e84446f61f96687cece39d37d59b9f766b54cc2433541816c5e0df6db" host="ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:08:53.050812 containerd[1450]: 2024-09-04 18:08:52.949 [INFO][4767] ipam.go 372: Looking up existing affinities for host host="ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:08:53.050812 containerd[1450]: 2024-09-04 18:08:52.957 [INFO][4767] ipam.go 489: Trying affinity for 192.168.60.128/26 host="ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:08:53.050812 containerd[1450]: 2024-09-04 18:08:52.961 [INFO][4767] ipam.go 155: Attempting to load block cidr=192.168.60.128/26 host="ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:08:53.050812 containerd[1450]: 2024-09-04 18:08:52.970 [INFO][4767] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.60.128/26 host="ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:08:53.050812 containerd[1450]: 2024-09-04 18:08:52.970 [INFO][4767] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.60.128/26 handle="k8s-pod-network.18a8f19e84446f61f96687cece39d37d59b9f766b54cc2433541816c5e0df6db" host="ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:08:53.050812 containerd[1450]: 2024-09-04 18:08:52.972 [INFO][4767] ipam.go 1685: Creating new handle: k8s-pod-network.18a8f19e84446f61f96687cece39d37d59b9f766b54cc2433541816c5e0df6db Sep 4 18:08:53.050812 containerd[1450]: 2024-09-04 18:08:52.982 [INFO][4767] ipam.go 1203: Writing block in order to claim IPs block=192.168.60.128/26 handle="k8s-pod-network.18a8f19e84446f61f96687cece39d37d59b9f766b54cc2433541816c5e0df6db" host="ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:08:53.050812 containerd[1450]: 2024-09-04 18:08:52.998 [INFO][4767] 
ipam.go 1216: Successfully claimed IPs: [192.168.60.133/26] block=192.168.60.128/26 handle="k8s-pod-network.18a8f19e84446f61f96687cece39d37d59b9f766b54cc2433541816c5e0df6db" host="ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:08:53.050812 containerd[1450]: 2024-09-04 18:08:52.998 [INFO][4767] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.60.133/26] handle="k8s-pod-network.18a8f19e84446f61f96687cece39d37d59b9f766b54cc2433541816c5e0df6db" host="ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:08:53.050812 containerd[1450]: 2024-09-04 18:08:52.998 [INFO][4767] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 18:08:53.050812 containerd[1450]: 2024-09-04 18:08:52.999 [INFO][4767] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.60.133/26] IPv6=[] ContainerID="18a8f19e84446f61f96687cece39d37d59b9f766b54cc2433541816c5e0df6db" HandleID="k8s-pod-network.18a8f19e84446f61f96687cece39d37d59b9f766b54cc2433541816c5e0df6db" Workload="ci--4054--1--0--c--4d101ae770.novalocal-k8s-calico--apiserver--856d45844c--5k86d-eth0" Sep 4 18:08:53.051470 containerd[1450]: 2024-09-04 18:08:53.003 [INFO][4736] k8s.go 386: Populated endpoint ContainerID="18a8f19e84446f61f96687cece39d37d59b9f766b54cc2433541816c5e0df6db" Namespace="calico-apiserver" Pod="calico-apiserver-856d45844c-5k86d" WorkloadEndpoint="ci--4054--1--0--c--4d101ae770.novalocal-k8s-calico--apiserver--856d45844c--5k86d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054--1--0--c--4d101ae770.novalocal-k8s-calico--apiserver--856d45844c--5k86d-eth0", GenerateName:"calico-apiserver-856d45844c-", Namespace:"calico-apiserver", SelfLink:"", UID:"778b8fbc-b656-4e94-8a56-5e7fcabc7750", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 18, 8, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"856d45844c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054-1-0-c-4d101ae770.novalocal", ContainerID:"", Pod:"calico-apiserver-856d45844c-5k86d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.60.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7b634f248de", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 18:08:53.051470 containerd[1450]: 2024-09-04 18:08:53.003 [INFO][4736] k8s.go 387: Calico CNI using IPs: [192.168.60.133/32] ContainerID="18a8f19e84446f61f96687cece39d37d59b9f766b54cc2433541816c5e0df6db" Namespace="calico-apiserver" Pod="calico-apiserver-856d45844c-5k86d" WorkloadEndpoint="ci--4054--1--0--c--4d101ae770.novalocal-k8s-calico--apiserver--856d45844c--5k86d-eth0" Sep 4 18:08:53.051470 containerd[1450]: 2024-09-04 18:08:53.003 [INFO][4736] dataplane_linux.go 68: Setting the host side veth name to cali7b634f248de ContainerID="18a8f19e84446f61f96687cece39d37d59b9f766b54cc2433541816c5e0df6db" Namespace="calico-apiserver" Pod="calico-apiserver-856d45844c-5k86d" WorkloadEndpoint="ci--4054--1--0--c--4d101ae770.novalocal-k8s-calico--apiserver--856d45844c--5k86d-eth0" Sep 4 18:08:53.051470 containerd[1450]: 2024-09-04 18:08:53.009 [INFO][4736] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="18a8f19e84446f61f96687cece39d37d59b9f766b54cc2433541816c5e0df6db" Namespace="calico-apiserver" Pod="calico-apiserver-856d45844c-5k86d" 
WorkloadEndpoint="ci--4054--1--0--c--4d101ae770.novalocal-k8s-calico--apiserver--856d45844c--5k86d-eth0" Sep 4 18:08:53.051470 containerd[1450]: 2024-09-04 18:08:53.014 [INFO][4736] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="18a8f19e84446f61f96687cece39d37d59b9f766b54cc2433541816c5e0df6db" Namespace="calico-apiserver" Pod="calico-apiserver-856d45844c-5k86d" WorkloadEndpoint="ci--4054--1--0--c--4d101ae770.novalocal-k8s-calico--apiserver--856d45844c--5k86d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054--1--0--c--4d101ae770.novalocal-k8s-calico--apiserver--856d45844c--5k86d-eth0", GenerateName:"calico-apiserver-856d45844c-", Namespace:"calico-apiserver", SelfLink:"", UID:"778b8fbc-b656-4e94-8a56-5e7fcabc7750", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 18, 8, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"856d45844c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054-1-0-c-4d101ae770.novalocal", ContainerID:"18a8f19e84446f61f96687cece39d37d59b9f766b54cc2433541816c5e0df6db", Pod:"calico-apiserver-856d45844c-5k86d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.60.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7b634f248de", 
MAC:"56:27:3b:07:3e:f7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 18:08:53.051470 containerd[1450]: 2024-09-04 18:08:53.036 [INFO][4736] k8s.go 500: Wrote updated endpoint to datastore ContainerID="18a8f19e84446f61f96687cece39d37d59b9f766b54cc2433541816c5e0df6db" Namespace="calico-apiserver" Pod="calico-apiserver-856d45844c-5k86d" WorkloadEndpoint="ci--4054--1--0--c--4d101ae770.novalocal-k8s-calico--apiserver--856d45844c--5k86d-eth0" Sep 4 18:08:53.114092 containerd[1450]: time="2024-09-04T18:08:53.112524658Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 18:08:53.115721 containerd[1450]: time="2024-09-04T18:08:53.113777950Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 18:08:53.115721 containerd[1450]: time="2024-09-04T18:08:53.114648535Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 18:08:53.115721 containerd[1450]: time="2024-09-04T18:08:53.114770303Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 18:08:53.170370 systemd-networkd[1346]: cali02cc8df41a4: Link UP Sep 4 18:08:53.175175 systemd-networkd[1346]: cali02cc8df41a4: Gained carrier Sep 4 18:08:53.207073 containerd[1450]: 2024-09-04 18:08:52.988 [WARNING][4791] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fb316156dd2462ed8f82bf2f36c041260ff7a11c0afdcf57f7c6b7d5ca5642bf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054--1--0--c--4d101ae770.novalocal-k8s-coredns--76f75df574--pm852-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"a7bc586f-0539-46c3-a4a7-5297cb2ca3b1", ResourceVersion:"755", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 18, 8, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054-1-0-c-4d101ae770.novalocal", ContainerID:"717766c115383bae73ca9762e95d63f0a3b3e890d8af3e467fd3cf2ab31525b0", Pod:"coredns-76f75df574-pm852", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.60.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7928cb9bce0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 18:08:53.207073 containerd[1450]: 2024-09-04 18:08:52.988 
[INFO][4791] k8s.go 608: Cleaning up netns ContainerID="fb316156dd2462ed8f82bf2f36c041260ff7a11c0afdcf57f7c6b7d5ca5642bf" Sep 4 18:08:53.207073 containerd[1450]: 2024-09-04 18:08:52.988 [INFO][4791] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="fb316156dd2462ed8f82bf2f36c041260ff7a11c0afdcf57f7c6b7d5ca5642bf" iface="eth0" netns="" Sep 4 18:08:53.207073 containerd[1450]: 2024-09-04 18:08:52.989 [INFO][4791] k8s.go 615: Releasing IP address(es) ContainerID="fb316156dd2462ed8f82bf2f36c041260ff7a11c0afdcf57f7c6b7d5ca5642bf" Sep 4 18:08:53.207073 containerd[1450]: 2024-09-04 18:08:52.989 [INFO][4791] utils.go 188: Calico CNI releasing IP address ContainerID="fb316156dd2462ed8f82bf2f36c041260ff7a11c0afdcf57f7c6b7d5ca5642bf" Sep 4 18:08:53.207073 containerd[1450]: 2024-09-04 18:08:53.084 [INFO][4797] ipam_plugin.go 417: Releasing address using handleID ContainerID="fb316156dd2462ed8f82bf2f36c041260ff7a11c0afdcf57f7c6b7d5ca5642bf" HandleID="k8s-pod-network.fb316156dd2462ed8f82bf2f36c041260ff7a11c0afdcf57f7c6b7d5ca5642bf" Workload="ci--4054--1--0--c--4d101ae770.novalocal-k8s-coredns--76f75df574--pm852-eth0" Sep 4 18:08:53.207073 containerd[1450]: 2024-09-04 18:08:53.086 [INFO][4797] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 18:08:53.207073 containerd[1450]: 2024-09-04 18:08:53.132 [INFO][4797] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 18:08:53.207073 containerd[1450]: 2024-09-04 18:08:53.169 [WARNING][4797] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fb316156dd2462ed8f82bf2f36c041260ff7a11c0afdcf57f7c6b7d5ca5642bf" HandleID="k8s-pod-network.fb316156dd2462ed8f82bf2f36c041260ff7a11c0afdcf57f7c6b7d5ca5642bf" Workload="ci--4054--1--0--c--4d101ae770.novalocal-k8s-coredns--76f75df574--pm852-eth0" Sep 4 18:08:53.207073 containerd[1450]: 2024-09-04 18:08:53.169 [INFO][4797] ipam_plugin.go 445: Releasing address using workloadID ContainerID="fb316156dd2462ed8f82bf2f36c041260ff7a11c0afdcf57f7c6b7d5ca5642bf" HandleID="k8s-pod-network.fb316156dd2462ed8f82bf2f36c041260ff7a11c0afdcf57f7c6b7d5ca5642bf" Workload="ci--4054--1--0--c--4d101ae770.novalocal-k8s-coredns--76f75df574--pm852-eth0" Sep 4 18:08:53.207073 containerd[1450]: 2024-09-04 18:08:53.191 [INFO][4797] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 18:08:53.207073 containerd[1450]: 2024-09-04 18:08:53.195 [INFO][4791] k8s.go 621: Teardown processing complete. ContainerID="fb316156dd2462ed8f82bf2f36c041260ff7a11c0afdcf57f7c6b7d5ca5642bf" Sep 4 18:08:53.209029 systemd[1]: Started cri-containerd-18a8f19e84446f61f96687cece39d37d59b9f766b54cc2433541816c5e0df6db.scope - libcontainer container 18a8f19e84446f61f96687cece39d37d59b9f766b54cc2433541816c5e0df6db. 
Sep 4 18:08:53.218842 containerd[1450]: time="2024-09-04T18:08:53.218803280Z" level=info msg="TearDown network for sandbox \"fb316156dd2462ed8f82bf2f36c041260ff7a11c0afdcf57f7c6b7d5ca5642bf\" successfully" Sep 4 18:08:53.219121 containerd[1450]: time="2024-09-04T18:08:53.219007363Z" level=info msg="StopPodSandbox for \"fb316156dd2462ed8f82bf2f36c041260ff7a11c0afdcf57f7c6b7d5ca5642bf\" returns successfully" Sep 4 18:08:53.219480 containerd[1450]: time="2024-09-04T18:08:53.219441127Z" level=info msg="RemovePodSandbox for \"fb316156dd2462ed8f82bf2f36c041260ff7a11c0afdcf57f7c6b7d5ca5642bf\"" Sep 4 18:08:53.219531 containerd[1450]: time="2024-09-04T18:08:53.219479248Z" level=info msg="Forcibly stopping sandbox \"fb316156dd2462ed8f82bf2f36c041260ff7a11c0afdcf57f7c6b7d5ca5642bf\"" Sep 4 18:08:53.234343 containerd[1450]: 2024-09-04 18:08:52.863 [INFO][4747] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4054--1--0--c--4d101ae770.novalocal-k8s-calico--apiserver--856d45844c--9fcpt-eth0 calico-apiserver-856d45844c- calico-apiserver 470631f0-1806-4742-9519-047c0edc7c53 850 0 2024-09-04 18:08:51 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:856d45844c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4054-1-0-c-4d101ae770.novalocal calico-apiserver-856d45844c-9fcpt eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali02cc8df41a4 [] []}} ContainerID="255b709aa280814bb7d998417809b23fa551838c3d4fa43d303626e0f7598068" Namespace="calico-apiserver" Pod="calico-apiserver-856d45844c-9fcpt" WorkloadEndpoint="ci--4054--1--0--c--4d101ae770.novalocal-k8s-calico--apiserver--856d45844c--9fcpt-" Sep 4 18:08:53.234343 containerd[1450]: 2024-09-04 18:08:52.864 [INFO][4747] k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="255b709aa280814bb7d998417809b23fa551838c3d4fa43d303626e0f7598068" Namespace="calico-apiserver" Pod="calico-apiserver-856d45844c-9fcpt" WorkloadEndpoint="ci--4054--1--0--c--4d101ae770.novalocal-k8s-calico--apiserver--856d45844c--9fcpt-eth0" Sep 4 18:08:53.234343 containerd[1450]: 2024-09-04 18:08:52.932 [INFO][4771] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="255b709aa280814bb7d998417809b23fa551838c3d4fa43d303626e0f7598068" HandleID="k8s-pod-network.255b709aa280814bb7d998417809b23fa551838c3d4fa43d303626e0f7598068" Workload="ci--4054--1--0--c--4d101ae770.novalocal-k8s-calico--apiserver--856d45844c--9fcpt-eth0" Sep 4 18:08:53.234343 containerd[1450]: 2024-09-04 18:08:52.955 [INFO][4771] ipam_plugin.go 270: Auto assigning IP ContainerID="255b709aa280814bb7d998417809b23fa551838c3d4fa43d303626e0f7598068" HandleID="k8s-pod-network.255b709aa280814bb7d998417809b23fa551838c3d4fa43d303626e0f7598068" Workload="ci--4054--1--0--c--4d101ae770.novalocal-k8s-calico--apiserver--856d45844c--9fcpt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001147a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4054-1-0-c-4d101ae770.novalocal", "pod":"calico-apiserver-856d45844c-9fcpt", "timestamp":"2024-09-04 18:08:52.932977158 +0000 UTC"}, Hostname:"ci-4054-1-0-c-4d101ae770.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 18:08:53.234343 containerd[1450]: 2024-09-04 18:08:52.957 [INFO][4771] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 18:08:53.234343 containerd[1450]: 2024-09-04 18:08:52.999 [INFO][4771] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 18:08:53.234343 containerd[1450]: 2024-09-04 18:08:52.999 [INFO][4771] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4054-1-0-c-4d101ae770.novalocal' Sep 4 18:08:53.234343 containerd[1450]: 2024-09-04 18:08:53.003 [INFO][4771] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.255b709aa280814bb7d998417809b23fa551838c3d4fa43d303626e0f7598068" host="ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:08:53.234343 containerd[1450]: 2024-09-04 18:08:53.020 [INFO][4771] ipam.go 372: Looking up existing affinities for host host="ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:08:53.234343 containerd[1450]: 2024-09-04 18:08:53.029 [INFO][4771] ipam.go 489: Trying affinity for 192.168.60.128/26 host="ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:08:53.234343 containerd[1450]: 2024-09-04 18:08:53.032 [INFO][4771] ipam.go 155: Attempting to load block cidr=192.168.60.128/26 host="ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:08:53.234343 containerd[1450]: 2024-09-04 18:08:53.071 [INFO][4771] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.60.128/26 host="ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:08:53.234343 containerd[1450]: 2024-09-04 18:08:53.071 [INFO][4771] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.60.128/26 handle="k8s-pod-network.255b709aa280814bb7d998417809b23fa551838c3d4fa43d303626e0f7598068" host="ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:08:53.234343 containerd[1450]: 2024-09-04 18:08:53.077 [INFO][4771] ipam.go 1685: Creating new handle: k8s-pod-network.255b709aa280814bb7d998417809b23fa551838c3d4fa43d303626e0f7598068 Sep 4 18:08:53.234343 containerd[1450]: 2024-09-04 18:08:53.096 [INFO][4771] ipam.go 1203: Writing block in order to claim IPs block=192.168.60.128/26 handle="k8s-pod-network.255b709aa280814bb7d998417809b23fa551838c3d4fa43d303626e0f7598068" host="ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:08:53.234343 containerd[1450]: 2024-09-04 18:08:53.132 [INFO][4771] 
ipam.go 1216: Successfully claimed IPs: [192.168.60.134/26] block=192.168.60.128/26 handle="k8s-pod-network.255b709aa280814bb7d998417809b23fa551838c3d4fa43d303626e0f7598068" host="ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:08:53.234343 containerd[1450]: 2024-09-04 18:08:53.132 [INFO][4771] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.60.134/26] handle="k8s-pod-network.255b709aa280814bb7d998417809b23fa551838c3d4fa43d303626e0f7598068" host="ci-4054-1-0-c-4d101ae770.novalocal" Sep 4 18:08:53.234343 containerd[1450]: 2024-09-04 18:08:53.132 [INFO][4771] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 18:08:53.234343 containerd[1450]: 2024-09-04 18:08:53.132 [INFO][4771] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.60.134/26] IPv6=[] ContainerID="255b709aa280814bb7d998417809b23fa551838c3d4fa43d303626e0f7598068" HandleID="k8s-pod-network.255b709aa280814bb7d998417809b23fa551838c3d4fa43d303626e0f7598068" Workload="ci--4054--1--0--c--4d101ae770.novalocal-k8s-calico--apiserver--856d45844c--9fcpt-eth0" Sep 4 18:08:53.237215 containerd[1450]: 2024-09-04 18:08:53.153 [INFO][4747] k8s.go 386: Populated endpoint ContainerID="255b709aa280814bb7d998417809b23fa551838c3d4fa43d303626e0f7598068" Namespace="calico-apiserver" Pod="calico-apiserver-856d45844c-9fcpt" WorkloadEndpoint="ci--4054--1--0--c--4d101ae770.novalocal-k8s-calico--apiserver--856d45844c--9fcpt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054--1--0--c--4d101ae770.novalocal-k8s-calico--apiserver--856d45844c--9fcpt-eth0", GenerateName:"calico-apiserver-856d45844c-", Namespace:"calico-apiserver", SelfLink:"", UID:"470631f0-1806-4742-9519-047c0edc7c53", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 18, 8, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"856d45844c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054-1-0-c-4d101ae770.novalocal", ContainerID:"", Pod:"calico-apiserver-856d45844c-9fcpt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.60.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali02cc8df41a4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 18:08:53.237215 containerd[1450]: 2024-09-04 18:08:53.158 [INFO][4747] k8s.go 387: Calico CNI using IPs: [192.168.60.134/32] ContainerID="255b709aa280814bb7d998417809b23fa551838c3d4fa43d303626e0f7598068" Namespace="calico-apiserver" Pod="calico-apiserver-856d45844c-9fcpt" WorkloadEndpoint="ci--4054--1--0--c--4d101ae770.novalocal-k8s-calico--apiserver--856d45844c--9fcpt-eth0" Sep 4 18:08:53.237215 containerd[1450]: 2024-09-04 18:08:53.158 [INFO][4747] dataplane_linux.go 68: Setting the host side veth name to cali02cc8df41a4 ContainerID="255b709aa280814bb7d998417809b23fa551838c3d4fa43d303626e0f7598068" Namespace="calico-apiserver" Pod="calico-apiserver-856d45844c-9fcpt" WorkloadEndpoint="ci--4054--1--0--c--4d101ae770.novalocal-k8s-calico--apiserver--856d45844c--9fcpt-eth0" Sep 4 18:08:53.237215 containerd[1450]: 2024-09-04 18:08:53.176 [INFO][4747] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="255b709aa280814bb7d998417809b23fa551838c3d4fa43d303626e0f7598068" Namespace="calico-apiserver" Pod="calico-apiserver-856d45844c-9fcpt" 
WorkloadEndpoint="ci--4054--1--0--c--4d101ae770.novalocal-k8s-calico--apiserver--856d45844c--9fcpt-eth0" Sep 4 18:08:53.237215 containerd[1450]: 2024-09-04 18:08:53.181 [INFO][4747] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="255b709aa280814bb7d998417809b23fa551838c3d4fa43d303626e0f7598068" Namespace="calico-apiserver" Pod="calico-apiserver-856d45844c-9fcpt" WorkloadEndpoint="ci--4054--1--0--c--4d101ae770.novalocal-k8s-calico--apiserver--856d45844c--9fcpt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054--1--0--c--4d101ae770.novalocal-k8s-calico--apiserver--856d45844c--9fcpt-eth0", GenerateName:"calico-apiserver-856d45844c-", Namespace:"calico-apiserver", SelfLink:"", UID:"470631f0-1806-4742-9519-047c0edc7c53", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 18, 8, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"856d45844c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054-1-0-c-4d101ae770.novalocal", ContainerID:"255b709aa280814bb7d998417809b23fa551838c3d4fa43d303626e0f7598068", Pod:"calico-apiserver-856d45844c-9fcpt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.60.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali02cc8df41a4", 
MAC:"fa:3b:31:de:0c:a7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 18:08:53.237215 containerd[1450]: 2024-09-04 18:08:53.228 [INFO][4747] k8s.go 500: Wrote updated endpoint to datastore ContainerID="255b709aa280814bb7d998417809b23fa551838c3d4fa43d303626e0f7598068" Namespace="calico-apiserver" Pod="calico-apiserver-856d45844c-9fcpt" WorkloadEndpoint="ci--4054--1--0--c--4d101ae770.novalocal-k8s-calico--apiserver--856d45844c--9fcpt-eth0" Sep 4 18:08:53.345777 containerd[1450]: 2024-09-04 18:08:53.300 [WARNING][4867] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="fb316156dd2462ed8f82bf2f36c041260ff7a11c0afdcf57f7c6b7d5ca5642bf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054--1--0--c--4d101ae770.novalocal-k8s-coredns--76f75df574--pm852-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"a7bc586f-0539-46c3-a4a7-5297cb2ca3b1", ResourceVersion:"755", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 18, 8, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054-1-0-c-4d101ae770.novalocal", ContainerID:"717766c115383bae73ca9762e95d63f0a3b3e890d8af3e467fd3cf2ab31525b0", Pod:"coredns-76f75df574-pm852", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.60.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7928cb9bce0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 18:08:53.345777 containerd[1450]: 2024-09-04 18:08:53.301 [INFO][4867] k8s.go 608: Cleaning up netns ContainerID="fb316156dd2462ed8f82bf2f36c041260ff7a11c0afdcf57f7c6b7d5ca5642bf" Sep 4 18:08:53.345777 containerd[1450]: 2024-09-04 18:08:53.301 [INFO][4867] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="fb316156dd2462ed8f82bf2f36c041260ff7a11c0afdcf57f7c6b7d5ca5642bf" iface="eth0" netns="" Sep 4 18:08:53.345777 containerd[1450]: 2024-09-04 18:08:53.301 [INFO][4867] k8s.go 615: Releasing IP address(es) ContainerID="fb316156dd2462ed8f82bf2f36c041260ff7a11c0afdcf57f7c6b7d5ca5642bf" Sep 4 18:08:53.345777 containerd[1450]: 2024-09-04 18:08:53.301 [INFO][4867] utils.go 188: Calico CNI releasing IP address ContainerID="fb316156dd2462ed8f82bf2f36c041260ff7a11c0afdcf57f7c6b7d5ca5642bf" Sep 4 18:08:53.345777 containerd[1450]: 2024-09-04 18:08:53.331 [INFO][4882] ipam_plugin.go 417: Releasing address using handleID ContainerID="fb316156dd2462ed8f82bf2f36c041260ff7a11c0afdcf57f7c6b7d5ca5642bf" HandleID="k8s-pod-network.fb316156dd2462ed8f82bf2f36c041260ff7a11c0afdcf57f7c6b7d5ca5642bf" Workload="ci--4054--1--0--c--4d101ae770.novalocal-k8s-coredns--76f75df574--pm852-eth0" Sep 4 18:08:53.345777 containerd[1450]: 2024-09-04 18:08:53.331 [INFO][4882] ipam_plugin.go 358: About to acquire host-wide IPAM lock. 
Sep 4 18:08:53.345777 containerd[1450]: 2024-09-04 18:08:53.331 [INFO][4882] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 18:08:53.345777 containerd[1450]: 2024-09-04 18:08:53.339 [WARNING][4882] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="fb316156dd2462ed8f82bf2f36c041260ff7a11c0afdcf57f7c6b7d5ca5642bf" HandleID="k8s-pod-network.fb316156dd2462ed8f82bf2f36c041260ff7a11c0afdcf57f7c6b7d5ca5642bf" Workload="ci--4054--1--0--c--4d101ae770.novalocal-k8s-coredns--76f75df574--pm852-eth0" Sep 4 18:08:53.345777 containerd[1450]: 2024-09-04 18:08:53.339 [INFO][4882] ipam_plugin.go 445: Releasing address using workloadID ContainerID="fb316156dd2462ed8f82bf2f36c041260ff7a11c0afdcf57f7c6b7d5ca5642bf" HandleID="k8s-pod-network.fb316156dd2462ed8f82bf2f36c041260ff7a11c0afdcf57f7c6b7d5ca5642bf" Workload="ci--4054--1--0--c--4d101ae770.novalocal-k8s-coredns--76f75df574--pm852-eth0" Sep 4 18:08:53.345777 containerd[1450]: 2024-09-04 18:08:53.341 [INFO][4882] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 18:08:53.345777 containerd[1450]: 2024-09-04 18:08:53.342 [INFO][4867] k8s.go 621: Teardown processing complete. 
ContainerID="fb316156dd2462ed8f82bf2f36c041260ff7a11c0afdcf57f7c6b7d5ca5642bf" Sep 4 18:08:53.346758 containerd[1450]: time="2024-09-04T18:08:53.346197563Z" level=info msg="TearDown network for sandbox \"fb316156dd2462ed8f82bf2f36c041260ff7a11c0afdcf57f7c6b7d5ca5642bf\" successfully" Sep 4 18:08:53.372446 containerd[1450]: time="2024-09-04T18:08:53.372196691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-856d45844c-5k86d,Uid:778b8fbc-b656-4e94-8a56-5e7fcabc7750,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"18a8f19e84446f61f96687cece39d37d59b9f766b54cc2433541816c5e0df6db\"" Sep 4 18:08:53.377721 containerd[1450]: time="2024-09-04T18:08:53.377693817Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\"" Sep 4 18:08:53.390076 containerd[1450]: time="2024-09-04T18:08:53.389085729Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fb316156dd2462ed8f82bf2f36c041260ff7a11c0afdcf57f7c6b7d5ca5642bf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 4 18:08:53.390076 containerd[1450]: time="2024-09-04T18:08:53.389207137Z" level=info msg="RemovePodSandbox \"fb316156dd2462ed8f82bf2f36c041260ff7a11c0afdcf57f7c6b7d5ca5642bf\" returns successfully" Sep 4 18:08:53.391353 containerd[1450]: time="2024-09-04T18:08:53.390982990Z" level=info msg="StopPodSandbox for \"c8ffe77d2f8253aeae4bac52c7ca1e81a5043bea603b1c69fdbc3739bfb02850\"" Sep 4 18:08:53.403550 containerd[1450]: time="2024-09-04T18:08:53.401497785Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 18:08:53.403550 containerd[1450]: time="2024-09-04T18:08:53.401582494Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 18:08:53.403550 containerd[1450]: time="2024-09-04T18:08:53.401598854Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 18:08:53.403550 containerd[1450]: time="2024-09-04T18:08:53.403491196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 18:08:53.446183 systemd[1]: run-containerd-runc-k8s.io-255b709aa280814bb7d998417809b23fa551838c3d4fa43d303626e0f7598068-runc.Ascbsh.mount: Deactivated successfully. Sep 4 18:08:53.457816 systemd[1]: Started cri-containerd-255b709aa280814bb7d998417809b23fa551838c3d4fa43d303626e0f7598068.scope - libcontainer container 255b709aa280814bb7d998417809b23fa551838c3d4fa43d303626e0f7598068. Sep 4 18:08:53.510710 containerd[1450]: time="2024-09-04T18:08:53.510675278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-856d45844c-9fcpt,Uid:470631f0-1806-4742-9519-047c0edc7c53,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"255b709aa280814bb7d998417809b23fa551838c3d4fa43d303626e0f7598068\"" Sep 4 18:08:53.533935 containerd[1450]: 2024-09-04 18:08:53.483 [WARNING][4925] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c8ffe77d2f8253aeae4bac52c7ca1e81a5043bea603b1c69fdbc3739bfb02850" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054--1--0--c--4d101ae770.novalocal-k8s-calico--kube--controllers--7785d6cbd7--wkdsv-eth0", GenerateName:"calico-kube-controllers-7785d6cbd7-", Namespace:"calico-system", SelfLink:"", UID:"34cca147-c799-40d7-a541-5d0353aa3f8d", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 18, 8, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7785d6cbd7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054-1-0-c-4d101ae770.novalocal", ContainerID:"4d779a51d8a8325ea4df9a3bf898b1440fdbd16dadeb869aa527186fc1d49477", Pod:"calico-kube-controllers-7785d6cbd7-wkdsv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.60.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib52a2d84a54", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 18:08:53.533935 containerd[1450]: 2024-09-04 18:08:53.483 [INFO][4925] k8s.go 608: Cleaning up netns ContainerID="c8ffe77d2f8253aeae4bac52c7ca1e81a5043bea603b1c69fdbc3739bfb02850" Sep 4 18:08:53.533935 containerd[1450]: 2024-09-04 18:08:53.483 [INFO][4925] dataplane_linux.go 526: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="c8ffe77d2f8253aeae4bac52c7ca1e81a5043bea603b1c69fdbc3739bfb02850" iface="eth0" netns="" Sep 4 18:08:53.533935 containerd[1450]: 2024-09-04 18:08:53.483 [INFO][4925] k8s.go 615: Releasing IP address(es) ContainerID="c8ffe77d2f8253aeae4bac52c7ca1e81a5043bea603b1c69fdbc3739bfb02850" Sep 4 18:08:53.533935 containerd[1450]: 2024-09-04 18:08:53.483 [INFO][4925] utils.go 188: Calico CNI releasing IP address ContainerID="c8ffe77d2f8253aeae4bac52c7ca1e81a5043bea603b1c69fdbc3739bfb02850" Sep 4 18:08:53.533935 containerd[1450]: 2024-09-04 18:08:53.521 [INFO][4945] ipam_plugin.go 417: Releasing address using handleID ContainerID="c8ffe77d2f8253aeae4bac52c7ca1e81a5043bea603b1c69fdbc3739bfb02850" HandleID="k8s-pod-network.c8ffe77d2f8253aeae4bac52c7ca1e81a5043bea603b1c69fdbc3739bfb02850" Workload="ci--4054--1--0--c--4d101ae770.novalocal-k8s-calico--kube--controllers--7785d6cbd7--wkdsv-eth0" Sep 4 18:08:53.533935 containerd[1450]: 2024-09-04 18:08:53.521 [INFO][4945] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 18:08:53.533935 containerd[1450]: 2024-09-04 18:08:53.521 [INFO][4945] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 18:08:53.533935 containerd[1450]: 2024-09-04 18:08:53.529 [WARNING][4945] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c8ffe77d2f8253aeae4bac52c7ca1e81a5043bea603b1c69fdbc3739bfb02850" HandleID="k8s-pod-network.c8ffe77d2f8253aeae4bac52c7ca1e81a5043bea603b1c69fdbc3739bfb02850" Workload="ci--4054--1--0--c--4d101ae770.novalocal-k8s-calico--kube--controllers--7785d6cbd7--wkdsv-eth0" Sep 4 18:08:53.533935 containerd[1450]: 2024-09-04 18:08:53.529 [INFO][4945] ipam_plugin.go 445: Releasing address using workloadID ContainerID="c8ffe77d2f8253aeae4bac52c7ca1e81a5043bea603b1c69fdbc3739bfb02850" HandleID="k8s-pod-network.c8ffe77d2f8253aeae4bac52c7ca1e81a5043bea603b1c69fdbc3739bfb02850" Workload="ci--4054--1--0--c--4d101ae770.novalocal-k8s-calico--kube--controllers--7785d6cbd7--wkdsv-eth0" Sep 4 18:08:53.533935 containerd[1450]: 2024-09-04 18:08:53.530 [INFO][4945] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 18:08:53.533935 containerd[1450]: 2024-09-04 18:08:53.532 [INFO][4925] k8s.go 621: Teardown processing complete. ContainerID="c8ffe77d2f8253aeae4bac52c7ca1e81a5043bea603b1c69fdbc3739bfb02850" Sep 4 18:08:53.533935 containerd[1450]: time="2024-09-04T18:08:53.533864721Z" level=info msg="TearDown network for sandbox \"c8ffe77d2f8253aeae4bac52c7ca1e81a5043bea603b1c69fdbc3739bfb02850\" successfully" Sep 4 18:08:53.533935 containerd[1450]: time="2024-09-04T18:08:53.533889007Z" level=info msg="StopPodSandbox for \"c8ffe77d2f8253aeae4bac52c7ca1e81a5043bea603b1c69fdbc3739bfb02850\" returns successfully" Sep 4 18:08:53.535539 containerd[1450]: time="2024-09-04T18:08:53.534916154Z" level=info msg="RemovePodSandbox for \"c8ffe77d2f8253aeae4bac52c7ca1e81a5043bea603b1c69fdbc3739bfb02850\"" Sep 4 18:08:53.535539 containerd[1450]: time="2024-09-04T18:08:53.534943896Z" level=info msg="Forcibly stopping sandbox \"c8ffe77d2f8253aeae4bac52c7ca1e81a5043bea603b1c69fdbc3739bfb02850\"" Sep 4 18:08:53.616017 containerd[1450]: 2024-09-04 18:08:53.574 [WARNING][4970] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c8ffe77d2f8253aeae4bac52c7ca1e81a5043bea603b1c69fdbc3739bfb02850" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054--1--0--c--4d101ae770.novalocal-k8s-calico--kube--controllers--7785d6cbd7--wkdsv-eth0", GenerateName:"calico-kube-controllers-7785d6cbd7-", Namespace:"calico-system", SelfLink:"", UID:"34cca147-c799-40d7-a541-5d0353aa3f8d", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 18, 8, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7785d6cbd7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054-1-0-c-4d101ae770.novalocal", ContainerID:"4d779a51d8a8325ea4df9a3bf898b1440fdbd16dadeb869aa527186fc1d49477", Pod:"calico-kube-controllers-7785d6cbd7-wkdsv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.60.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib52a2d84a54", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 18:08:53.616017 containerd[1450]: 2024-09-04 18:08:53.575 [INFO][4970] k8s.go 608: Cleaning up netns ContainerID="c8ffe77d2f8253aeae4bac52c7ca1e81a5043bea603b1c69fdbc3739bfb02850" Sep 4 18:08:53.616017 containerd[1450]: 2024-09-04 18:08:53.575 [INFO][4970] dataplane_linux.go 526: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="c8ffe77d2f8253aeae4bac52c7ca1e81a5043bea603b1c69fdbc3739bfb02850" iface="eth0" netns="" Sep 4 18:08:53.616017 containerd[1450]: 2024-09-04 18:08:53.575 [INFO][4970] k8s.go 615: Releasing IP address(es) ContainerID="c8ffe77d2f8253aeae4bac52c7ca1e81a5043bea603b1c69fdbc3739bfb02850" Sep 4 18:08:53.616017 containerd[1450]: 2024-09-04 18:08:53.575 [INFO][4970] utils.go 188: Calico CNI releasing IP address ContainerID="c8ffe77d2f8253aeae4bac52c7ca1e81a5043bea603b1c69fdbc3739bfb02850" Sep 4 18:08:53.616017 containerd[1450]: 2024-09-04 18:08:53.601 [INFO][4976] ipam_plugin.go 417: Releasing address using handleID ContainerID="c8ffe77d2f8253aeae4bac52c7ca1e81a5043bea603b1c69fdbc3739bfb02850" HandleID="k8s-pod-network.c8ffe77d2f8253aeae4bac52c7ca1e81a5043bea603b1c69fdbc3739bfb02850" Workload="ci--4054--1--0--c--4d101ae770.novalocal-k8s-calico--kube--controllers--7785d6cbd7--wkdsv-eth0" Sep 4 18:08:53.616017 containerd[1450]: 2024-09-04 18:08:53.601 [INFO][4976] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 18:08:53.616017 containerd[1450]: 2024-09-04 18:08:53.601 [INFO][4976] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 18:08:53.616017 containerd[1450]: 2024-09-04 18:08:53.609 [WARNING][4976] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c8ffe77d2f8253aeae4bac52c7ca1e81a5043bea603b1c69fdbc3739bfb02850" HandleID="k8s-pod-network.c8ffe77d2f8253aeae4bac52c7ca1e81a5043bea603b1c69fdbc3739bfb02850" Workload="ci--4054--1--0--c--4d101ae770.novalocal-k8s-calico--kube--controllers--7785d6cbd7--wkdsv-eth0" Sep 4 18:08:53.616017 containerd[1450]: 2024-09-04 18:08:53.609 [INFO][4976] ipam_plugin.go 445: Releasing address using workloadID ContainerID="c8ffe77d2f8253aeae4bac52c7ca1e81a5043bea603b1c69fdbc3739bfb02850" HandleID="k8s-pod-network.c8ffe77d2f8253aeae4bac52c7ca1e81a5043bea603b1c69fdbc3739bfb02850" Workload="ci--4054--1--0--c--4d101ae770.novalocal-k8s-calico--kube--controllers--7785d6cbd7--wkdsv-eth0" Sep 4 18:08:53.616017 containerd[1450]: 2024-09-04 18:08:53.612 [INFO][4976] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 18:08:53.616017 containerd[1450]: 2024-09-04 18:08:53.613 [INFO][4970] k8s.go 621: Teardown processing complete. ContainerID="c8ffe77d2f8253aeae4bac52c7ca1e81a5043bea603b1c69fdbc3739bfb02850" Sep 4 18:08:53.616017 containerd[1450]: time="2024-09-04T18:08:53.615891726Z" level=info msg="TearDown network for sandbox \"c8ffe77d2f8253aeae4bac52c7ca1e81a5043bea603b1c69fdbc3739bfb02850\" successfully" Sep 4 18:08:53.620510 containerd[1450]: time="2024-09-04T18:08:53.620474958Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c8ffe77d2f8253aeae4bac52c7ca1e81a5043bea603b1c69fdbc3739bfb02850\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 18:08:53.620589 containerd[1450]: time="2024-09-04T18:08:53.620533648Z" level=info msg="RemovePodSandbox \"c8ffe77d2f8253aeae4bac52c7ca1e81a5043bea603b1c69fdbc3739bfb02850\" returns successfully"
Sep 4 18:08:53.621016 containerd[1450]: time="2024-09-04T18:08:53.620992179Z" level=info msg="StopPodSandbox for \"991bfe5e3832bfa15ebaa43599e1810058343de6981695890994ba177d479285\""
Sep 4 18:08:53.699090 containerd[1450]: 2024-09-04 18:08:53.660 [WARNING][4994] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="991bfe5e3832bfa15ebaa43599e1810058343de6981695890994ba177d479285" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054--1--0--c--4d101ae770.novalocal-k8s-csi--node--driver--7r577-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cd6b6d64-badf-4e22-9ca4-6086c67f1ef2", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 18, 8, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054-1-0-c-4d101ae770.novalocal", ContainerID:"a4b5ce06179967b9d76649d07088b9162f135faa9b04dfcf5b6112c224a6b54a", Pod:"csi-node-driver-7r577", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.60.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali8ac2c5576f2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Sep 4 18:08:53.699090 containerd[1450]: 2024-09-04 18:08:53.660 [INFO][4994] k8s.go 608: Cleaning up netns ContainerID="991bfe5e3832bfa15ebaa43599e1810058343de6981695890994ba177d479285"
Sep 4 18:08:53.699090 containerd[1450]: 2024-09-04 18:08:53.660 [INFO][4994] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="991bfe5e3832bfa15ebaa43599e1810058343de6981695890994ba177d479285" iface="eth0" netns=""
Sep 4 18:08:53.699090 containerd[1450]: 2024-09-04 18:08:53.660 [INFO][4994] k8s.go 615: Releasing IP address(es) ContainerID="991bfe5e3832bfa15ebaa43599e1810058343de6981695890994ba177d479285"
Sep 4 18:08:53.699090 containerd[1450]: 2024-09-04 18:08:53.660 [INFO][4994] utils.go 188: Calico CNI releasing IP address ContainerID="991bfe5e3832bfa15ebaa43599e1810058343de6981695890994ba177d479285"
Sep 4 18:08:53.699090 containerd[1450]: 2024-09-04 18:08:53.685 [INFO][5000] ipam_plugin.go 417: Releasing address using handleID ContainerID="991bfe5e3832bfa15ebaa43599e1810058343de6981695890994ba177d479285" HandleID="k8s-pod-network.991bfe5e3832bfa15ebaa43599e1810058343de6981695890994ba177d479285" Workload="ci--4054--1--0--c--4d101ae770.novalocal-k8s-csi--node--driver--7r577-eth0"
Sep 4 18:08:53.699090 containerd[1450]: 2024-09-04 18:08:53.685 [INFO][5000] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Sep 4 18:08:53.699090 containerd[1450]: 2024-09-04 18:08:53.685 [INFO][5000] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Sep 4 18:08:53.699090 containerd[1450]: 2024-09-04 18:08:53.693 [WARNING][5000] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="991bfe5e3832bfa15ebaa43599e1810058343de6981695890994ba177d479285" HandleID="k8s-pod-network.991bfe5e3832bfa15ebaa43599e1810058343de6981695890994ba177d479285" Workload="ci--4054--1--0--c--4d101ae770.novalocal-k8s-csi--node--driver--7r577-eth0"
Sep 4 18:08:53.699090 containerd[1450]: 2024-09-04 18:08:53.694 [INFO][5000] ipam_plugin.go 445: Releasing address using workloadID ContainerID="991bfe5e3832bfa15ebaa43599e1810058343de6981695890994ba177d479285" HandleID="k8s-pod-network.991bfe5e3832bfa15ebaa43599e1810058343de6981695890994ba177d479285" Workload="ci--4054--1--0--c--4d101ae770.novalocal-k8s-csi--node--driver--7r577-eth0"
Sep 4 18:08:53.699090 containerd[1450]: 2024-09-04 18:08:53.696 [INFO][5000] ipam_plugin.go 379: Released host-wide IPAM lock.
Sep 4 18:08:53.699090 containerd[1450]: 2024-09-04 18:08:53.697 [INFO][4994] k8s.go 621: Teardown processing complete. ContainerID="991bfe5e3832bfa15ebaa43599e1810058343de6981695890994ba177d479285"
Sep 4 18:08:53.699521 containerd[1450]: time="2024-09-04T18:08:53.699147256Z" level=info msg="TearDown network for sandbox \"991bfe5e3832bfa15ebaa43599e1810058343de6981695890994ba177d479285\" successfully"
Sep 4 18:08:53.699521 containerd[1450]: time="2024-09-04T18:08:53.699171662Z" level=info msg="StopPodSandbox for \"991bfe5e3832bfa15ebaa43599e1810058343de6981695890994ba177d479285\" returns successfully"
Sep 4 18:08:53.699712 containerd[1450]: time="2024-09-04T18:08:53.699649619Z" level=info msg="RemovePodSandbox for \"991bfe5e3832bfa15ebaa43599e1810058343de6981695890994ba177d479285\""
Sep 4 18:08:53.699768 containerd[1450]: time="2024-09-04T18:08:53.699717828Z" level=info msg="Forcibly stopping sandbox \"991bfe5e3832bfa15ebaa43599e1810058343de6981695890994ba177d479285\""
Sep 4 18:08:53.773588 containerd[1450]: 2024-09-04 18:08:53.738 [WARNING][5018] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="991bfe5e3832bfa15ebaa43599e1810058343de6981695890994ba177d479285" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054--1--0--c--4d101ae770.novalocal-k8s-csi--node--driver--7r577-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cd6b6d64-badf-4e22-9ca4-6086c67f1ef2", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 18, 8, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054-1-0-c-4d101ae770.novalocal", ContainerID:"a4b5ce06179967b9d76649d07088b9162f135faa9b04dfcf5b6112c224a6b54a", Pod:"csi-node-driver-7r577", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.60.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali8ac2c5576f2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Sep 4 18:08:53.773588 containerd[1450]: 2024-09-04 18:08:53.738 [INFO][5018] k8s.go 608: Cleaning up netns ContainerID="991bfe5e3832bfa15ebaa43599e1810058343de6981695890994ba177d479285"
Sep 4 18:08:53.773588 containerd[1450]: 2024-09-04 18:08:53.739 [INFO][5018] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="991bfe5e3832bfa15ebaa43599e1810058343de6981695890994ba177d479285" iface="eth0" netns=""
Sep 4 18:08:53.773588 containerd[1450]: 2024-09-04 18:08:53.739 [INFO][5018] k8s.go 615: Releasing IP address(es) ContainerID="991bfe5e3832bfa15ebaa43599e1810058343de6981695890994ba177d479285"
Sep 4 18:08:53.773588 containerd[1450]: 2024-09-04 18:08:53.739 [INFO][5018] utils.go 188: Calico CNI releasing IP address ContainerID="991bfe5e3832bfa15ebaa43599e1810058343de6981695890994ba177d479285"
Sep 4 18:08:53.773588 containerd[1450]: 2024-09-04 18:08:53.761 [INFO][5025] ipam_plugin.go 417: Releasing address using handleID ContainerID="991bfe5e3832bfa15ebaa43599e1810058343de6981695890994ba177d479285" HandleID="k8s-pod-network.991bfe5e3832bfa15ebaa43599e1810058343de6981695890994ba177d479285" Workload="ci--4054--1--0--c--4d101ae770.novalocal-k8s-csi--node--driver--7r577-eth0"
Sep 4 18:08:53.773588 containerd[1450]: 2024-09-04 18:08:53.761 [INFO][5025] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Sep 4 18:08:53.773588 containerd[1450]: 2024-09-04 18:08:53.761 [INFO][5025] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Sep 4 18:08:53.773588 containerd[1450]: 2024-09-04 18:08:53.768 [WARNING][5025] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="991bfe5e3832bfa15ebaa43599e1810058343de6981695890994ba177d479285" HandleID="k8s-pod-network.991bfe5e3832bfa15ebaa43599e1810058343de6981695890994ba177d479285" Workload="ci--4054--1--0--c--4d101ae770.novalocal-k8s-csi--node--driver--7r577-eth0"
Sep 4 18:08:53.773588 containerd[1450]: 2024-09-04 18:08:53.768 [INFO][5025] ipam_plugin.go 445: Releasing address using workloadID ContainerID="991bfe5e3832bfa15ebaa43599e1810058343de6981695890994ba177d479285" HandleID="k8s-pod-network.991bfe5e3832bfa15ebaa43599e1810058343de6981695890994ba177d479285" Workload="ci--4054--1--0--c--4d101ae770.novalocal-k8s-csi--node--driver--7r577-eth0"
Sep 4 18:08:53.773588 containerd[1450]: 2024-09-04 18:08:53.770 [INFO][5025] ipam_plugin.go 379: Released host-wide IPAM lock.
Sep 4 18:08:53.773588 containerd[1450]: 2024-09-04 18:08:53.772 [INFO][5018] k8s.go 621: Teardown processing complete. ContainerID="991bfe5e3832bfa15ebaa43599e1810058343de6981695890994ba177d479285"
Sep 4 18:08:53.775494 containerd[1450]: time="2024-09-04T18:08:53.773618910Z" level=info msg="TearDown network for sandbox \"991bfe5e3832bfa15ebaa43599e1810058343de6981695890994ba177d479285\" successfully"
Sep 4 18:08:53.788766 containerd[1450]: time="2024-09-04T18:08:53.788554082Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"991bfe5e3832bfa15ebaa43599e1810058343de6981695890994ba177d479285\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 4 18:08:53.788766 containerd[1450]: time="2024-09-04T18:08:53.788630776Z" level=info msg="RemovePodSandbox \"991bfe5e3832bfa15ebaa43599e1810058343de6981695890994ba177d479285\" returns successfully"
Sep 4 18:08:53.793222 containerd[1450]: time="2024-09-04T18:08:53.793173982Z" level=info msg="StopPodSandbox for \"ae23639b8808a05e6b75b18adb0b97a889dd3f4d640ae43387800929b5896d36\""
Sep 4 18:08:53.793380 containerd[1450]: time="2024-09-04T18:08:53.793300711Z" level=info msg="TearDown network for sandbox \"ae23639b8808a05e6b75b18adb0b97a889dd3f4d640ae43387800929b5896d36\" successfully"
Sep 4 18:08:53.793380 containerd[1450]: time="2024-09-04T18:08:53.793341858Z" level=info msg="StopPodSandbox for \"ae23639b8808a05e6b75b18adb0b97a889dd3f4d640ae43387800929b5896d36\" returns successfully"
Sep 4 18:08:53.794339 containerd[1450]: time="2024-09-04T18:08:53.793800709Z" level=info msg="RemovePodSandbox for \"ae23639b8808a05e6b75b18adb0b97a889dd3f4d640ae43387800929b5896d36\""
Sep 4 18:08:53.794339 containerd[1450]: time="2024-09-04T18:08:53.793835173Z" level=info msg="Forcibly stopping sandbox \"ae23639b8808a05e6b75b18adb0b97a889dd3f4d640ae43387800929b5896d36\""
Sep 4 18:08:53.794339 containerd[1450]: time="2024-09-04T18:08:53.793902279Z" level=info msg="TearDown network for sandbox \"ae23639b8808a05e6b75b18adb0b97a889dd3f4d640ae43387800929b5896d36\" successfully"
Sep 4 18:08:53.803677 containerd[1450]: time="2024-09-04T18:08:53.801564941Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ae23639b8808a05e6b75b18adb0b97a889dd3f4d640ae43387800929b5896d36\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 4 18:08:53.803677 containerd[1450]: time="2024-09-04T18:08:53.801668776Z" level=info msg="RemovePodSandbox \"ae23639b8808a05e6b75b18adb0b97a889dd3f4d640ae43387800929b5896d36\" returns successfully"
Sep 4 18:08:54.164851 systemd-networkd[1346]: cali7b634f248de: Gained IPv6LL
Sep 4 18:08:55.187892 systemd-networkd[1346]: cali02cc8df41a4: Gained IPv6LL
Sep 4 18:08:57.154140 containerd[1450]: time="2024-09-04T18:08:57.152891227Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 18:08:57.156346 containerd[1450]: time="2024-09-04T18:08:57.156288162Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=40419849"
Sep 4 18:08:57.157848 containerd[1450]: time="2024-09-04T18:08:57.157796783Z" level=info msg="ImageCreate event name:\"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 18:08:57.161968 containerd[1450]: time="2024-09-04T18:08:57.161916736Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 18:08:57.162946 containerd[1450]: time="2024-09-04T18:08:57.162902726Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"41912266\" in 3.785052445s"
Sep 4 18:08:57.163004 containerd[1450]: time="2024-09-04T18:08:57.162947150Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\""
Sep 4 18:08:57.165828 containerd[1450]: time="2024-09-04T18:08:57.165156957Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\""
Sep 4 18:08:57.166810 containerd[1450]: time="2024-09-04T18:08:57.166785644Z" level=info msg="CreateContainer within sandbox \"18a8f19e84446f61f96687cece39d37d59b9f766b54cc2433541816c5e0df6db\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 4 18:08:57.187239 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount994857358.mount: Deactivated successfully.
Sep 4 18:08:57.191808 containerd[1450]: time="2024-09-04T18:08:57.191755385Z" level=info msg="CreateContainer within sandbox \"18a8f19e84446f61f96687cece39d37d59b9f766b54cc2433541816c5e0df6db\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c45315f6b9d3e4b3bc525360ffec408d2d54cde65aa2ad821cce65b546eb89b0\""
Sep 4 18:08:57.193747 containerd[1450]: time="2024-09-04T18:08:57.192596825Z" level=info msg="StartContainer for \"c45315f6b9d3e4b3bc525360ffec408d2d54cde65aa2ad821cce65b546eb89b0\""
Sep 4 18:08:57.236907 systemd[1]: Started cri-containerd-c45315f6b9d3e4b3bc525360ffec408d2d54cde65aa2ad821cce65b546eb89b0.scope - libcontainer container c45315f6b9d3e4b3bc525360ffec408d2d54cde65aa2ad821cce65b546eb89b0.
Sep 4 18:08:57.312389 containerd[1450]: time="2024-09-04T18:08:57.312335164Z" level=info msg="StartContainer for \"c45315f6b9d3e4b3bc525360ffec408d2d54cde65aa2ad821cce65b546eb89b0\" returns successfully"
Sep 4 18:08:57.568730 containerd[1450]: time="2024-09-04T18:08:57.567733075Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 18:08:57.569380 containerd[1450]: time="2024-09-04T18:08:57.569342365Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=77"
Sep 4 18:08:57.572649 containerd[1450]: time="2024-09-04T18:08:57.572603565Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"41912266\" in 406.620347ms"
Sep 4 18:08:57.572649 containerd[1450]: time="2024-09-04T18:08:57.572697992Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\""
Sep 4 18:08:57.576616 containerd[1450]: time="2024-09-04T18:08:57.576434485Z" level=info msg="CreateContainer within sandbox \"255b709aa280814bb7d998417809b23fa551838c3d4fa43d303626e0f7598068\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 4 18:08:57.604905 containerd[1450]: time="2024-09-04T18:08:57.604853080Z" level=info msg="CreateContainer within sandbox \"255b709aa280814bb7d998417809b23fa551838c3d4fa43d303626e0f7598068\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7c7a7076bde1c649d629c60d7c7a88a01936fa3d0ca095d6809a993dd194a0e9\""
Sep 4 18:08:57.605790 containerd[1450]: time="2024-09-04T18:08:57.605735987Z" level=info msg="StartContainer for \"7c7a7076bde1c649d629c60d7c7a88a01936fa3d0ca095d6809a993dd194a0e9\""
Sep 4 18:08:57.642922 systemd[1]: Started cri-containerd-7c7a7076bde1c649d629c60d7c7a88a01936fa3d0ca095d6809a993dd194a0e9.scope - libcontainer container 7c7a7076bde1c649d629c60d7c7a88a01936fa3d0ca095d6809a993dd194a0e9.
Sep 4 18:08:57.709947 containerd[1450]: time="2024-09-04T18:08:57.709885310Z" level=info msg="StartContainer for \"7c7a7076bde1c649d629c60d7c7a88a01936fa3d0ca095d6809a993dd194a0e9\" returns successfully"
Sep 4 18:08:58.182745 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount290751211.mount: Deactivated successfully.
Sep 4 18:08:58.224272 kubelet[2664]: I0904 18:08:58.224217 2664 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-856d45844c-9fcpt" podStartSLOduration=3.16390689 podStartE2EDuration="7.224151007s" podCreationTimestamp="2024-09-04 18:08:51 +0000 UTC" firstStartedPulling="2024-09-04 18:08:53.512920822 +0000 UTC m=+61.248705668" lastFinishedPulling="2024-09-04 18:08:57.573164929 +0000 UTC m=+65.308949785" observedRunningTime="2024-09-04 18:08:58.207338614 +0000 UTC m=+65.943123470" watchObservedRunningTime="2024-09-04 18:08:58.224151007 +0000 UTC m=+65.959935863"
Sep 4 18:08:58.259797 kubelet[2664]: I0904 18:08:58.259752 2664 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-856d45844c-5k86d" podStartSLOduration=3.473785341 podStartE2EDuration="7.259637015s" podCreationTimestamp="2024-09-04 18:08:51 +0000 UTC" firstStartedPulling="2024-09-04 18:08:53.377409795 +0000 UTC m=+61.113194651" lastFinishedPulling="2024-09-04 18:08:57.163261469 +0000 UTC m=+64.899046325" observedRunningTime="2024-09-04 18:08:58.22609792 +0000 UTC m=+65.961882776" watchObservedRunningTime="2024-09-04 18:08:58.259637015 +0000 UTC m=+65.995421871"
Sep 4 18:09:09.352746 systemd[1]: Started sshd@9-172.24.4.134:22-172.24.4.1:60664.service - OpenSSH per-connection server daemon (172.24.4.1:60664).
Sep 4 18:09:10.626246 sshd[5158]: Accepted publickey for core from 172.24.4.1 port 60664 ssh2: RSA SHA256:JnA7Fh8lVkr6ENifNOXj431OPLJBOL+/PI8dMas4Eok
Sep 4 18:09:10.635077 sshd[5158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 18:09:10.654553 systemd-logind[1430]: New session 12 of user core.
Sep 4 18:09:10.660093 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 4 18:09:12.484934 sshd[5158]: pam_unix(sshd:session): session closed for user core
Sep 4 18:09:12.491396 systemd[1]: sshd@9-172.24.4.134:22-172.24.4.1:60664.service: Deactivated successfully.
Sep 4 18:09:12.496902 systemd[1]: session-12.scope: Deactivated successfully.
Sep 4 18:09:12.498399 systemd-logind[1430]: Session 12 logged out. Waiting for processes to exit.
Sep 4 18:09:12.500837 systemd-logind[1430]: Removed session 12.
Sep 4 18:09:17.519691 systemd[1]: Started sshd@10-172.24.4.134:22-172.24.4.1:54338.service - OpenSSH per-connection server daemon (172.24.4.1:54338).
Sep 4 18:09:18.972635 sshd[5203]: Accepted publickey for core from 172.24.4.1 port 54338 ssh2: RSA SHA256:JnA7Fh8lVkr6ENifNOXj431OPLJBOL+/PI8dMas4Eok
Sep 4 18:09:18.981208 sshd[5203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 18:09:19.002143 systemd-logind[1430]: New session 13 of user core.
Sep 4 18:09:19.008168 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 4 18:09:20.311831 sshd[5203]: pam_unix(sshd:session): session closed for user core
Sep 4 18:09:20.317322 systemd[1]: sshd@10-172.24.4.134:22-172.24.4.1:54338.service: Deactivated successfully.
Sep 4 18:09:20.320596 systemd[1]: session-13.scope: Deactivated successfully.
Sep 4 18:09:20.322272 systemd-logind[1430]: Session 13 logged out. Waiting for processes to exit.
Sep 4 18:09:20.324999 systemd-logind[1430]: Removed session 13.
Sep 4 18:09:25.335393 systemd[1]: Started sshd@11-172.24.4.134:22-172.24.4.1:40822.service - OpenSSH per-connection server daemon (172.24.4.1:40822).
Sep 4 18:09:26.739815 sshd[5229]: Accepted publickey for core from 172.24.4.1 port 40822 ssh2: RSA SHA256:JnA7Fh8lVkr6ENifNOXj431OPLJBOL+/PI8dMas4Eok
Sep 4 18:09:26.744584 sshd[5229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 18:09:26.754151 systemd-logind[1430]: New session 14 of user core.
Sep 4 18:09:26.761010 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 4 18:09:27.592026 sshd[5229]: pam_unix(sshd:session): session closed for user core
Sep 4 18:09:27.604140 systemd[1]: sshd@11-172.24.4.134:22-172.24.4.1:40822.service: Deactivated successfully.
Sep 4 18:09:27.608360 systemd[1]: session-14.scope: Deactivated successfully.
Sep 4 18:09:27.612496 systemd-logind[1430]: Session 14 logged out. Waiting for processes to exit.
Sep 4 18:09:27.619935 systemd[1]: Started sshd@12-172.24.4.134:22-172.24.4.1:40828.service - OpenSSH per-connection server daemon (172.24.4.1:40828).
Sep 4 18:09:27.623757 systemd-logind[1430]: Removed session 14.
Sep 4 18:09:29.182997 sshd[5243]: Accepted publickey for core from 172.24.4.1 port 40828 ssh2: RSA SHA256:JnA7Fh8lVkr6ENifNOXj431OPLJBOL+/PI8dMas4Eok
Sep 4 18:09:29.185940 sshd[5243]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 18:09:29.196724 systemd-logind[1430]: New session 15 of user core.
Sep 4 18:09:29.202952 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 4 18:09:30.437333 sshd[5243]: pam_unix(sshd:session): session closed for user core
Sep 4 18:09:30.445427 systemd[1]: sshd@12-172.24.4.134:22-172.24.4.1:40828.service: Deactivated successfully.
Sep 4 18:09:30.447939 systemd[1]: session-15.scope: Deactivated successfully.
Sep 4 18:09:30.451332 systemd-logind[1430]: Session 15 logged out. Waiting for processes to exit.
Sep 4 18:09:30.460224 systemd[1]: Started sshd@13-172.24.4.134:22-172.24.4.1:40844.service - OpenSSH per-connection server daemon (172.24.4.1:40844).
Sep 4 18:09:30.462695 systemd-logind[1430]: Removed session 15.
Sep 4 18:09:31.846638 sshd[5274]: Accepted publickey for core from 172.24.4.1 port 40844 ssh2: RSA SHA256:JnA7Fh8lVkr6ENifNOXj431OPLJBOL+/PI8dMas4Eok
Sep 4 18:09:31.849751 sshd[5274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 18:09:31.859961 systemd-logind[1430]: New session 16 of user core.
Sep 4 18:09:31.867045 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 4 18:09:32.625400 sshd[5274]: pam_unix(sshd:session): session closed for user core
Sep 4 18:09:32.628557 systemd[1]: sshd@13-172.24.4.134:22-172.24.4.1:40844.service: Deactivated successfully.
Sep 4 18:09:32.631008 systemd[1]: session-16.scope: Deactivated successfully.
Sep 4 18:09:32.633544 systemd-logind[1430]: Session 16 logged out. Waiting for processes to exit.
Sep 4 18:09:32.634890 systemd-logind[1430]: Removed session 16.
Sep 4 18:09:37.654412 systemd[1]: Started sshd@14-172.24.4.134:22-172.24.4.1:57396.service - OpenSSH per-connection server daemon (172.24.4.1:57396).
Sep 4 18:09:38.913988 sshd[5317]: Accepted publickey for core from 172.24.4.1 port 57396 ssh2: RSA SHA256:JnA7Fh8lVkr6ENifNOXj431OPLJBOL+/PI8dMas4Eok
Sep 4 18:09:38.917468 sshd[5317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 18:09:38.937096 systemd-logind[1430]: New session 17 of user core.
Sep 4 18:09:38.947584 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 4 18:09:39.933782 sshd[5317]: pam_unix(sshd:session): session closed for user core
Sep 4 18:09:39.945645 systemd[1]: sshd@14-172.24.4.134:22-172.24.4.1:57396.service: Deactivated successfully.
Sep 4 18:09:39.950582 systemd[1]: session-17.scope: Deactivated successfully.
Sep 4 18:09:39.952826 systemd-logind[1430]: Session 17 logged out. Waiting for processes to exit.
Sep 4 18:09:39.954885 systemd-logind[1430]: Removed session 17.
Sep 4 18:09:44.946143 systemd[1]: Started sshd@15-172.24.4.134:22-172.24.4.1:46740.service - OpenSSH per-connection server daemon (172.24.4.1:46740).
Sep 4 18:09:46.203297 sshd[5330]: Accepted publickey for core from 172.24.4.1 port 46740 ssh2: RSA SHA256:JnA7Fh8lVkr6ENifNOXj431OPLJBOL+/PI8dMas4Eok
Sep 4 18:09:46.207980 sshd[5330]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 18:09:46.214980 systemd-logind[1430]: New session 18 of user core.
Sep 4 18:09:46.222100 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 4 18:09:47.147287 sshd[5330]: pam_unix(sshd:session): session closed for user core
Sep 4 18:09:47.175533 systemd[1]: sshd@15-172.24.4.134:22-172.24.4.1:46740.service: Deactivated successfully.
Sep 4 18:09:47.180477 systemd[1]: session-18.scope: Deactivated successfully.
Sep 4 18:09:47.183269 systemd-logind[1430]: Session 18 logged out. Waiting for processes to exit.
Sep 4 18:09:47.185327 systemd-logind[1430]: Removed session 18.
Sep 4 18:09:52.172388 systemd[1]: Started sshd@16-172.24.4.134:22-172.24.4.1:46744.service - OpenSSH per-connection server daemon (172.24.4.1:46744).
Sep 4 18:09:53.507038 sshd[5369]: Accepted publickey for core from 172.24.4.1 port 46744 ssh2: RSA SHA256:JnA7Fh8lVkr6ENifNOXj431OPLJBOL+/PI8dMas4Eok
Sep 4 18:09:53.509891 sshd[5369]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 18:09:53.522012 systemd-logind[1430]: New session 19 of user core.
Sep 4 18:09:53.527054 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 4 18:09:54.571330 sshd[5369]: pam_unix(sshd:session): session closed for user core
Sep 4 18:09:54.583287 systemd[1]: sshd@16-172.24.4.134:22-172.24.4.1:46744.service: Deactivated successfully.
Sep 4 18:09:54.588772 systemd[1]: session-19.scope: Deactivated successfully.
Sep 4 18:09:54.595841 systemd-logind[1430]: Session 19 logged out. Waiting for processes to exit.
Sep 4 18:09:54.605275 systemd[1]: Started sshd@17-172.24.4.134:22-172.24.4.1:52498.service - OpenSSH per-connection server daemon (172.24.4.1:52498).
Sep 4 18:09:54.609177 systemd-logind[1430]: Removed session 19.
Sep 4 18:09:55.893120 sshd[5385]: Accepted publickey for core from 172.24.4.1 port 52498 ssh2: RSA SHA256:JnA7Fh8lVkr6ENifNOXj431OPLJBOL+/PI8dMas4Eok
Sep 4 18:09:55.895946 sshd[5385]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 18:09:55.906519 systemd-logind[1430]: New session 20 of user core.
Sep 4 18:09:55.912996 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 4 18:09:57.619550 sshd[5385]: pam_unix(sshd:session): session closed for user core
Sep 4 18:09:57.629329 systemd[1]: Started sshd@18-172.24.4.134:22-172.24.4.1:52500.service - OpenSSH per-connection server daemon (172.24.4.1:52500).
Sep 4 18:09:57.638221 systemd[1]: sshd@17-172.24.4.134:22-172.24.4.1:52498.service: Deactivated successfully.
Sep 4 18:09:57.644519 systemd[1]: session-20.scope: Deactivated successfully.
Sep 4 18:09:57.646910 systemd-logind[1430]: Session 20 logged out. Waiting for processes to exit.
Sep 4 18:09:57.648647 systemd-logind[1430]: Removed session 20.
Sep 4 18:09:58.950816 sshd[5399]: Accepted publickey for core from 172.24.4.1 port 52500 ssh2: RSA SHA256:JnA7Fh8lVkr6ENifNOXj431OPLJBOL+/PI8dMas4Eok
Sep 4 18:09:58.956170 sshd[5399]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 18:09:58.966173 systemd-logind[1430]: New session 21 of user core.
Sep 4 18:09:58.977268 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 4 18:10:02.715020 sshd[5399]: pam_unix(sshd:session): session closed for user core
Sep 4 18:10:02.730371 systemd[1]: Started sshd@19-172.24.4.134:22-172.24.4.1:52506.service - OpenSSH per-connection server daemon (172.24.4.1:52506).
Sep 4 18:10:02.754411 systemd[1]: sshd@18-172.24.4.134:22-172.24.4.1:52500.service: Deactivated successfully.
Sep 4 18:10:02.762059 systemd[1]: session-21.scope: Deactivated successfully.
Sep 4 18:10:02.767573 systemd-logind[1430]: Session 21 logged out. Waiting for processes to exit.
Sep 4 18:10:02.772106 systemd-logind[1430]: Removed session 21.
Sep 4 18:10:04.283117 sshd[5442]: Accepted publickey for core from 172.24.4.1 port 52506 ssh2: RSA SHA256:JnA7Fh8lVkr6ENifNOXj431OPLJBOL+/PI8dMas4Eok
Sep 4 18:10:04.288437 sshd[5442]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 18:10:04.302171 systemd-logind[1430]: New session 22 of user core.
Sep 4 18:10:04.308980 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 4 18:10:07.499327 sshd[5442]: pam_unix(sshd:session): session closed for user core
Sep 4 18:10:07.513081 systemd[1]: sshd@19-172.24.4.134:22-172.24.4.1:52506.service: Deactivated successfully.
Sep 4 18:10:07.516120 systemd[1]: session-22.scope: Deactivated successfully.
Sep 4 18:10:07.516445 systemd[1]: session-22.scope: Consumed 1.063s CPU time.
Sep 4 18:10:07.517600 systemd-logind[1430]: Session 22 logged out. Waiting for processes to exit.
Sep 4 18:10:07.527235 systemd[1]: Started sshd@20-172.24.4.134:22-172.24.4.1:36782.service - OpenSSH per-connection server daemon (172.24.4.1:36782).
Sep 4 18:10:07.530086 systemd-logind[1430]: Removed session 22.
Sep 4 18:10:08.628822 sshd[5462]: Accepted publickey for core from 172.24.4.1 port 36782 ssh2: RSA SHA256:JnA7Fh8lVkr6ENifNOXj431OPLJBOL+/PI8dMas4Eok
Sep 4 18:10:08.632217 sshd[5462]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 18:10:08.645817 systemd-logind[1430]: New session 23 of user core.
Sep 4 18:10:08.653039 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 4 18:10:09.465985 sshd[5462]: pam_unix(sshd:session): session closed for user core
Sep 4 18:10:09.471261 systemd[1]: sshd@20-172.24.4.134:22-172.24.4.1:36782.service: Deactivated successfully.
Sep 4 18:10:09.474837 systemd[1]: session-23.scope: Deactivated successfully.
Sep 4 18:10:09.478555 systemd-logind[1430]: Session 23 logged out. Waiting for processes to exit.
Sep 4 18:10:09.481189 systemd-logind[1430]: Removed session 23.
Sep 4 18:10:14.489741 systemd[1]: Started sshd@21-172.24.4.134:22-172.24.4.1:36788.service - OpenSSH per-connection server daemon (172.24.4.1:36788).
Sep 4 18:10:15.860132 sshd[5480]: Accepted publickey for core from 172.24.4.1 port 36788 ssh2: RSA SHA256:JnA7Fh8lVkr6ENifNOXj431OPLJBOL+/PI8dMas4Eok
Sep 4 18:10:15.862274 sshd[5480]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 18:10:15.876862 systemd-logind[1430]: New session 24 of user core.
Sep 4 18:10:15.886302 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 4 18:10:16.637900 sshd[5480]: pam_unix(sshd:session): session closed for user core
Sep 4 18:10:16.643934 systemd[1]: sshd@21-172.24.4.134:22-172.24.4.1:36788.service: Deactivated successfully.
Sep 4 18:10:16.648473 systemd[1]: session-24.scope: Deactivated successfully.
Sep 4 18:10:16.649789 systemd-logind[1430]: Session 24 logged out. Waiting for processes to exit.
Sep 4 18:10:16.651452 systemd-logind[1430]: Removed session 24.
Sep 4 18:10:21.662275 systemd[1]: Started sshd@22-172.24.4.134:22-172.24.4.1:46712.service - OpenSSH per-connection server daemon (172.24.4.1:46712).
Sep 4 18:10:23.194847 sshd[5532]: Accepted publickey for core from 172.24.4.1 port 46712 ssh2: RSA SHA256:JnA7Fh8lVkr6ENifNOXj431OPLJBOL+/PI8dMas4Eok
Sep 4 18:10:23.199503 sshd[5532]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 18:10:23.211533 systemd-logind[1430]: New session 25 of user core.
Sep 4 18:10:23.223046 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 4 18:10:23.864793 sshd[5532]: pam_unix(sshd:session): session closed for user core
Sep 4 18:10:23.873113 systemd[1]: sshd@22-172.24.4.134:22-172.24.4.1:46712.service: Deactivated successfully.
Sep 4 18:10:23.880723 systemd[1]: session-25.scope: Deactivated successfully.
Sep 4 18:10:23.882983 systemd-logind[1430]: Session 25 logged out. Waiting for processes to exit.
Sep 4 18:10:23.885593 systemd-logind[1430]: Removed session 25.
Sep 4 18:10:28.889271 systemd[1]: Started sshd@23-172.24.4.134:22-172.24.4.1:53446.service - OpenSSH per-connection server daemon (172.24.4.1:53446).
Sep 4 18:10:30.234986 sshd[5570]: Accepted publickey for core from 172.24.4.1 port 53446 ssh2: RSA SHA256:JnA7Fh8lVkr6ENifNOXj431OPLJBOL+/PI8dMas4Eok
Sep 4 18:10:30.238133 sshd[5570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 18:10:30.250360 systemd-logind[1430]: New session 26 of user core.
Sep 4 18:10:30.257095 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 4 18:10:31.108967 sshd[5570]: pam_unix(sshd:session): session closed for user core
Sep 4 18:10:31.114827 systemd[1]: sshd@23-172.24.4.134:22-172.24.4.1:53446.service: Deactivated successfully.
Sep 4 18:10:31.118800 systemd[1]: session-26.scope: Deactivated successfully.
Sep 4 18:10:31.122453 systemd-logind[1430]: Session 26 logged out. Waiting for processes to exit.
Sep 4 18:10:31.125504 systemd-logind[1430]: Removed session 26.
Sep 4 18:10:36.128496 systemd[1]: Started sshd@24-172.24.4.134:22-172.24.4.1:51042.service - OpenSSH per-connection server daemon (172.24.4.1:51042).
Sep 4 18:10:37.531790 sshd[5606]: Accepted publickey for core from 172.24.4.1 port 51042 ssh2: RSA SHA256:JnA7Fh8lVkr6ENifNOXj431OPLJBOL+/PI8dMas4Eok
Sep 4 18:10:37.534553 sshd[5606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 18:10:37.546209 systemd-logind[1430]: New session 27 of user core.
Sep 4 18:10:37.551978 systemd[1]: Started session-27.scope - Session 27 of User core.
Sep 4 18:10:38.381176 sshd[5606]: pam_unix(sshd:session): session closed for user core
Sep 4 18:10:38.386416 systemd[1]: sshd@24-172.24.4.134:22-172.24.4.1:51042.service: Deactivated successfully.
Sep 4 18:10:38.388370 systemd[1]: session-27.scope: Deactivated successfully.
Sep 4 18:10:38.389594 systemd-logind[1430]: Session 27 logged out. Waiting for processes to exit.
Sep 4 18:10:38.391134 systemd-logind[1430]: Removed session 27.
Sep 4 18:10:43.410462 systemd[1]: Started sshd@25-172.24.4.134:22-172.24.4.1:51054.service - OpenSSH per-connection server daemon (172.24.4.1:51054).
Sep 4 18:10:44.673855 sshd[5624]: Accepted publickey for core from 172.24.4.1 port 51054 ssh2: RSA SHA256:JnA7Fh8lVkr6ENifNOXj431OPLJBOL+/PI8dMas4Eok
Sep 4 18:10:44.676135 sshd[5624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 18:10:44.687209 systemd-logind[1430]: New session 28 of user core.
Sep 4 18:10:44.697027 systemd[1]: Started session-28.scope - Session 28 of User core.
Sep 4 18:10:45.862785 sshd[5624]: pam_unix(sshd:session): session closed for user core
Sep 4 18:10:45.868198 systemd[1]: sshd@25-172.24.4.134:22-172.24.4.1:51054.service: Deactivated successfully.
Sep 4 18:10:45.873419 systemd[1]: session-28.scope: Deactivated successfully.
Sep 4 18:10:45.878053 systemd-logind[1430]: Session 28 logged out. Waiting for processes to exit.
Sep 4 18:10:45.880779 systemd-logind[1430]: Removed session 28.