Jul 2 00:33:17.005485 kernel: Linux version 6.6.36-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Mon Jul 1 22:47:51 -00 2024 Jul 2 00:33:17.005512 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=7cbbc16c4aaa626caa51ed60a6754ae638f7b2b87370c3f4fc6a9772b7874a8b Jul 2 00:33:17.005525 kernel: BIOS-provided physical RAM map: Jul 2 00:33:17.005532 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jul 2 00:33:17.005539 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jul 2 00:33:17.005547 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jul 2 00:33:17.005555 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable Jul 2 00:33:17.005562 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved Jul 2 00:33:17.005570 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jul 2 00:33:17.005579 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jul 2 00:33:17.005587 kernel: NX (Execute Disable) protection: active Jul 2 00:33:17.005594 kernel: APIC: Static calls initialized Jul 2 00:33:17.005601 kernel: SMBIOS 2.8 present. 
Jul 2 00:33:17.005609 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014 Jul 2 00:33:17.005618 kernel: Hypervisor detected: KVM Jul 2 00:33:17.005629 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jul 2 00:33:17.005636 kernel: kvm-clock: using sched offset of 4369806539 cycles Jul 2 00:33:17.005645 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jul 2 00:33:17.005653 kernel: tsc: Detected 1996.249 MHz processor Jul 2 00:33:17.005661 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 2 00:33:17.005670 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 2 00:33:17.005678 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000 Jul 2 00:33:17.005686 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jul 2 00:33:17.005694 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 2 00:33:17.005704 kernel: ACPI: Early table checksum verification disabled Jul 2 00:33:17.005711 kernel: ACPI: RSDP 0x00000000000F5930 000014 (v00 BOCHS ) Jul 2 00:33:17.005719 kernel: ACPI: RSDT 0x000000007FFE1848 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 00:33:17.005727 kernel: ACPI: FACP 0x000000007FFE172C 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 00:33:17.005735 kernel: ACPI: DSDT 0x000000007FFE0040 0016EC (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 00:33:17.005743 kernel: ACPI: FACS 0x000000007FFE0000 000040 Jul 2 00:33:17.005751 kernel: ACPI: APIC 0x000000007FFE17A0 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 00:33:17.005759 kernel: ACPI: WAET 0x000000007FFE1820 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 00:33:17.005766 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe172c-0x7ffe179f] Jul 2 00:33:17.005776 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe172b] Jul 2 00:33:17.005784 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Jul 
2 00:33:17.005792 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17a0-0x7ffe181f] Jul 2 00:33:17.005800 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe1820-0x7ffe1847] Jul 2 00:33:17.005807 kernel: No NUMA configuration found Jul 2 00:33:17.005815 kernel: Faking a node at [mem 0x0000000000000000-0x000000007ffdcfff] Jul 2 00:33:17.005823 kernel: NODE_DATA(0) allocated [mem 0x7ffd7000-0x7ffdcfff] Jul 2 00:33:17.005834 kernel: Zone ranges: Jul 2 00:33:17.005844 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 2 00:33:17.005852 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdcfff] Jul 2 00:33:17.005860 kernel: Normal empty Jul 2 00:33:17.005869 kernel: Movable zone start for each node Jul 2 00:33:17.005877 kernel: Early memory node ranges Jul 2 00:33:17.005885 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jul 2 00:33:17.005895 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff] Jul 2 00:33:17.005903 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdcfff] Jul 2 00:33:17.005911 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 2 00:33:17.005919 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jul 2 00:33:17.005927 kernel: On node 0, zone DMA32: 35 pages in unavailable ranges Jul 2 00:33:17.005936 kernel: ACPI: PM-Timer IO Port: 0x608 Jul 2 00:33:17.005944 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jul 2 00:33:17.005952 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jul 2 00:33:17.005960 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jul 2 00:33:17.005971 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jul 2 00:33:17.005979 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jul 2 00:33:17.005987 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jul 2 00:33:17.005995 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jul 2 00:33:17.006004 kernel: 
ACPI: Using ACPI (MADT) for SMP configuration information Jul 2 00:33:17.006012 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jul 2 00:33:17.006020 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jul 2 00:33:17.006029 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Jul 2 00:33:17.006037 kernel: Booting paravirtualized kernel on KVM Jul 2 00:33:17.006046 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 2 00:33:17.006077 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jul 2 00:33:17.006087 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576 Jul 2 00:33:17.006096 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152 Jul 2 00:33:17.006104 kernel: pcpu-alloc: [0] 0 1 Jul 2 00:33:17.006112 kernel: kvm-guest: PV spinlocks disabled, no host support Jul 2 00:33:17.006123 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=7cbbc16c4aaa626caa51ed60a6754ae638f7b2b87370c3f4fc6a9772b7874a8b Jul 2 00:33:17.006132 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 2 00:33:17.006143 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 2 00:33:17.006151 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jul 2 00:33:17.006160 kernel: Fallback order for Node 0: 0 Jul 2 00:33:17.006168 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 515805 Jul 2 00:33:17.006176 kernel: Policy zone: DMA32 Jul 2 00:33:17.006184 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 2 00:33:17.006193 kernel: Memory: 1965068K/2096620K available (12288K kernel code, 2303K rwdata, 22640K rodata, 49328K init, 2016K bss, 131292K reserved, 0K cma-reserved) Jul 2 00:33:17.006201 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jul 2 00:33:17.006210 kernel: ftrace: allocating 37658 entries in 148 pages Jul 2 00:33:17.006220 kernel: ftrace: allocated 148 pages with 3 groups Jul 2 00:33:17.006228 kernel: Dynamic Preempt: voluntary Jul 2 00:33:17.006237 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 2 00:33:17.006246 kernel: rcu: RCU event tracing is enabled. Jul 2 00:33:17.006255 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jul 2 00:33:17.006263 kernel: Trampoline variant of Tasks RCU enabled. Jul 2 00:33:17.006271 kernel: Rude variant of Tasks RCU enabled. Jul 2 00:33:17.006279 kernel: Tracing variant of Tasks RCU enabled. Jul 2 00:33:17.006288 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 2 00:33:17.006298 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jul 2 00:33:17.006309 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jul 2 00:33:17.006318 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Jul 2 00:33:17.006327 kernel: Console: colour VGA+ 80x25 Jul 2 00:33:17.006335 kernel: printk: console [tty0] enabled Jul 2 00:33:17.006344 kernel: printk: console [ttyS0] enabled Jul 2 00:33:17.006353 kernel: ACPI: Core revision 20230628 Jul 2 00:33:17.006362 kernel: APIC: Switch to symmetric I/O mode setup Jul 2 00:33:17.006371 kernel: x2apic enabled Jul 2 00:33:17.006380 kernel: APIC: Switched APIC routing to: physical x2apic Jul 2 00:33:17.006391 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jul 2 00:33:17.006400 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jul 2 00:33:17.006410 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249) Jul 2 00:33:17.006419 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jul 2 00:33:17.006428 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jul 2 00:33:17.006437 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 2 00:33:17.006446 kernel: Spectre V2 : Mitigation: Retpolines Jul 2 00:33:17.006455 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jul 2 00:33:17.006464 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jul 2 00:33:17.006485 kernel: Speculative Store Bypass: Vulnerable Jul 2 00:33:17.006494 kernel: x86/fpu: x87 FPU will use FXSAVE Jul 2 00:33:17.006503 kernel: Freeing SMP alternatives memory: 32K Jul 2 00:33:17.006511 kernel: pid_max: default: 32768 minimum: 301 Jul 2 00:33:17.006520 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity Jul 2 00:33:17.006529 kernel: SELinux: Initializing. 
Jul 2 00:33:17.006538 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jul 2 00:33:17.006547 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jul 2 00:33:17.006565 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3) Jul 2 00:33:17.006575 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Jul 2 00:33:17.006585 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Jul 2 00:33:17.006596 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Jul 2 00:33:17.006605 kernel: Performance Events: AMD PMU driver. Jul 2 00:33:17.006614 kernel: ... version: 0 Jul 2 00:33:17.006623 kernel: ... bit width: 48 Jul 2 00:33:17.006633 kernel: ... generic registers: 4 Jul 2 00:33:17.006644 kernel: ... value mask: 0000ffffffffffff Jul 2 00:33:17.006653 kernel: ... max period: 00007fffffffffff Jul 2 00:33:17.006662 kernel: ... fixed-purpose events: 0 Jul 2 00:33:17.006671 kernel: ... event mask: 000000000000000f Jul 2 00:33:17.006681 kernel: signal: max sigframe size: 1440 Jul 2 00:33:17.006690 kernel: rcu: Hierarchical SRCU implementation. Jul 2 00:33:17.006699 kernel: rcu: Max phase no-delay instances is 400. Jul 2 00:33:17.006709 kernel: smp: Bringing up secondary CPUs ... Jul 2 00:33:17.006718 kernel: smpboot: x86: Booting SMP configuration: Jul 2 00:33:17.006727 kernel: .... 
node #0, CPUs: #1 Jul 2 00:33:17.006739 kernel: smp: Brought up 1 node, 2 CPUs Jul 2 00:33:17.006748 kernel: smpboot: Max logical packages: 2 Jul 2 00:33:17.006758 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS) Jul 2 00:33:17.006767 kernel: devtmpfs: initialized Jul 2 00:33:17.006776 kernel: x86/mm: Memory block size: 128MB Jul 2 00:33:17.006786 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 2 00:33:17.006795 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jul 2 00:33:17.006805 kernel: pinctrl core: initialized pinctrl subsystem Jul 2 00:33:17.006814 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 2 00:33:17.006826 kernel: audit: initializing netlink subsys (disabled) Jul 2 00:33:17.006835 kernel: audit: type=2000 audit(1719880395.759:1): state=initialized audit_enabled=0 res=1 Jul 2 00:33:17.006845 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 2 00:33:17.006854 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 2 00:33:17.006863 kernel: cpuidle: using governor menu Jul 2 00:33:17.006872 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 2 00:33:17.006882 kernel: dca service started, version 1.12.1 Jul 2 00:33:17.006891 kernel: PCI: Using configuration type 1 for base access Jul 2 00:33:17.006900 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jul 2 00:33:17.006912 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 2 00:33:17.006922 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jul 2 00:33:17.006931 kernel: ACPI: Added _OSI(Module Device) Jul 2 00:33:17.006941 kernel: ACPI: Added _OSI(Processor Device) Jul 2 00:33:17.006950 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jul 2 00:33:17.006959 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 2 00:33:17.006969 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 2 00:33:17.006979 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jul 2 00:33:17.006988 kernel: ACPI: Interpreter enabled Jul 2 00:33:17.007000 kernel: ACPI: PM: (supports S0 S3 S5) Jul 2 00:33:17.007009 kernel: ACPI: Using IOAPIC for interrupt routing Jul 2 00:33:17.007019 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 2 00:33:17.007028 kernel: PCI: Using E820 reservations for host bridge windows Jul 2 00:33:17.007037 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jul 2 00:33:17.007047 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 2 00:33:17.008823 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jul 2 00:33:17.008940 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jul 2 00:33:17.009042 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jul 2 00:33:17.009087 kernel: acpiphp: Slot [3] registered Jul 2 00:33:17.009099 kernel: acpiphp: Slot [4] registered Jul 2 00:33:17.009115 kernel: acpiphp: Slot [5] registered Jul 2 00:33:17.009125 kernel: acpiphp: Slot [6] registered Jul 2 00:33:17.009134 kernel: acpiphp: Slot [7] registered Jul 2 00:33:17.009143 kernel: acpiphp: Slot [8] registered Jul 2 00:33:17.009152 kernel: acpiphp: Slot [9] registered Jul 2 00:33:17.009167 kernel: acpiphp: Slot [10] 
registered Jul 2 00:33:17.009176 kernel: acpiphp: Slot [11] registered Jul 2 00:33:17.009185 kernel: acpiphp: Slot [12] registered Jul 2 00:33:17.009194 kernel: acpiphp: Slot [13] registered Jul 2 00:33:17.009204 kernel: acpiphp: Slot [14] registered Jul 2 00:33:17.009213 kernel: acpiphp: Slot [15] registered Jul 2 00:33:17.009222 kernel: acpiphp: Slot [16] registered Jul 2 00:33:17.009231 kernel: acpiphp: Slot [17] registered Jul 2 00:33:17.009240 kernel: acpiphp: Slot [18] registered Jul 2 00:33:17.009250 kernel: acpiphp: Slot [19] registered Jul 2 00:33:17.009262 kernel: acpiphp: Slot [20] registered Jul 2 00:33:17.009271 kernel: acpiphp: Slot [21] registered Jul 2 00:33:17.009280 kernel: acpiphp: Slot [22] registered Jul 2 00:33:17.009290 kernel: acpiphp: Slot [23] registered Jul 2 00:33:17.009299 kernel: acpiphp: Slot [24] registered Jul 2 00:33:17.009309 kernel: acpiphp: Slot [25] registered Jul 2 00:33:17.009318 kernel: acpiphp: Slot [26] registered Jul 2 00:33:17.009327 kernel: acpiphp: Slot [27] registered Jul 2 00:33:17.009336 kernel: acpiphp: Slot [28] registered Jul 2 00:33:17.009348 kernel: acpiphp: Slot [29] registered Jul 2 00:33:17.009357 kernel: acpiphp: Slot [30] registered Jul 2 00:33:17.009367 kernel: acpiphp: Slot [31] registered Jul 2 00:33:17.009376 kernel: PCI host bridge to bus 0000:00 Jul 2 00:33:17.009485 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jul 2 00:33:17.009576 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jul 2 00:33:17.009662 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 2 00:33:17.009747 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Jul 2 00:33:17.009837 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Jul 2 00:33:17.009922 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 2 00:33:17.010035 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jul 2 00:33:17.013219 
kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Jul 2 00:33:17.013345 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Jul 2 00:33:17.013452 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f] Jul 2 00:33:17.013572 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jul 2 00:33:17.013672 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jul 2 00:33:17.013771 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jul 2 00:33:17.013869 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jul 2 00:33:17.013977 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jul 2 00:33:17.014096 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Jul 2 00:33:17.014197 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Jul 2 00:33:17.014311 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Jul 2 00:33:17.014434 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Jul 2 00:33:17.014544 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref] Jul 2 00:33:17.014644 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff] Jul 2 00:33:17.014740 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref] Jul 2 00:33:17.014837 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jul 2 00:33:17.014950 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Jul 2 00:33:17.015053 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf] Jul 2 00:33:17.019211 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff] Jul 2 00:33:17.019310 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref] Jul 2 00:33:17.019404 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref] Jul 2 00:33:17.019510 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Jul 2 00:33:17.019606 kernel: pci 0000:00:04.0: reg 0x10: [io 
0xc000-0xc07f] Jul 2 00:33:17.019708 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff] Jul 2 00:33:17.019803 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref] Jul 2 00:33:17.019910 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 Jul 2 00:33:17.020008 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff] Jul 2 00:33:17.021149 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref] Jul 2 00:33:17.021258 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 Jul 2 00:33:17.021354 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f] Jul 2 00:33:17.021454 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref] Jul 2 00:33:17.021470 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jul 2 00:33:17.021481 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jul 2 00:33:17.021490 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jul 2 00:33:17.021499 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jul 2 00:33:17.021508 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jul 2 00:33:17.021517 kernel: iommu: Default domain type: Translated Jul 2 00:33:17.021526 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 2 00:33:17.021535 kernel: PCI: Using ACPI for IRQ routing Jul 2 00:33:17.021547 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 2 00:33:17.021556 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jul 2 00:33:17.021565 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff] Jul 2 00:33:17.021652 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Jul 2 00:33:17.021740 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Jul 2 00:33:17.021828 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jul 2 00:33:17.021842 kernel: vgaarb: loaded Jul 2 00:33:17.021851 kernel: clocksource: Switched to clocksource kvm-clock Jul 2 00:33:17.021860 
kernel: VFS: Disk quotas dquot_6.6.0 Jul 2 00:33:17.021872 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 2 00:33:17.021881 kernel: pnp: PnP ACPI init Jul 2 00:33:17.021972 kernel: pnp 00:03: [dma 2] Jul 2 00:33:17.021987 kernel: pnp: PnP ACPI: found 5 devices Jul 2 00:33:17.021996 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 2 00:33:17.022005 kernel: NET: Registered PF_INET protocol family Jul 2 00:33:17.022014 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 2 00:33:17.022023 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jul 2 00:33:17.022035 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 2 00:33:17.022044 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jul 2 00:33:17.022053 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jul 2 00:33:17.024671 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jul 2 00:33:17.024683 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jul 2 00:33:17.024692 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jul 2 00:33:17.024701 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 2 00:33:17.024710 kernel: NET: Registered PF_XDP protocol family Jul 2 00:33:17.024807 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jul 2 00:33:17.024896 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jul 2 00:33:17.024974 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jul 2 00:33:17.025052 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Jul 2 00:33:17.026178 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Jul 2 00:33:17.026308 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Jul 2 00:33:17.026458 kernel: pci 0000:00:00.0: 
Limiting direct PCI/PCI transfers Jul 2 00:33:17.026500 kernel: PCI: CLS 0 bytes, default 64 Jul 2 00:33:17.026515 kernel: Initialise system trusted keyrings Jul 2 00:33:17.026537 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jul 2 00:33:17.026552 kernel: Key type asymmetric registered Jul 2 00:33:17.026566 kernel: Asymmetric key parser 'x509' registered Jul 2 00:33:17.026581 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jul 2 00:33:17.026596 kernel: io scheduler mq-deadline registered Jul 2 00:33:17.026613 kernel: io scheduler kyber registered Jul 2 00:33:17.026630 kernel: io scheduler bfq registered Jul 2 00:33:17.026646 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 2 00:33:17.026662 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Jul 2 00:33:17.026683 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jul 2 00:33:17.026697 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jul 2 00:33:17.026712 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jul 2 00:33:17.026729 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 2 00:33:17.026746 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 2 00:33:17.026761 kernel: random: crng init done Jul 2 00:33:17.026775 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jul 2 00:33:17.026786 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jul 2 00:33:17.026795 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jul 2 00:33:17.026920 kernel: rtc_cmos 00:04: RTC can wake from S4 Jul 2 00:33:17.027012 kernel: rtc_cmos 00:04: registered as rtc0 Jul 2 00:33:17.027027 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jul 2 00:33:17.028333 kernel: rtc_cmos 00:04: setting system clock to 2024-07-02T00:33:16 UTC (1719880396) Jul 2 00:33:17.028440 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Jul 2 00:33:17.028454 kernel: amd_pstate: the _CPC object is not 
present in SBIOS or ACPI disabled Jul 2 00:33:17.028464 kernel: NET: Registered PF_INET6 protocol family Jul 2 00:33:17.028482 kernel: Segment Routing with IPv6 Jul 2 00:33:17.028491 kernel: In-situ OAM (IOAM) with IPv6 Jul 2 00:33:17.028500 kernel: NET: Registered PF_PACKET protocol family Jul 2 00:33:17.028509 kernel: Key type dns_resolver registered Jul 2 00:33:17.028518 kernel: IPI shorthand broadcast: enabled Jul 2 00:33:17.028527 kernel: sched_clock: Marking stable (1010009405, 124089497)->(1140438743, -6339841) Jul 2 00:33:17.028536 kernel: registered taskstats version 1 Jul 2 00:33:17.028545 kernel: Loading compiled-in X.509 certificates Jul 2 00:33:17.028554 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.36-flatcar: be1ede902d88b56c26cc000ff22391c78349d771' Jul 2 00:33:17.028563 kernel: Key type .fscrypt registered Jul 2 00:33:17.028578 kernel: Key type fscrypt-provisioning registered Jul 2 00:33:17.028587 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jul 2 00:33:17.028596 kernel: ima: Allocated hash algorithm: sha1 Jul 2 00:33:17.028605 kernel: ima: No architecture policies found Jul 2 00:33:17.028614 kernel: clk: Disabling unused clocks Jul 2 00:33:17.028623 kernel: Freeing unused kernel image (initmem) memory: 49328K Jul 2 00:33:17.028632 kernel: Write protecting the kernel read-only data: 36864k Jul 2 00:33:17.028641 kernel: Freeing unused kernel image (rodata/data gap) memory: 1936K Jul 2 00:33:17.028651 kernel: Run /init as init process Jul 2 00:33:17.028660 kernel: with arguments: Jul 2 00:33:17.028669 kernel: /init Jul 2 00:33:17.028678 kernel: with environment: Jul 2 00:33:17.028686 kernel: HOME=/ Jul 2 00:33:17.028695 kernel: TERM=linux Jul 2 00:33:17.028703 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 2 00:33:17.028716 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 2 00:33:17.028729 systemd[1]: Detected virtualization kvm. Jul 2 00:33:17.028739 systemd[1]: Detected architecture x86-64. Jul 2 00:33:17.028748 systemd[1]: Running in initrd. Jul 2 00:33:17.028758 systemd[1]: No hostname configured, using default hostname. Jul 2 00:33:17.028767 systemd[1]: Hostname set to . Jul 2 00:33:17.028777 systemd[1]: Initializing machine ID from VM UUID. Jul 2 00:33:17.028787 systemd[1]: Queued start job for default target initrd.target. Jul 2 00:33:17.028797 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 2 00:33:17.028809 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 2 00:33:17.028823 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Jul 2 00:33:17.028833 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 2 00:33:17.028842 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 2 00:33:17.028853 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 2 00:33:17.028864 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 2 00:33:17.028876 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 2 00:33:17.028886 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 2 00:33:17.028895 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 2 00:33:17.028905 systemd[1]: Reached target paths.target - Path Units. Jul 2 00:33:17.028915 systemd[1]: Reached target slices.target - Slice Units. Jul 2 00:33:17.028934 systemd[1]: Reached target swap.target - Swaps. Jul 2 00:33:17.028945 systemd[1]: Reached target timers.target - Timer Units. Jul 2 00:33:17.028957 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 2 00:33:17.028967 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 2 00:33:17.028977 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 2 00:33:17.028987 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jul 2 00:33:17.028998 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 2 00:33:17.029007 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 2 00:33:17.029018 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 2 00:33:17.029028 systemd[1]: Reached target sockets.target - Socket Units. Jul 2 00:33:17.029041 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... 
Jul 2 00:33:17.029051 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 2 00:33:17.031096 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 2 00:33:17.031111 systemd[1]: Starting systemd-fsck-usr.service... Jul 2 00:33:17.031122 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 2 00:33:17.031132 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 2 00:33:17.031148 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 2 00:33:17.031159 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 2 00:33:17.031170 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 2 00:33:17.031183 systemd[1]: Finished systemd-fsck-usr.service. Jul 2 00:33:17.031240 systemd-journald[184]: Collecting audit messages is disabled. Jul 2 00:33:17.031272 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 2 00:33:17.031283 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 2 00:33:17.031299 systemd-journald[184]: Journal started Jul 2 00:33:17.031325 systemd-journald[184]: Runtime Journal (/run/log/journal/956e047af5f84bb58ec61141ca701e29) is 4.9M, max 39.3M, 34.4M free. Jul 2 00:33:16.993819 systemd-modules-load[185]: Inserted module 'overlay' Jul 2 00:33:17.068109 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 2 00:33:17.068151 kernel: Bridge firewalling registered Jul 2 00:33:17.037935 systemd-modules-load[185]: Inserted module 'br_netfilter' Jul 2 00:33:17.071087 systemd[1]: Started systemd-journald.service - Journal Service. Jul 2 00:33:17.071350 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Jul 2 00:33:17.072044 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:33:17.079225 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 2 00:33:17.081190 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 2 00:33:17.083320 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 2 00:33:17.088147 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Jul 2 00:33:17.102692 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 2 00:33:17.105403 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 2 00:33:17.111852 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 00:33:17.116239 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 2 00:33:17.117597 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 2 00:33:17.124962 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 2 00:33:17.135319 dracut-cmdline[216]: dracut-dracut-053
Jul 2 00:33:17.142324 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=7cbbc16c4aaa626caa51ed60a6754ae638f7b2b87370c3f4fc6a9772b7874a8b
Jul 2 00:33:17.161873 systemd-resolved[218]: Positive Trust Anchors:
Jul 2 00:33:17.161891 systemd-resolved[218]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 2 00:33:17.161934 systemd-resolved[218]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Jul 2 00:33:17.165108 systemd-resolved[218]: Defaulting to hostname 'linux'.
Jul 2 00:33:17.166127 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 2 00:33:17.167451 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 2 00:33:17.264139 kernel: SCSI subsystem initialized
Jul 2 00:33:17.277159 kernel: Loading iSCSI transport class v2.0-870.
Jul 2 00:33:17.292114 kernel: iscsi: registered transport (tcp)
Jul 2 00:33:17.320525 kernel: iscsi: registered transport (qla4xxx)
Jul 2 00:33:17.320595 kernel: QLogic iSCSI HBA Driver
Jul 2 00:33:17.382970 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 2 00:33:17.391388 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 2 00:33:17.446391 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 2 00:33:17.446458 kernel: device-mapper: uevent: version 1.0.3
Jul 2 00:33:17.448243 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 2 00:33:17.522332 kernel: raid6: sse2x4 gen() 5153 MB/s
Jul 2 00:33:17.539141 kernel: raid6: sse2x2 gen() 9877 MB/s
Jul 2 00:33:17.556260 kernel: raid6: sse2x1 gen() 9480 MB/s
Jul 2 00:33:17.556334 kernel: raid6: using algorithm sse2x2 gen() 9877 MB/s
Jul 2 00:33:17.574407 kernel: raid6: .... xor() 9136 MB/s, rmw enabled
Jul 2 00:33:17.574538 kernel: raid6: using ssse3x2 recovery algorithm
Jul 2 00:33:17.604553 kernel: xor: measuring software checksum speed
Jul 2 00:33:17.604626 kernel: prefetch64-sse : 17376 MB/sec
Jul 2 00:33:17.606410 kernel: generic_sse : 15788 MB/sec
Jul 2 00:33:17.606498 kernel: xor: using function: prefetch64-sse (17376 MB/sec)
Jul 2 00:33:17.824765 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 2 00:33:17.842731 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 2 00:33:17.851226 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 2 00:33:17.895957 systemd-udevd[402]: Using default interface naming scheme 'v255'.
Jul 2 00:33:17.907178 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 2 00:33:17.916334 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 2 00:33:17.946974 dracut-pre-trigger[410]: rd.md=0: removing MD RAID activation
Jul 2 00:33:17.997211 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 2 00:33:18.003260 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 2 00:33:18.079095 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 2 00:33:18.087699 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 2 00:33:18.108174 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 2 00:33:18.109684 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 2 00:33:18.111334 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 2 00:33:18.113382 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 2 00:33:18.120286 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 2 00:33:18.142152 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 2 00:33:18.170112 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues
Jul 2 00:33:18.194688 kernel: virtio_blk virtio2: [vda] 41943040 512-byte logical blocks (21.5 GB/20.0 GiB)
Jul 2 00:33:18.194817 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 2 00:33:18.194833 kernel: GPT:17805311 != 41943039
Jul 2 00:33:18.194845 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 2 00:33:18.194857 kernel: GPT:17805311 != 41943039
Jul 2 00:33:18.194869 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 2 00:33:18.194887 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 2 00:33:18.173294 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 2 00:33:18.173476 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 00:33:18.176280 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 2 00:33:18.176774 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 2 00:33:18.176906 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:33:18.177427 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:33:18.187670 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:33:18.223089 kernel: libata version 3.00 loaded.
Jul 2 00:33:18.226078 kernel: ata_piix 0000:00:01.1: version 2.13
Jul 2 00:33:18.231675 kernel: scsi host0: ata_piix
Jul 2 00:33:18.231818 kernel: scsi host1: ata_piix
Jul 2 00:33:18.231938 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14
Jul 2 00:33:18.231953 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15
Jul 2 00:33:18.239096 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (450)
Jul 2 00:33:18.244116 kernel: BTRFS: device fsid 2fd636b8-f582-46f8-bde2-15e56e3958c1 devid 1 transid 35 /dev/vda3 scanned by (udev-worker) (457)
Jul 2 00:33:18.271623 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 2 00:33:18.295236 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 2 00:33:18.296199 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:33:18.303434 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 2 00:33:18.312232 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 2 00:33:18.312820 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 2 00:33:18.322255 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 2 00:33:18.325289 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 2 00:33:18.334424 disk-uuid[500]: Primary Header is updated.
Jul 2 00:33:18.334424 disk-uuid[500]: Secondary Entries is updated.
Jul 2 00:33:18.334424 disk-uuid[500]: Secondary Header is updated.
Jul 2 00:33:18.342128 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 2 00:33:18.349143 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 2 00:33:18.364532 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 00:33:19.372147 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 2 00:33:19.374030 disk-uuid[501]: The operation has completed successfully.
Jul 2 00:33:19.497281 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 2 00:33:19.497431 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 2 00:33:19.515217 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 2 00:33:19.531834 sh[523]: Success
Jul 2 00:33:19.567095 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3"
Jul 2 00:33:19.690218 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 2 00:33:19.692234 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 2 00:33:19.701251 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 2 00:33:19.807038 kernel: BTRFS info (device dm-0): first mount of filesystem 2fd636b8-f582-46f8-bde2-15e56e3958c1
Jul 2 00:33:19.807165 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 2 00:33:19.818967 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 2 00:33:19.822373 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 2 00:33:19.826447 kernel: BTRFS info (device dm-0): using free space tree
Jul 2 00:33:19.979639 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 2 00:33:19.982025 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 2 00:33:19.988421 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 2 00:33:19.998379 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 2 00:33:20.026640 kernel: BTRFS info (device vda6): first mount of filesystem e2db191f-38b3-4d65-844a-7255916ec346
Jul 2 00:33:20.026757 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 2 00:33:20.029270 kernel: BTRFS info (device vda6): using free space tree
Jul 2 00:33:20.041134 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 2 00:33:20.065634 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 2 00:33:20.070155 kernel: BTRFS info (device vda6): last unmount of filesystem e2db191f-38b3-4d65-844a-7255916ec346
Jul 2 00:33:20.082052 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 2 00:33:20.092757 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 2 00:33:20.157499 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 2 00:33:20.166284 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 2 00:33:20.192810 systemd-networkd[708]: lo: Link UP
Jul 2 00:33:20.192820 systemd-networkd[708]: lo: Gained carrier
Jul 2 00:33:20.193997 systemd-networkd[708]: Enumeration completed
Jul 2 00:33:20.194102 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 2 00:33:20.194766 systemd[1]: Reached target network.target - Network.
Jul 2 00:33:20.195775 systemd-networkd[708]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 00:33:20.195779 systemd-networkd[708]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 2 00:33:20.197328 systemd-networkd[708]: eth0: Link UP
Jul 2 00:33:20.197332 systemd-networkd[708]: eth0: Gained carrier
Jul 2 00:33:20.197339 systemd-networkd[708]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 00:33:20.211269 systemd-networkd[708]: eth0: DHCPv4 address 172.24.4.39/24, gateway 172.24.4.1 acquired from 172.24.4.1
Jul 2 00:33:20.256718 ignition[628]: Ignition 2.18.0
Jul 2 00:33:20.256734 ignition[628]: Stage: fetch-offline
Jul 2 00:33:20.256782 ignition[628]: no configs at "/usr/lib/ignition/base.d"
Jul 2 00:33:20.256798 ignition[628]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jul 2 00:33:20.259094 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 2 00:33:20.256962 ignition[628]: parsed url from cmdline: ""
Jul 2 00:33:20.256966 ignition[628]: no config URL provided
Jul 2 00:33:20.256972 ignition[628]: reading system config file "/usr/lib/ignition/user.ign"
Jul 2 00:33:20.256984 ignition[628]: no config at "/usr/lib/ignition/user.ign"
Jul 2 00:33:20.256990 ignition[628]: failed to fetch config: resource requires networking
Jul 2 00:33:20.257793 ignition[628]: Ignition finished successfully
Jul 2 00:33:20.265308 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 2 00:33:20.280409 ignition[718]: Ignition 2.18.0
Jul 2 00:33:20.280422 ignition[718]: Stage: fetch
Jul 2 00:33:20.280618 ignition[718]: no configs at "/usr/lib/ignition/base.d"
Jul 2 00:33:20.280630 ignition[718]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jul 2 00:33:20.280754 ignition[718]: parsed url from cmdline: ""
Jul 2 00:33:20.280758 ignition[718]: no config URL provided
Jul 2 00:33:20.280764 ignition[718]: reading system config file "/usr/lib/ignition/user.ign"
Jul 2 00:33:20.280773 ignition[718]: no config at "/usr/lib/ignition/user.ign"
Jul 2 00:33:20.280957 ignition[718]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Jul 2 00:33:20.281025 ignition[718]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Jul 2 00:33:20.281079 ignition[718]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Jul 2 00:33:20.591357 ignition[718]: GET result: OK
Jul 2 00:33:20.591580 ignition[718]: parsing config with SHA512: aeb8c39ef91c71fbffecd83e40d95ac57f71947463fdf8536dc8df9f1aebc79f9f825bea2294c35fcc4f595eae0a964e8624654b19d9ff16d2340f2f44275471
Jul 2 00:33:20.602377 unknown[718]: fetched base config from "system"
Jul 2 00:33:20.602408 unknown[718]: fetched base config from "system"
Jul 2 00:33:20.603364 ignition[718]: fetch: fetch complete
Jul 2 00:33:20.602422 unknown[718]: fetched user config from "openstack"
Jul 2 00:33:20.603377 ignition[718]: fetch: fetch passed
Jul 2 00:33:20.606938 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 2 00:33:20.603470 ignition[718]: Ignition finished successfully
Jul 2 00:33:20.617540 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 2 00:33:20.652873 ignition[725]: Ignition 2.18.0
Jul 2 00:33:20.652906 ignition[725]: Stage: kargs
Jul 2 00:33:20.653386 ignition[725]: no configs at "/usr/lib/ignition/base.d"
Jul 2 00:33:20.653414 ignition[725]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jul 2 00:33:20.656393 ignition[725]: kargs: kargs passed
Jul 2 00:33:20.658708 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 2 00:33:20.656508 ignition[725]: Ignition finished successfully
Jul 2 00:33:20.667520 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 2 00:33:20.688518 ignition[732]: Ignition 2.18.0
Jul 2 00:33:20.688539 ignition[732]: Stage: disks
Jul 2 00:33:20.688814 ignition[732]: no configs at "/usr/lib/ignition/base.d"
Jul 2 00:33:20.688828 ignition[732]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jul 2 00:33:20.692423 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 2 00:33:20.690626 ignition[732]: disks: disks passed
Jul 2 00:33:20.693612 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 2 00:33:20.690707 ignition[732]: Ignition finished successfully
Jul 2 00:33:20.695374 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 2 00:33:20.697225 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 2 00:33:20.698815 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 2 00:33:20.700871 systemd[1]: Reached target basic.target - Basic System.
Jul 2 00:33:20.708248 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 2 00:33:20.736005 systemd-fsck[741]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jul 2 00:33:20.749330 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 2 00:33:20.756291 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 2 00:33:20.921101 kernel: EXT4-fs (vda9): mounted filesystem c5a17c06-b440-4aab-a0fa-5b60bb1d8586 r/w with ordered data mode. Quota mode: none.
Jul 2 00:33:20.922125 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 2 00:33:20.923783 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 2 00:33:20.937231 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 2 00:33:20.941198 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 2 00:33:20.942557 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 2 00:33:20.946429 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Jul 2 00:33:20.949247 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 2 00:33:20.949325 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 2 00:33:20.953541 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 2 00:33:20.962263 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 2 00:33:20.963404 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (749)
Jul 2 00:33:20.969239 kernel: BTRFS info (device vda6): first mount of filesystem e2db191f-38b3-4d65-844a-7255916ec346
Jul 2 00:33:20.972224 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 2 00:33:20.972271 kernel: BTRFS info (device vda6): using free space tree
Jul 2 00:33:20.989107 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 2 00:33:20.995042 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 2 00:33:21.115639 initrd-setup-root[777]: cut: /sysroot/etc/passwd: No such file or directory
Jul 2 00:33:21.126889 initrd-setup-root[784]: cut: /sysroot/etc/group: No such file or directory
Jul 2 00:33:21.131055 initrd-setup-root[791]: cut: /sysroot/etc/shadow: No such file or directory
Jul 2 00:33:21.136355 initrd-setup-root[798]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 2 00:33:21.240867 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 2 00:33:21.256169 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 2 00:33:21.260101 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 2 00:33:21.265590 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 2 00:33:21.267324 kernel: BTRFS info (device vda6): last unmount of filesystem e2db191f-38b3-4d65-844a-7255916ec346
Jul 2 00:33:21.296092 ignition[865]: INFO : Ignition 2.18.0
Jul 2 00:33:21.296092 ignition[865]: INFO : Stage: mount
Jul 2 00:33:21.296092 ignition[865]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 00:33:21.296092 ignition[865]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jul 2 00:33:21.301175 ignition[865]: INFO : mount: mount passed
Jul 2 00:33:21.301175 ignition[865]: INFO : Ignition finished successfully
Jul 2 00:33:21.298320 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 2 00:33:21.303552 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 2 00:33:21.601550 systemd-networkd[708]: eth0: Gained IPv6LL
Jul 2 00:33:28.218452 coreos-metadata[751]: Jul 02 00:33:28.218 WARN failed to locate config-drive, using the metadata service API instead
Jul 2 00:33:28.259103 coreos-metadata[751]: Jul 02 00:33:28.258 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jul 2 00:33:28.275656 coreos-metadata[751]: Jul 02 00:33:28.275 INFO Fetch successful
Jul 2 00:33:28.277258 coreos-metadata[751]: Jul 02 00:33:28.277 INFO wrote hostname ci-3975-1-1-4-69569a1933.novalocal to /sysroot/etc/hostname
Jul 2 00:33:28.280041 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Jul 2 00:33:28.280546 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Jul 2 00:33:28.293288 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 2 00:33:28.334422 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 2 00:33:28.352123 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (883)
Jul 2 00:33:28.359735 kernel: BTRFS info (device vda6): first mount of filesystem e2db191f-38b3-4d65-844a-7255916ec346
Jul 2 00:33:28.359843 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 2 00:33:28.363383 kernel: BTRFS info (device vda6): using free space tree
Jul 2 00:33:28.374186 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 2 00:33:28.379205 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 2 00:33:28.423223 ignition[901]: INFO : Ignition 2.18.0
Jul 2 00:33:28.427238 ignition[901]: INFO : Stage: files
Jul 2 00:33:28.427238 ignition[901]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 00:33:28.427238 ignition[901]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jul 2 00:33:28.432533 ignition[901]: DEBUG : files: compiled without relabeling support, skipping
Jul 2 00:33:28.435616 ignition[901]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 2 00:33:28.435616 ignition[901]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 2 00:33:28.441373 ignition[901]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 2 00:33:28.442324 ignition[901]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 2 00:33:28.443629 unknown[901]: wrote ssh authorized keys file for user: core
Jul 2 00:33:28.446562 ignition[901]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 2 00:33:28.446562 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 2 00:33:28.446562 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jul 2 00:33:29.169285 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 2 00:33:29.523178 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 2 00:33:29.523178 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 2 00:33:29.523178 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 2 00:33:29.523178 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 2 00:33:29.532017 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 2 00:33:29.532017 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 2 00:33:29.532017 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 2 00:33:29.532017 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 00:33:29.532017 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 00:33:29.532017 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 00:33:29.532017 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 00:33:29.532017 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jul 2 00:33:29.532017 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jul 2 00:33:29.532017 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jul 2 00:33:29.532017 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Jul 2 00:33:30.080748 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul 2 00:33:31.752346 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jul 2 00:33:31.752346 ignition[901]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jul 2 00:33:31.786857 ignition[901]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 2 00:33:31.788385 ignition[901]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 2 00:33:31.788385 ignition[901]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jul 2 00:33:31.788385 ignition[901]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jul 2 00:33:31.788385 ignition[901]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jul 2 00:33:31.788385 ignition[901]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 00:33:31.788385 ignition[901]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 00:33:31.788385 ignition[901]: INFO : files: files passed
Jul 2 00:33:31.788385 ignition[901]: INFO : Ignition finished successfully
Jul 2 00:33:31.790301 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 2 00:33:31.802202 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 2 00:33:31.815197 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 2 00:33:31.819208 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 2 00:33:31.819379 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 2 00:33:31.827090 initrd-setup-root-after-ignition[930]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 00:33:31.827090 initrd-setup-root-after-ignition[930]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 00:33:31.829415 initrd-setup-root-after-ignition[934]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 00:33:31.829971 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 2 00:33:31.831726 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 2 00:33:31.837228 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 2 00:33:31.868980 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 2 00:33:31.869158 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 2 00:33:31.870756 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 2 00:33:31.871900 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 2 00:33:31.873277 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 2 00:33:31.880236 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 2 00:33:31.892512 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 2 00:33:31.899248 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 2 00:33:31.913948 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 2 00:33:31.914093 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 2 00:33:31.916921 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 2 00:33:31.917602 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 2 00:33:31.918987 systemd[1]: Stopped target timers.target - Timer Units.
Jul 2 00:33:31.920350 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 2 00:33:31.920420 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 2 00:33:31.921817 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 2 00:33:31.922589 systemd[1]: Stopped target basic.target - Basic System.
Jul 2 00:33:31.923892 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 2 00:33:31.925107 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 2 00:33:31.926262 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 2 00:33:31.927601 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 2 00:33:31.928921 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 2 00:33:31.930256 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 2 00:33:31.931531 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 2 00:33:31.932858 systemd[1]: Stopped target swap.target - Swaps.
Jul 2 00:33:31.934122 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 2 00:33:31.934194 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 2 00:33:31.935687 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 2 00:33:31.936525 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 2 00:33:31.937764 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 2 00:33:31.940329 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 2 00:33:31.941289 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 2 00:33:31.941368 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 2 00:33:31.943208 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 2 00:33:31.943267 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 2 00:33:31.944025 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 2 00:33:31.944118 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 2 00:33:31.953174 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 2 00:33:31.955498 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 2 00:33:31.955588 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 00:33:31.959299 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 2 00:33:31.962217 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 2 00:33:31.963331 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 2 00:33:31.966187 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 2 00:33:31.966270 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 2 00:33:31.971807 ignition[955]: INFO : Ignition 2.18.0
Jul 2 00:33:31.971807 ignition[955]: INFO : Stage: umount
Jul 2 00:33:31.974814 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 00:33:31.974814 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jul 2 00:33:31.974814 ignition[955]: INFO : umount: umount passed
Jul 2 00:33:31.974814 ignition[955]: INFO : Ignition finished successfully
Jul 2 00:33:31.980313 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 2 00:33:31.980457 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 2 00:33:31.981313 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 2 00:33:31.981370 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 2 00:33:31.982039 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 2 00:33:31.984100 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 2 00:33:31.985216 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 2 00:33:31.985262 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jul 2 00:33:31.989921 systemd[1]: Stopped target network.target - Network.
Jul 2 00:33:31.991743 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 2 00:33:31.991854 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 2 00:33:31.993422 systemd[1]: Stopped target paths.target - Path Units.
Jul 2 00:33:31.995134 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 2 00:33:31.999152 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 2 00:33:32.000440 systemd[1]: Stopped target slices.target - Slice Units.
Jul 2 00:33:32.001974 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 2 00:33:32.003859 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 2 00:33:32.003978 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 2 00:33:32.005625 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 2 00:33:32.005698 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 2 00:33:32.007611 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 2 00:33:32.007707 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 2 00:33:32.009297 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 2 00:33:32.009382 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 2 00:33:32.010845 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 2 00:33:32.014119 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 2 00:33:32.019378 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 2 00:33:32.020122 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 2 00:33:32.020219 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 2 00:33:32.020261 systemd-networkd[708]: eth0: DHCPv6 lease lost
Jul 2 00:33:32.023363 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 2 00:33:32.023600 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 2 00:33:32.027551 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 2 00:33:32.027753 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 2 00:33:32.032254 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 2 00:33:32.032510 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 00:33:32.033551 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 2 00:33:32.033620 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 2 00:33:32.042202 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 2 00:33:32.043673 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 2 00:33:32.043743 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 2 00:33:32.044341 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 2 00:33:32.044384 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 2 00:33:32.044893 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 2 00:33:32.044933 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 2 00:33:32.045575 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 2 00:33:32.045639 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 2 00:33:32.047046 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 2 00:33:32.058610 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 2 00:33:32.058771 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 2 00:33:32.060281 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 2 00:33:32.060341 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 2 00:33:32.061560 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 2 00:33:32.061589 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 00:33:32.062688 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 2 00:33:32.062733 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 2 00:33:32.065535 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 2 00:33:32.065583 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 2 00:33:32.066767 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 2 00:33:32.066813 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 00:33:32.079416 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 2 00:33:32.080020 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 2 00:33:32.080106 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 2 00:33:32.080699 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 2 00:33:32.080749 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:33:32.081663 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 2 00:33:32.081752 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 2 00:33:32.085822 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 2 00:33:32.085909 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 2 00:33:32.087003 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 2 00:33:32.093270 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 2 00:33:32.100869 systemd[1]: Switching root.
Jul 2 00:33:32.122822 systemd-journald[184]: Journal stopped
Jul 2 00:33:33.615431 systemd-journald[184]: Received SIGTERM from PID 1 (systemd).
Jul 2 00:33:33.615481 kernel: SELinux: policy capability network_peer_controls=1
Jul 2 00:33:33.615503 kernel: SELinux: policy capability open_perms=1
Jul 2 00:33:33.615515 kernel: SELinux: policy capability extended_socket_class=1
Jul 2 00:33:33.615527 kernel: SELinux: policy capability always_check_network=0
Jul 2 00:33:33.615543 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 2 00:33:33.615558 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 2 00:33:33.615571 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 2 00:33:33.615588 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 2 00:33:33.615600 kernel: audit: type=1403 audit(1719880412.674:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 2 00:33:33.615613 systemd[1]: Successfully loaded SELinux policy in 70.638ms.
Jul 2 00:33:33.615628 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 21.951ms.
Jul 2 00:33:33.615642 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 2 00:33:33.615656 systemd[1]: Detected virtualization kvm.
Jul 2 00:33:33.615671 systemd[1]: Detected architecture x86-64.
Jul 2 00:33:33.615684 systemd[1]: Detected first boot.
Jul 2 00:33:33.615697 systemd[1]: Hostname set to .
Jul 2 00:33:33.615710 systemd[1]: Initializing machine ID from VM UUID.
Jul 2 00:33:33.615726 zram_generator::config[996]: No configuration found.
Jul 2 00:33:33.615739 systemd[1]: Populated /etc with preset unit settings.
Jul 2 00:33:33.615753 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 2 00:33:33.615766 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 2 00:33:33.615784 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 2 00:33:33.615798 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 2 00:33:33.615812 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 2 00:33:33.615825 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 2 00:33:33.615837 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 2 00:33:33.615850 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 2 00:33:33.615863 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 2 00:33:33.615876 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 2 00:33:33.615888 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 2 00:33:33.615903 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 2 00:33:33.615916 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 2 00:33:33.615929 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 2 00:33:33.615942 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 2 00:33:33.615955 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 2 00:33:33.615969 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 2 00:33:33.615981 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 2 00:33:33.615993 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 2 00:33:33.616006 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 2 00:33:33.616021 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 2 00:33:33.616034 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 2 00:33:33.616047 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 2 00:33:33.619996 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 2 00:33:33.620031 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 2 00:33:33.620044 systemd[1]: Reached target slices.target - Slice Units.
Jul 2 00:33:33.620081 systemd[1]: Reached target swap.target - Swaps.
Jul 2 00:33:33.620096 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 2 00:33:33.620109 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 2 00:33:33.620122 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 00:33:33.620137 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 2 00:33:33.620149 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 00:33:33.620161 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 2 00:33:33.620174 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 2 00:33:33.620186 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 2 00:33:33.620201 systemd[1]: Mounting media.mount - External Media Directory...
Jul 2 00:33:33.620219 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:33:33.620232 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 2 00:33:33.620245 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 2 00:33:33.620258 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 2 00:33:33.620271 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 2 00:33:33.620284 systemd[1]: Reached target machines.target - Containers.
Jul 2 00:33:33.620297 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 2 00:33:33.620310 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 00:33:33.620326 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 2 00:33:33.620339 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 2 00:33:33.620352 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 00:33:33.620364 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 2 00:33:33.620377 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 2 00:33:33.620390 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 2 00:33:33.620402 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 2 00:33:33.620415 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 2 00:33:33.620431 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 2 00:33:33.620444 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 2 00:33:33.620457 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 2 00:33:33.620470 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 2 00:33:33.620482 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 2 00:33:33.620495 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 2 00:33:33.620507 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 2 00:33:33.620520 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 2 00:33:33.620532 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 2 00:33:33.620547 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 2 00:33:33.620559 systemd[1]: Stopped verity-setup.service.
Jul 2 00:33:33.620572 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:33:33.620585 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 2 00:33:33.620597 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 2 00:33:33.620610 systemd[1]: Mounted media.mount - External Media Directory.
Jul 2 00:33:33.620625 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 2 00:33:33.620638 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 2 00:33:33.620650 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 2 00:33:33.620695 systemd-journald[1084]: Collecting audit messages is disabled.
Jul 2 00:33:33.620725 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 00:33:33.620738 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 2 00:33:33.620754 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 2 00:33:33.620767 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 00:33:33.620780 systemd-journald[1084]: Journal started
Jul 2 00:33:33.620807 systemd-journald[1084]: Runtime Journal (/run/log/journal/956e047af5f84bb58ec61141ca701e29) is 4.9M, max 39.3M, 34.4M free.
Jul 2 00:33:33.317356 systemd[1]: Queued start job for default target multi-user.target.
Jul 2 00:33:33.341569 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 2 00:33:33.341916 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 2 00:33:33.626245 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 00:33:33.626294 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 2 00:33:33.628207 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 00:33:33.628357 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 2 00:33:33.630313 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 2 00:33:33.631107 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 2 00:33:33.643332 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 2 00:33:33.646113 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 2 00:33:33.652085 kernel: fuse: init (API version 7.39)
Jul 2 00:33:33.659091 kernel: loop: module loaded
Jul 2 00:33:33.659547 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 00:33:33.660189 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 2 00:33:33.661039 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 2 00:33:33.661189 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 2 00:33:33.665917 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 2 00:33:33.674217 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 2 00:33:33.683172 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 2 00:33:33.683970 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 2 00:33:33.684006 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 2 00:33:33.686600 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jul 2 00:33:33.698535 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 2 00:33:33.707330 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 2 00:33:33.707967 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 00:33:33.710287 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 2 00:33:33.725505 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 2 00:33:33.726675 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 2 00:33:33.746520 systemd-journald[1084]: Time spent on flushing to /var/log/journal/956e047af5f84bb58ec61141ca701e29 is 73.850ms for 924 entries.
Jul 2 00:33:33.746520 systemd-journald[1084]: System Journal (/var/log/journal/956e047af5f84bb58ec61141ca701e29) is 8.0M, max 584.8M, 576.8M free.
Jul 2 00:33:33.850210 systemd-journald[1084]: Received client request to flush runtime journal.
Jul 2 00:33:33.850254 kernel: ACPI: bus type drm_connector registered
Jul 2 00:33:33.850275 kernel: loop0: detected capacity change from 0 to 139904
Jul 2 00:33:33.850292 kernel: block loop0: the capability attribute has been deprecated.
Jul 2 00:33:33.729777 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 2 00:33:33.730449 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 2 00:33:33.748302 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 2 00:33:33.750434 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 2 00:33:33.756305 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 2 00:33:33.758651 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 2 00:33:33.759531 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 2 00:33:33.760588 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 2 00:33:33.762380 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 2 00:33:33.774283 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jul 2 00:33:33.777052 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 2 00:33:33.777266 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 2 00:33:33.779799 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 2 00:33:33.781326 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 2 00:33:33.789332 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jul 2 00:33:33.836318 udevadm[1133]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jul 2 00:33:33.851530 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 2 00:33:33.861710 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 2 00:33:33.888388 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 2 00:33:33.888994 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jul 2 00:33:33.917140 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 2 00:33:33.928510 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 2 00:33:33.927828 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 2 00:33:33.948126 kernel: loop1: detected capacity change from 0 to 8
Jul 2 00:33:33.969094 kernel: loop2: detected capacity change from 0 to 211296
Jul 2 00:33:33.976271 systemd-tmpfiles[1147]: ACLs are not supported, ignoring.
Jul 2 00:33:33.977282 systemd-tmpfiles[1147]: ACLs are not supported, ignoring.
Jul 2 00:33:33.982747 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 2 00:33:34.034081 kernel: loop3: detected capacity change from 0 to 80568
Jul 2 00:33:34.104091 kernel: loop4: detected capacity change from 0 to 139904
Jul 2 00:33:34.184705 kernel: loop5: detected capacity change from 0 to 8
Jul 2 00:33:34.184783 kernel: loop6: detected capacity change from 0 to 211296
Jul 2 00:33:34.230114 kernel: loop7: detected capacity change from 0 to 80568
Jul 2 00:33:34.280383 (sd-merge)[1153]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Jul 2 00:33:34.283131 (sd-merge)[1153]: Merged extensions into '/usr'.
Jul 2 00:33:34.290000 systemd[1]: Reloading requested from client PID 1127 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 2 00:33:34.290022 systemd[1]: Reloading...
Jul 2 00:33:34.377141 zram_generator::config[1177]: No configuration found.
Jul 2 00:33:34.586312 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 00:33:34.646593 systemd[1]: Reloading finished in 355 ms.
Jul 2 00:33:34.688771 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 2 00:33:34.702863 systemd[1]: Starting ensure-sysext.service...
Jul 2 00:33:34.713268 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Jul 2 00:33:34.743645 systemd[1]: Reloading requested from client PID 1232 ('systemctl') (unit ensure-sysext.service)...
Jul 2 00:33:34.743793 systemd[1]: Reloading...
Jul 2 00:33:34.779631 systemd-tmpfiles[1233]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 2 00:33:34.781149 systemd-tmpfiles[1233]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 2 00:33:34.782029 systemd-tmpfiles[1233]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 2 00:33:34.783439 systemd-tmpfiles[1233]: ACLs are not supported, ignoring.
Jul 2 00:33:34.783502 systemd-tmpfiles[1233]: ACLs are not supported, ignoring.
Jul 2 00:33:34.788390 systemd-tmpfiles[1233]: Detected autofs mount point /boot during canonicalization of boot.
Jul 2 00:33:34.788405 systemd-tmpfiles[1233]: Skipping /boot
Jul 2 00:33:34.794009 ldconfig[1122]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 2 00:33:34.802183 systemd-tmpfiles[1233]: Detected autofs mount point /boot during canonicalization of boot.
Jul 2 00:33:34.802202 systemd-tmpfiles[1233]: Skipping /boot
Jul 2 00:33:34.834113 zram_generator::config[1257]: No configuration found.
Jul 2 00:33:34.987050 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 00:33:35.047986 systemd[1]: Reloading finished in 303 ms.
Jul 2 00:33:35.071390 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 2 00:33:35.072537 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 2 00:33:35.076516 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 2 00:33:35.088567 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 2 00:33:35.094265 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 2 00:33:35.102326 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 2 00:33:35.108250 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 2 00:33:35.116277 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 2 00:33:35.120246 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 2 00:33:35.133438 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 2 00:33:35.137783 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:33:35.137952 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 00:33:35.145461 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 00:33:35.152158 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 2 00:33:35.165437 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 2 00:33:35.166907 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 00:33:35.167081 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:33:35.173349 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:33:35.173583 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 00:33:35.173821 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 00:33:35.173993 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:33:35.181968 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 2 00:33:35.191766 systemd[1]: Finished ensure-sysext.service.
Jul 2 00:33:35.193203 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 2 00:33:35.201369 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 00:33:35.201525 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 00:33:35.203360 systemd-udevd[1324]: Using default interface naming scheme 'v255'.
Jul 2 00:33:35.211015 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 00:33:35.211699 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 2 00:33:35.214859 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:33:35.215315 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 00:33:35.222271 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 2 00:33:35.223010 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 00:33:35.223152 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 2 00:33:35.228675 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 2 00:33:35.235759 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 2 00:33:35.236393 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:33:35.236866 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 00:33:35.237115 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 2 00:33:35.238573 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 2 00:33:35.248402 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 00:33:35.248559 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 2 00:33:35.266615 augenrules[1354]: No rules Jul 2 00:33:35.268149 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 2 00:33:35.270153 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 2 00:33:35.274154 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 2 00:33:35.286387 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 2 00:33:35.304158 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 2 00:33:35.306157 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 2 00:33:35.306990 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 00:33:35.364171 systemd-resolved[1323]: Positive Trust Anchors: Jul 2 00:33:35.364618 systemd-resolved[1323]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 00:33:35.364726 systemd-resolved[1323]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Jul 2 00:33:35.373404 systemd-resolved[1323]: Using system hostname 'ci-3975-1-1-4-69569a1933.novalocal'. 
Jul 2 00:33:35.376249 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 2 00:33:35.377209 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 2 00:33:35.420106 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1376) Jul 2 00:33:35.453483 systemd-networkd[1365]: lo: Link UP Jul 2 00:33:35.453495 systemd-networkd[1365]: lo: Gained carrier Jul 2 00:33:35.460088 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1374) Jul 2 00:33:35.464984 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 2 00:33:35.465720 systemd[1]: Reached target time-set.target - System Time Set. Jul 2 00:33:35.475663 systemd-networkd[1365]: Enumeration completed Jul 2 00:33:35.475731 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 2 00:33:35.476602 systemd-timesyncd[1347]: No network connectivity, watching for changes. Jul 2 00:33:35.476753 systemd[1]: Reached target network.target - Network. Jul 2 00:33:35.477217 systemd-networkd[1365]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 00:33:35.477228 systemd-networkd[1365]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 00:33:35.482328 systemd-networkd[1365]: eth0: Link UP Jul 2 00:33:35.482339 systemd-networkd[1365]: eth0: Gained carrier Jul 2 00:33:35.482352 systemd-networkd[1365]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 00:33:35.483709 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 2 00:33:35.494197 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. 
Jul 2 00:33:35.495142 systemd-networkd[1365]: eth0: DHCPv4 address 172.24.4.39/24, gateway 172.24.4.1 acquired from 172.24.4.1 Jul 2 00:33:35.496558 systemd-timesyncd[1347]: Network configuration changed, trying to establish connection. Jul 2 00:33:35.542401 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jul 2 00:33:35.564981 systemd-networkd[1365]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 00:33:35.577153 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jul 2 00:33:35.579512 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 2 00:33:35.589278 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 2 00:33:35.596112 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jul 2 00:33:35.616205 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 2 00:33:35.620081 kernel: ACPI: button: Power Button [PWRF] Jul 2 00:33:35.637093 kernel: mousedev: PS/2 mouse device common for all mice Jul 2 00:33:35.645146 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Jul 2 00:33:35.645257 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Jul 2 00:33:35.648321 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jul 2 00:33:35.650189 kernel: Console: switching to colour dummy device 80x25 Jul 2 00:33:35.652488 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jul 2 00:33:35.652528 kernel: [drm] features: -context_init Jul 2 00:33:35.655721 kernel: [drm] number of scanouts: 1 Jul 2 00:33:35.655790 kernel: [drm] number of cap sets: 0 Jul 2 00:33:35.655808 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Jul 2 00:33:35.662818 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 2 00:33:35.663094 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 00:33:35.669513 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jul 2 00:33:35.669600 kernel: Console: switching to colour frame buffer device 128x48 Jul 2 00:33:35.673337 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 2 00:33:35.674106 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jul 2 00:33:35.691477 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 2 00:33:35.691733 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 00:33:35.698256 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 2 00:33:35.699254 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 2 00:33:35.706248 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 2 00:33:35.728414 lvm[1407]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 00:33:35.760487 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 2 00:33:35.762748 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 2 00:33:35.772513 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... 
Jul 2 00:33:35.777706 lvm[1412]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 00:33:35.800383 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 00:33:35.801831 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 2 00:33:35.804732 systemd-timesyncd[1347]: Contacted time server 51.158.153.13:123 (0.flatcar.pool.ntp.org). Jul 2 00:33:35.804815 systemd-timesyncd[1347]: Initial clock synchronization to Tue 2024-07-02 00:33:35.951535 UTC. Jul 2 00:33:35.805545 systemd[1]: Reached target sysinit.target - System Initialization. Jul 2 00:33:35.805777 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 2 00:33:35.805911 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 2 00:33:35.806410 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 2 00:33:35.807608 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 2 00:33:35.807695 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 2 00:33:35.807774 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 2 00:33:35.807804 systemd[1]: Reached target paths.target - Path Units. Jul 2 00:33:35.807866 systemd[1]: Reached target timers.target - Timer Units. Jul 2 00:33:35.809725 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 2 00:33:35.811233 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 2 00:33:35.818309 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 2 00:33:35.823124 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 2 00:33:35.824365 systemd[1]: Reached target sockets.target - Socket Units. 
Jul 2 00:33:35.826165 systemd[1]: Reached target basic.target - Basic System. Jul 2 00:33:35.827807 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 2 00:33:35.828044 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 2 00:33:35.844319 systemd[1]: Starting containerd.service - containerd container runtime... Jul 2 00:33:35.864461 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 2 00:33:35.877483 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 2 00:33:35.895397 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 2 00:33:35.908434 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 2 00:33:35.912543 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 2 00:33:35.920249 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 2 00:33:35.927184 jq[1423]: false Jul 2 00:33:35.931289 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 2 00:33:35.939359 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 2 00:33:35.951293 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 2 00:33:35.957917 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 2 00:33:35.960562 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 2 00:33:35.961027 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 2 00:33:35.968268 systemd[1]: Starting update-engine.service - Update Engine... 
Jul 2 00:33:35.970963 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 2 00:33:35.980811 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 2 00:33:35.983014 extend-filesystems[1424]: Found loop4 Jul 2 00:33:35.983014 extend-filesystems[1424]: Found loop5 Jul 2 00:33:35.983014 extend-filesystems[1424]: Found loop6 Jul 2 00:33:35.983014 extend-filesystems[1424]: Found loop7 Jul 2 00:33:35.983014 extend-filesystems[1424]: Found vda Jul 2 00:33:35.983014 extend-filesystems[1424]: Found vda1 Jul 2 00:33:35.983014 extend-filesystems[1424]: Found vda2 Jul 2 00:33:35.983014 extend-filesystems[1424]: Found vda3 Jul 2 00:33:35.983014 extend-filesystems[1424]: Found usr Jul 2 00:33:35.983014 extend-filesystems[1424]: Found vda4 Jul 2 00:33:35.983014 extend-filesystems[1424]: Found vda6 Jul 2 00:33:35.983014 extend-filesystems[1424]: Found vda7 Jul 2 00:33:35.983014 extend-filesystems[1424]: Found vda9 Jul 2 00:33:35.983014 extend-filesystems[1424]: Checking size of /dev/vda9 Jul 2 00:33:35.981002 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 2 00:33:36.084722 extend-filesystems[1424]: Resized partition /dev/vda9 Jul 2 00:33:35.991842 dbus-daemon[1420]: [system] SELinux support is enabled Jul 2 00:33:35.984419 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 2 00:33:36.088832 extend-filesystems[1455]: resize2fs 1.47.0 (5-Feb-2023) Jul 2 00:33:35.984586 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 2 00:33:36.092589 update_engine[1435]: I0702 00:33:36.030174 1435 main.cc:92] Flatcar Update Engine starting Jul 2 00:33:36.092589 update_engine[1435]: I0702 00:33:36.076318 1435 update_check_scheduler.cc:74] Next update check in 5m32s Jul 2 00:33:35.995502 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Jul 2 00:33:36.034277 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 2 00:33:36.034327 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 2 00:33:36.095598 jq[1438]: true Jul 2 00:33:36.055047 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 2 00:33:36.055071 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 2 00:33:36.055549 (ntainerd)[1444]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 2 00:33:36.058225 systemd[1]: motdgen.service: Deactivated successfully. Jul 2 00:33:36.058553 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 2 00:33:36.067918 systemd[1]: Started update-engine.service - Update Engine. Jul 2 00:33:36.102036 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 4635643 blocks Jul 2 00:33:36.101344 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 2 00:33:36.105208 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1369) Jul 2 00:33:36.117187 tar[1441]: linux-amd64/helm Jul 2 00:33:36.150171 jq[1456]: true Jul 2 00:33:36.266324 systemd-logind[1432]: New seat seat0. 
Jul 2 00:33:36.311878 locksmithd[1458]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 2 00:33:36.366644 systemd-logind[1432]: Watching system buttons on /dev/input/event2 (Power Button) Jul 2 00:33:36.514341 kernel: EXT4-fs (vda9): resized filesystem to 4635643 Jul 2 00:33:36.514432 extend-filesystems[1455]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 2 00:33:36.514432 extend-filesystems[1455]: old_desc_blocks = 1, new_desc_blocks = 3 Jul 2 00:33:36.514432 extend-filesystems[1455]: The filesystem on /dev/vda9 is now 4635643 (4k) blocks long. Jul 2 00:33:36.366667 systemd-logind[1432]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 2 00:33:36.531790 extend-filesystems[1424]: Resized filesystem in /dev/vda9 Jul 2 00:33:36.366871 systemd[1]: Started systemd-logind.service - User Login Management. Jul 2 00:33:36.508668 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 2 00:33:36.509278 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 2 00:33:36.546663 bash[1476]: Updated "/home/core/.ssh/authorized_keys" Jul 2 00:33:36.548961 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 2 00:33:36.563406 systemd[1]: Starting sshkeys.service... Jul 2 00:33:36.587394 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jul 2 00:33:36.601583 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jul 2 00:33:36.616146 containerd[1444]: time="2024-07-02T00:33:36.615651218Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17 Jul 2 00:33:36.668810 containerd[1444]: time="2024-07-02T00:33:36.668554808Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Jul 2 00:33:36.668810 containerd[1444]: time="2024-07-02T00:33:36.668616004Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 2 00:33:36.674456 containerd[1444]: time="2024-07-02T00:33:36.674424621Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.36-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 2 00:33:36.675601 containerd[1444]: time="2024-07-02T00:33:36.674860594Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 2 00:33:36.677605 containerd[1444]: time="2024-07-02T00:33:36.677117437Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 00:33:36.677605 containerd[1444]: time="2024-07-02T00:33:36.677272777Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 2 00:33:36.677605 containerd[1444]: time="2024-07-02T00:33:36.677391192Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 2 00:33:36.677605 containerd[1444]: time="2024-07-02T00:33:36.677462450Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 00:33:36.677605 containerd[1444]: time="2024-07-02T00:33:36.677483203Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Jul 2 00:33:36.677605 containerd[1444]: time="2024-07-02T00:33:36.677568367Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 2 00:33:36.678121 containerd[1444]: time="2024-07-02T00:33:36.677803739Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 2 00:33:36.678121 containerd[1444]: time="2024-07-02T00:33:36.677835083Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 2 00:33:36.678121 containerd[1444]: time="2024-07-02T00:33:36.677849081Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 2 00:33:36.678121 containerd[1444]: time="2024-07-02T00:33:36.677971333Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 00:33:36.678121 containerd[1444]: time="2024-07-02T00:33:36.677989586Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 2 00:33:36.678121 containerd[1444]: time="2024-07-02T00:33:36.678047161Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 2 00:33:36.678121 containerd[1444]: time="2024-07-02T00:33:36.678062711Z" level=info msg="metadata content store policy set" policy=shared Jul 2 00:33:36.688893 containerd[1444]: time="2024-07-02T00:33:36.688746637Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 2 00:33:36.688893 containerd[1444]: time="2024-07-02T00:33:36.688778093Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Jul 2 00:33:36.688893 containerd[1444]: time="2024-07-02T00:33:36.688794163Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 2 00:33:36.688893 containerd[1444]: time="2024-07-02T00:33:36.688835943Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 2 00:33:36.688893 containerd[1444]: time="2024-07-02T00:33:36.688854054Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 2 00:33:36.688893 containerd[1444]: time="2024-07-02T00:33:36.688865563Z" level=info msg="NRI interface is disabled by configuration." Jul 2 00:33:36.688893 containerd[1444]: time="2024-07-02T00:33:36.688878501Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 2 00:33:36.689228 containerd[1444]: time="2024-07-02T00:33:36.689024586Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 2 00:33:36.689228 containerd[1444]: time="2024-07-02T00:33:36.689045359Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 2 00:33:36.690623 containerd[1444]: time="2024-07-02T00:33:36.690092183Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 2 00:33:36.690623 containerd[1444]: time="2024-07-02T00:33:36.690134464Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 2 00:33:36.690623 containerd[1444]: time="2024-07-02T00:33:36.690151759Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 2 00:33:36.690623 containerd[1444]: time="2024-07-02T00:33:36.690170144Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Jul 2 00:33:36.690623 containerd[1444]: time="2024-07-02T00:33:36.690189336Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 2 00:33:36.690623 containerd[1444]: time="2024-07-02T00:33:36.690204069Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 2 00:33:36.690623 containerd[1444]: time="2024-07-02T00:33:36.690220536Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 2 00:33:36.690623 containerd[1444]: time="2024-07-02T00:33:36.690239167Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 2 00:33:36.690623 containerd[1444]: time="2024-07-02T00:33:36.690254268Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 2 00:33:36.690623 containerd[1444]: time="2024-07-02T00:33:36.690267725Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 2 00:33:36.690623 containerd[1444]: time="2024-07-02T00:33:36.690373009Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 2 00:33:36.690623 containerd[1444]: time="2024-07-02T00:33:36.690628257Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 2 00:33:36.690938 containerd[1444]: time="2024-07-02T00:33:36.690656478Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 2 00:33:36.690938 containerd[1444]: time="2024-07-02T00:33:36.690672548Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." 
type=io.containerd.transfer.v1 Jul 2 00:33:36.690938 containerd[1444]: time="2024-07-02T00:33:36.690695740Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 2 00:33:36.690938 containerd[1444]: time="2024-07-02T00:33:36.690747346Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 2 00:33:36.690938 containerd[1444]: time="2024-07-02T00:33:36.690762600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 2 00:33:36.690938 containerd[1444]: time="2024-07-02T00:33:36.690776802Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 2 00:33:36.690938 containerd[1444]: time="2024-07-02T00:33:36.690791392Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 2 00:33:36.690938 containerd[1444]: time="2024-07-02T00:33:36.690805339Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 2 00:33:36.690938 containerd[1444]: time="2024-07-02T00:33:36.690818858Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 2 00:33:36.690938 containerd[1444]: time="2024-07-02T00:33:36.690832296Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 2 00:33:36.690938 containerd[1444]: time="2024-07-02T00:33:36.690845508Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 2 00:33:36.690938 containerd[1444]: time="2024-07-02T00:33:36.690861027Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 2 00:33:36.691209 containerd[1444]: time="2024-07-02T00:33:36.691029142Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Jul 2 00:33:36.691209 containerd[1444]: time="2024-07-02T00:33:36.691050364Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 2 00:33:36.691209 containerd[1444]: time="2024-07-02T00:33:36.691065148Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 2 00:33:36.693801 containerd[1444]: time="2024-07-02T00:33:36.693180578Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 2 00:33:36.693801 containerd[1444]: time="2024-07-02T00:33:36.693211514Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 2 00:33:36.693801 containerd[1444]: time="2024-07-02T00:33:36.693229297Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 2 00:33:36.693801 containerd[1444]: time="2024-07-02T00:33:36.693243306Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 2 00:33:36.693801 containerd[1444]: time="2024-07-02T00:33:36.693256345Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 2 00:33:36.693935 containerd[1444]: time="2024-07-02T00:33:36.693524398Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 2 00:33:36.693935 containerd[1444]: time="2024-07-02T00:33:36.693594217Z" level=info msg="Connect containerd service" Jul 2 00:33:36.693935 containerd[1444]: time="2024-07-02T00:33:36.693619989Z" level=info msg="using legacy CRI server" Jul 2 00:33:36.693935 containerd[1444]: time="2024-07-02T00:33:36.693627794Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 2 00:33:36.693935 containerd[1444]: time="2024-07-02T00:33:36.693709561Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 2 00:33:36.696833 containerd[1444]: time="2024-07-02T00:33:36.696386235Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 00:33:36.696833 containerd[1444]: time="2024-07-02T00:33:36.696449076Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 2 00:33:36.696833 containerd[1444]: time="2024-07-02T00:33:36.696470094Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 2 00:33:36.696833 containerd[1444]: time="2024-07-02T00:33:36.696482664Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 2 00:33:36.696833 containerd[1444]: time="2024-07-02T00:33:36.696553442Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 2 00:33:36.696833 containerd[1444]: time="2024-07-02T00:33:36.696514711Z" level=info msg="Start subscribing containerd event" Jul 2 00:33:36.696833 containerd[1444]: time="2024-07-02T00:33:36.696649033Z" level=info msg="Start recovering state" Jul 2 00:33:36.696833 containerd[1444]: time="2024-07-02T00:33:36.696702477Z" level=info msg="Start event monitor" Jul 2 00:33:36.696833 containerd[1444]: time="2024-07-02T00:33:36.696719138Z" level=info msg="Start snapshots syncer" Jul 2 00:33:36.696833 containerd[1444]: time="2024-07-02T00:33:36.696729066Z" level=info msg="Start cni network conf syncer for default" Jul 2 00:33:36.696833 containerd[1444]: time="2024-07-02T00:33:36.696737442Z" level=info msg="Start streaming server" Jul 2 00:33:36.700624 containerd[1444]: time="2024-07-02T00:33:36.697596174Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 2 00:33:36.700624 containerd[1444]: time="2024-07-02T00:33:36.697707121Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 2 00:33:36.697887 systemd[1]: Started containerd.service - containerd container runtime. Jul 2 00:33:36.713579 containerd[1444]: time="2024-07-02T00:33:36.713226424Z" level=info msg="containerd successfully booted in 0.099328s" Jul 2 00:33:36.962363 systemd-networkd[1365]: eth0: Gained IPv6LL Jul 2 00:33:36.966215 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. 
Jul 2 00:33:36.972607 systemd[1]: Reached target network-online.target - Network is Online.
Jul 2 00:33:36.983400 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:33:36.996626 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 2 00:33:37.052907 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 2 00:33:37.107062 tar[1441]: linux-amd64/LICENSE
Jul 2 00:33:37.107319 tar[1441]: linux-amd64/README.md
Jul 2 00:33:37.129658 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jul 2 00:33:37.587515 sshd_keygen[1447]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 2 00:33:37.621347 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 2 00:33:37.628139 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jul 2 00:33:37.639487 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 2 00:33:37.653599 systemd[1]: Started sshd@0-172.24.4.39:22-172.24.4.1:34674.service - OpenSSH per-connection server daemon (172.24.4.1:34674).
Jul 2 00:33:37.657459 systemd[1]: issuegen.service: Deactivated successfully.
Jul 2 00:33:37.657654 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 2 00:33:37.671040 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 2 00:33:37.684648 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 2 00:33:37.695301 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 2 00:33:37.707611 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jul 2 00:33:37.709601 systemd[1]: Reached target getty.target - Login Prompts.
Jul 2 00:33:38.443002 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:33:38.458144 (kubelet)[1536]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 2 00:33:39.288667 sshd[1521]: Accepted publickey for core from 172.24.4.1 port 34674 ssh2: RSA SHA256:PCQlpQPF3MQUBJB7FGXO+NVNbcy5KrkTt8QQEvm9Ado
Jul 2 00:33:39.293397 sshd[1521]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:33:39.323300 systemd-logind[1432]: New session 1 of user core.
Jul 2 00:33:39.329187 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jul 2 00:33:39.342595 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jul 2 00:33:39.372341 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jul 2 00:33:39.385501 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jul 2 00:33:39.391434 (systemd)[1544]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:33:39.521145 systemd[1544]: Queued start job for default target default.target.
Jul 2 00:33:39.533370 systemd[1544]: Created slice app.slice - User Application Slice.
Jul 2 00:33:39.533576 systemd[1544]: Reached target paths.target - Paths.
Jul 2 00:33:39.533607 systemd[1544]: Reached target timers.target - Timers.
Jul 2 00:33:39.540286 systemd[1544]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jul 2 00:33:39.556463 systemd[1544]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jul 2 00:33:39.556643 systemd[1544]: Reached target sockets.target - Sockets.
Jul 2 00:33:39.556671 systemd[1544]: Reached target basic.target - Basic System.
Jul 2 00:33:39.556738 systemd[1544]: Reached target default.target - Main User Target.
Jul 2 00:33:39.556794 systemd[1544]: Startup finished in 158ms.
Jul 2 00:33:39.556956 systemd[1]: Started user@500.service - User Manager for UID 500.
Jul 2 00:33:39.570571 systemd[1]: Started session-1.scope - Session 1 of User core.
Jul 2 00:33:39.899808 kubelet[1536]: E0702 00:33:39.899593 1536 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 00:33:39.902497 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 00:33:39.902654 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 00:33:39.902981 systemd[1]: kubelet.service: Consumed 1.745s CPU time.
Jul 2 00:33:39.976814 systemd[1]: Started sshd@1-172.24.4.39:22-172.24.4.1:34688.service - OpenSSH per-connection server daemon (172.24.4.1:34688).
Jul 2 00:33:42.122063 sshd[1559]: Accepted publickey for core from 172.24.4.1 port 34688 ssh2: RSA SHA256:PCQlpQPF3MQUBJB7FGXO+NVNbcy5KrkTt8QQEvm9Ado
Jul 2 00:33:42.125373 sshd[1559]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:33:42.136585 systemd-logind[1432]: New session 2 of user core.
Jul 2 00:33:42.148510 systemd[1]: Started session-2.scope - Session 2 of User core.
Jul 2 00:33:42.751702 login[1528]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jul 2 00:33:42.756758 login[1529]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jul 2 00:33:42.761840 systemd-logind[1432]: New session 4 of user core.
Jul 2 00:33:42.772403 sshd[1559]: pam_unix(sshd:session): session closed for user core
Jul 2 00:33:42.774577 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 2 00:33:42.797820 systemd[1]: Started sshd@2-172.24.4.39:22-172.24.4.1:34698.service - OpenSSH per-connection server daemon (172.24.4.1:34698).
Jul 2 00:33:42.804198 systemd-logind[1432]: New session 3 of user core.
Jul 2 00:33:42.815363 systemd[1]: Started session-3.scope - Session 3 of User core.
Jul 2 00:33:42.816566 systemd[1]: sshd@1-172.24.4.39:22-172.24.4.1:34688.service: Deactivated successfully.
Jul 2 00:33:42.824235 systemd[1]: session-2.scope: Deactivated successfully.
Jul 2 00:33:42.830208 systemd-logind[1432]: Session 2 logged out. Waiting for processes to exit.
Jul 2 00:33:42.837010 systemd-logind[1432]: Removed session 2.
Jul 2 00:33:42.970621 coreos-metadata[1419]: Jul 02 00:33:42.970 WARN failed to locate config-drive, using the metadata service API instead
Jul 2 00:33:43.020349 coreos-metadata[1419]: Jul 02 00:33:43.019 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1
Jul 2 00:33:43.234996 coreos-metadata[1419]: Jul 02 00:33:43.234 INFO Fetch successful
Jul 2 00:33:43.234996 coreos-metadata[1419]: Jul 02 00:33:43.234 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jul 2 00:33:43.253924 coreos-metadata[1419]: Jul 02 00:33:43.253 INFO Fetch successful
Jul 2 00:33:43.253924 coreos-metadata[1419]: Jul 02 00:33:43.253 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
Jul 2 00:33:43.268921 coreos-metadata[1419]: Jul 02 00:33:43.268 INFO Fetch successful
Jul 2 00:33:43.268921 coreos-metadata[1419]: Jul 02 00:33:43.268 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
Jul 2 00:33:43.285451 coreos-metadata[1419]: Jul 02 00:33:43.285 INFO Fetch successful
Jul 2 00:33:43.285451 coreos-metadata[1419]: Jul 02 00:33:43.285 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
Jul 2 00:33:43.301698 coreos-metadata[1419]: Jul 02 00:33:43.301 INFO Fetch successful
Jul 2 00:33:43.301698 coreos-metadata[1419]: Jul 02 00:33:43.301 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
Jul 2 00:33:43.313465 coreos-metadata[1419]: Jul 02 00:33:43.313 INFO Fetch successful
Jul 2 00:33:43.367694 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jul 2 00:33:43.369857 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jul 2 00:33:43.725669 coreos-metadata[1491]: Jul 02 00:33:43.725 WARN failed to locate config-drive, using the metadata service API instead
Jul 2 00:33:43.767248 coreos-metadata[1491]: Jul 02 00:33:43.767 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Jul 2 00:33:43.785370 coreos-metadata[1491]: Jul 02 00:33:43.785 INFO Fetch successful
Jul 2 00:33:43.785370 coreos-metadata[1491]: Jul 02 00:33:43.785 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Jul 2 00:33:43.803389 coreos-metadata[1491]: Jul 02 00:33:43.803 INFO Fetch successful
Jul 2 00:33:43.809157 unknown[1491]: wrote ssh authorized keys file for user: core
Jul 2 00:33:43.857313 update-ssh-keys[1595]: Updated "/home/core/.ssh/authorized_keys"
Jul 2 00:33:43.858509 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jul 2 00:33:43.862911 systemd[1]: Finished sshkeys.service.
Jul 2 00:33:43.868256 systemd[1]: Reached target multi-user.target - Multi-User System.
Jul 2 00:33:43.868860 systemd[1]: Startup finished in 1.175s (kernel) + 15.913s (initrd) + 11.263s (userspace) = 28.352s.
Jul 2 00:33:44.273867 sshd[1568]: Accepted publickey for core from 172.24.4.1 port 34698 ssh2: RSA SHA256:PCQlpQPF3MQUBJB7FGXO+NVNbcy5KrkTt8QQEvm9Ado
Jul 2 00:33:44.276820 sshd[1568]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:33:44.288796 systemd-logind[1432]: New session 5 of user core.
Jul 2 00:33:44.297393 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 2 00:33:45.061431 sshd[1568]: pam_unix(sshd:session): session closed for user core
Jul 2 00:33:45.067276 systemd[1]: sshd@2-172.24.4.39:22-172.24.4.1:34698.service: Deactivated successfully.
Jul 2 00:33:45.070683 systemd[1]: session-5.scope: Deactivated successfully.
Jul 2 00:33:45.074554 systemd-logind[1432]: Session 5 logged out. Waiting for processes to exit.
Jul 2 00:33:45.076882 systemd-logind[1432]: Removed session 5.
Jul 2 00:33:50.157541 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 2 00:33:50.171668 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:33:50.529176 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:33:50.539414 (kubelet)[1611]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 2 00:33:50.745344 kubelet[1611]: E0702 00:33:50.745198 1611 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 00:33:50.752366 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 00:33:50.752648 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 00:33:55.126639 systemd[1]: Started sshd@3-172.24.4.39:22-172.24.4.1:53172.service - OpenSSH per-connection server daemon (172.24.4.1:53172).
Jul 2 00:33:56.322135 sshd[1620]: Accepted publickey for core from 172.24.4.1 port 53172 ssh2: RSA SHA256:PCQlpQPF3MQUBJB7FGXO+NVNbcy5KrkTt8QQEvm9Ado
Jul 2 00:33:56.325254 sshd[1620]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:33:56.337476 systemd-logind[1432]: New session 6 of user core.
Jul 2 00:33:56.347379 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 2 00:33:56.969163 sshd[1620]: pam_unix(sshd:session): session closed for user core
Jul 2 00:33:56.981526 systemd[1]: sshd@3-172.24.4.39:22-172.24.4.1:53172.service: Deactivated successfully.
Jul 2 00:33:56.984879 systemd[1]: session-6.scope: Deactivated successfully.
Jul 2 00:33:56.988399 systemd-logind[1432]: Session 6 logged out. Waiting for processes to exit.
Jul 2 00:33:56.996673 systemd[1]: Started sshd@4-172.24.4.39:22-172.24.4.1:53184.service - OpenSSH per-connection server daemon (172.24.4.1:53184).
Jul 2 00:33:56.999611 systemd-logind[1432]: Removed session 6.
Jul 2 00:33:58.370205 sshd[1627]: Accepted publickey for core from 172.24.4.1 port 53184 ssh2: RSA SHA256:PCQlpQPF3MQUBJB7FGXO+NVNbcy5KrkTt8QQEvm9Ado
Jul 2 00:33:58.373589 sshd[1627]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:33:58.395385 systemd-logind[1432]: New session 7 of user core.
Jul 2 00:33:58.406447 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 2 00:33:59.143448 sshd[1627]: pam_unix(sshd:session): session closed for user core
Jul 2 00:33:59.154902 systemd[1]: sshd@4-172.24.4.39:22-172.24.4.1:53184.service: Deactivated successfully.
Jul 2 00:33:59.159900 systemd[1]: session-7.scope: Deactivated successfully.
Jul 2 00:33:59.162347 systemd-logind[1432]: Session 7 logged out. Waiting for processes to exit.
Jul 2 00:33:59.170653 systemd[1]: Started sshd@5-172.24.4.39:22-172.24.4.1:53192.service - OpenSSH per-connection server daemon (172.24.4.1:53192).
Jul 2 00:33:59.173349 systemd-logind[1432]: Removed session 7.
Jul 2 00:34:00.820980 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 2 00:34:00.829502 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:34:00.843952 sshd[1634]: Accepted publickey for core from 172.24.4.1 port 53192 ssh2: RSA SHA256:PCQlpQPF3MQUBJB7FGXO+NVNbcy5KrkTt8QQEvm9Ado
Jul 2 00:34:00.847718 sshd[1634]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:34:00.875225 systemd-logind[1432]: New session 8 of user core.
Jul 2 00:34:00.880595 systemd[1]: Started session-8.scope - Session 8 of User core.
Jul 2 00:34:01.181598 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:34:01.197935 (kubelet)[1645]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 2 00:34:01.287953 kubelet[1645]: E0702 00:34:01.287882 1645 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 00:34:01.292312 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 00:34:01.292479 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 00:34:01.718558 sshd[1634]: pam_unix(sshd:session): session closed for user core
Jul 2 00:34:01.737214 systemd[1]: sshd@5-172.24.4.39:22-172.24.4.1:53192.service: Deactivated successfully.
Jul 2 00:34:01.740245 systemd[1]: session-8.scope: Deactivated successfully.
Jul 2 00:34:01.741747 systemd-logind[1432]: Session 8 logged out. Waiting for processes to exit.
Jul 2 00:34:01.760719 systemd[1]: Started sshd@6-172.24.4.39:22-172.24.4.1:53204.service - OpenSSH per-connection server daemon (172.24.4.1:53204).
Jul 2 00:34:01.764413 systemd-logind[1432]: Removed session 8.
Jul 2 00:34:02.845993 sshd[1657]: Accepted publickey for core from 172.24.4.1 port 53204 ssh2: RSA SHA256:PCQlpQPF3MQUBJB7FGXO+NVNbcy5KrkTt8QQEvm9Ado
Jul 2 00:34:02.849397 sshd[1657]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:34:02.858740 systemd-logind[1432]: New session 9 of user core.
Jul 2 00:34:02.868366 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 2 00:34:03.375987 sudo[1660]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 2 00:34:03.376665 sudo[1660]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 2 00:34:03.397510 sudo[1660]: pam_unix(sudo:session): session closed for user root
Jul 2 00:34:03.670557 sshd[1657]: pam_unix(sshd:session): session closed for user core
Jul 2 00:34:03.683424 systemd[1]: sshd@6-172.24.4.39:22-172.24.4.1:53204.service: Deactivated successfully.
Jul 2 00:34:03.686393 systemd[1]: session-9.scope: Deactivated successfully.
Jul 2 00:34:03.687924 systemd-logind[1432]: Session 9 logged out. Waiting for processes to exit.
Jul 2 00:34:03.695662 systemd[1]: Started sshd@7-172.24.4.39:22-172.24.4.1:53208.service - OpenSSH per-connection server daemon (172.24.4.1:53208).
Jul 2 00:34:03.699568 systemd-logind[1432]: Removed session 9.
Jul 2 00:34:04.898675 sshd[1665]: Accepted publickey for core from 172.24.4.1 port 53208 ssh2: RSA SHA256:PCQlpQPF3MQUBJB7FGXO+NVNbcy5KrkTt8QQEvm9Ado
Jul 2 00:34:04.901460 sshd[1665]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:34:04.912454 systemd-logind[1432]: New session 10 of user core.
Jul 2 00:34:04.921426 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 2 00:34:05.392349 sudo[1669]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 2 00:34:05.392947 sudo[1669]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 2 00:34:05.400053 sudo[1669]: pam_unix(sudo:session): session closed for user root
Jul 2 00:34:05.413733 sudo[1668]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jul 2 00:34:05.414390 sudo[1668]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 2 00:34:05.443667 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jul 2 00:34:05.448502 auditctl[1672]: No rules
Jul 2 00:34:05.449248 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 2 00:34:05.449654 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jul 2 00:34:05.459301 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 2 00:34:05.517426 augenrules[1690]: No rules
Jul 2 00:34:05.520199 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 2 00:34:05.522588 sudo[1668]: pam_unix(sudo:session): session closed for user root
Jul 2 00:34:05.710316 sshd[1665]: pam_unix(sshd:session): session closed for user core
Jul 2 00:34:05.723613 systemd[1]: sshd@7-172.24.4.39:22-172.24.4.1:53208.service: Deactivated successfully.
Jul 2 00:34:05.726573 systemd[1]: session-10.scope: Deactivated successfully.
Jul 2 00:34:05.729706 systemd-logind[1432]: Session 10 logged out. Waiting for processes to exit.
Jul 2 00:34:05.737670 systemd[1]: Started sshd@8-172.24.4.39:22-172.24.4.1:54804.service - OpenSSH per-connection server daemon (172.24.4.1:54804).
Jul 2 00:34:05.740940 systemd-logind[1432]: Removed session 10.
Jul 2 00:34:06.895848 sshd[1698]: Accepted publickey for core from 172.24.4.1 port 54804 ssh2: RSA SHA256:PCQlpQPF3MQUBJB7FGXO+NVNbcy5KrkTt8QQEvm9Ado
Jul 2 00:34:06.899007 sshd[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:34:06.908915 systemd-logind[1432]: New session 11 of user core.
Jul 2 00:34:06.919386 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 2 00:34:07.389039 sudo[1701]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 2 00:34:07.390563 sudo[1701]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 2 00:34:07.645351 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 2 00:34:07.645751 (dockerd)[1710]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 2 00:34:08.125072 dockerd[1710]: time="2024-07-02T00:34:08.124768993Z" level=info msg="Starting up"
Jul 2 00:34:08.208897 dockerd[1710]: time="2024-07-02T00:34:08.208695670Z" level=info msg="Loading containers: start."
Jul 2 00:34:08.418422 kernel: Initializing XFRM netlink socket
Jul 2 00:34:08.574046 systemd-networkd[1365]: docker0: Link UP
Jul 2 00:34:08.591302 dockerd[1710]: time="2024-07-02T00:34:08.591251760Z" level=info msg="Loading containers: done."
Jul 2 00:34:08.733549 dockerd[1710]: time="2024-07-02T00:34:08.732278726Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 2 00:34:08.733549 dockerd[1710]: time="2024-07-02T00:34:08.732629671Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9
Jul 2 00:34:08.733549 dockerd[1710]: time="2024-07-02T00:34:08.732851721Z" level=info msg="Daemon has completed initialization"
Jul 2 00:34:08.733239 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2879067073-merged.mount: Deactivated successfully.
Jul 2 00:34:08.790022 dockerd[1710]: time="2024-07-02T00:34:08.789424618Z" level=info msg="API listen on /run/docker.sock"
Jul 2 00:34:08.790343 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 2 00:34:10.388099 containerd[1444]: time="2024-07-02T00:34:10.387923896Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.6\""
Jul 2 00:34:11.226974 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3827450005.mount: Deactivated successfully.
Jul 2 00:34:11.320243 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jul 2 00:34:11.331534 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:34:11.429153 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:34:11.433833 (kubelet)[1860]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 2 00:34:11.753162 kubelet[1860]: E0702 00:34:11.753020 1860 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 00:34:11.757802 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 00:34:11.758187 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 00:34:13.869497 containerd[1444]: time="2024-07-02T00:34:13.869396736Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:34:13.871090 containerd[1444]: time="2024-07-02T00:34:13.870892578Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.6: active requests=0, bytes read=35235845"
Jul 2 00:34:13.872377 containerd[1444]: time="2024-07-02T00:34:13.872328928Z" level=info msg="ImageCreate event name:\"sha256:3af2ab51e136465590d968a2052e02e180fc7967a03724b269c1337e8f09d36f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:34:13.875690 containerd[1444]: time="2024-07-02T00:34:13.875617174Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f4d993b3d73cc0d59558be584b5b40785b4a96874bc76873b69d1dd818485e70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:34:13.877184 containerd[1444]: time="2024-07-02T00:34:13.876984460Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.6\" with image id \"sha256:3af2ab51e136465590d968a2052e02e180fc7967a03724b269c1337e8f09d36f\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f4d993b3d73cc0d59558be584b5b40785b4a96874bc76873b69d1dd818485e70\", size \"35232637\" in 3.488929578s"
Jul 2 00:34:13.877184 containerd[1444]: time="2024-07-02T00:34:13.877023770Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.6\" returns image reference \"sha256:3af2ab51e136465590d968a2052e02e180fc7967a03724b269c1337e8f09d36f\""
Jul 2 00:34:13.901074 containerd[1444]: time="2024-07-02T00:34:13.900960155Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.6\""
Jul 2 00:34:16.425983 containerd[1444]: time="2024-07-02T00:34:16.425770593Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:34:16.430528 containerd[1444]: time="2024-07-02T00:34:16.430413222Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.6: active requests=0, bytes read=32069755"
Jul 2 00:34:16.433649 containerd[1444]: time="2024-07-02T00:34:16.433509604Z" level=info msg="ImageCreate event name:\"sha256:083b81fc09e858d3e0d4b42f567a9d44a2232b60bac396a94cbdd7ce1098235e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:34:16.440850 containerd[1444]: time="2024-07-02T00:34:16.440755084Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:692fc3f88a60b3afc76492ad347306d34042000f56f230959e9367fd59c48b1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:34:16.445271 containerd[1444]: time="2024-07-02T00:34:16.444363093Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.6\" with image id \"sha256:083b81fc09e858d3e0d4b42f567a9d44a2232b60bac396a94cbdd7ce1098235e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:692fc3f88a60b3afc76492ad347306d34042000f56f230959e9367fd59c48b1e\", size \"33590639\" in 2.543112184s"
Jul 2 00:34:16.445271 containerd[1444]: time="2024-07-02T00:34:16.444448430Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.6\" returns image reference \"sha256:083b81fc09e858d3e0d4b42f567a9d44a2232b60bac396a94cbdd7ce1098235e\""
Jul 2 00:34:16.499503 containerd[1444]: time="2024-07-02T00:34:16.499441741Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.6\""
Jul 2 00:34:18.415499 containerd[1444]: time="2024-07-02T00:34:18.415265774Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:34:18.418203 containerd[1444]: time="2024-07-02T00:34:18.417886862Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.6: active requests=0, bytes read=17153811"
Jul 2 00:34:18.419709 containerd[1444]: time="2024-07-02T00:34:18.419602876Z" level=info msg="ImageCreate event name:\"sha256:49d9b8328a8fda6ebca6b3226c6d722d92ec7adffff18668511a88058444cf15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:34:18.430754 containerd[1444]: time="2024-07-02T00:34:18.430619233Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b91a4e45debd0d5336d9f533aefdf47d4b39b24071feb459e521709b9e4ec24f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:34:18.433621 containerd[1444]: time="2024-07-02T00:34:18.433547742Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.6\" with image id \"sha256:49d9b8328a8fda6ebca6b3226c6d722d92ec7adffff18668511a88058444cf15\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b91a4e45debd0d5336d9f533aefdf47d4b39b24071feb459e521709b9e4ec24f\", size \"18674713\" in 1.933773073s"
Jul 2 00:34:18.434039 containerd[1444]: time="2024-07-02T00:34:18.433882358Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.6\" returns image reference \"sha256:49d9b8328a8fda6ebca6b3226c6d722d92ec7adffff18668511a88058444cf15\""
Jul 2 00:34:18.485660 containerd[1444]: time="2024-07-02T00:34:18.485560254Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.6\""
Jul 2 00:34:19.898754 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3748577336.mount: Deactivated successfully.
Jul 2 00:34:20.696147 containerd[1444]: time="2024-07-02T00:34:20.695931594Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:34:20.698040 containerd[1444]: time="2024-07-02T00:34:20.697814091Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.6: active requests=0, bytes read=28409342"
Jul 2 00:34:20.700119 containerd[1444]: time="2024-07-02T00:34:20.699290000Z" level=info msg="ImageCreate event name:\"sha256:9c49592198fa15b509fe4ee4a538067866776e325d6dd33c77ad6647e1d3aac9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:34:20.705643 containerd[1444]: time="2024-07-02T00:34:20.705572762Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:88bacb3e1d6c0c37c6da95c6d6b8e30531d0b4d0ab540cc290b0af51fbfebd90\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:34:20.707683 containerd[1444]: time="2024-07-02T00:34:20.707603736Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.6\" with image id \"sha256:9c49592198fa15b509fe4ee4a538067866776e325d6dd33c77ad6647e1d3aac9\", repo tag \"registry.k8s.io/kube-proxy:v1.29.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:88bacb3e1d6c0c37c6da95c6d6b8e30531d0b4d0ab540cc290b0af51fbfebd90\", size \"28408353\" in 2.221960235s"
Jul 2 00:34:20.707803 containerd[1444]: time="2024-07-02T00:34:20.707681748Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.6\" returns image reference \"sha256:9c49592198fa15b509fe4ee4a538067866776e325d6dd33c77ad6647e1d3aac9\""
Jul 2 00:34:20.759021 containerd[1444]: time="2024-07-02T00:34:20.758924692Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Jul 2 00:34:21.460711 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1075160341.mount: Deactivated successfully.
Jul 2 00:34:21.527271 update_engine[1435]: I0702 00:34:21.527206 1435 update_attempter.cc:509] Updating boot flags...
Jul 2 00:34:21.591988 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1967)
Jul 2 00:34:21.659094 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1967)
Jul 2 00:34:21.731077 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1967)
Jul 2 00:34:21.759977 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jul 2 00:34:21.772701 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:34:22.167821 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:34:22.176456 (kubelet)[1983]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 2 00:34:22.253563 kubelet[1983]: E0702 00:34:22.253505 1983 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 00:34:22.255547 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 00:34:22.255852 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 00:34:22.918115 containerd[1444]: time="2024-07-02T00:34:22.917221671Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:34:22.919507 containerd[1444]: time="2024-07-02T00:34:22.919460872Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769"
Jul 2 00:34:22.920818 containerd[1444]: time="2024-07-02T00:34:22.920774677Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:34:22.924131 containerd[1444]: time="2024-07-02T00:34:22.924040827Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:34:22.925516 containerd[1444]: time="2024-07-02T00:34:22.925289172Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.166279814s"
Jul 2 00:34:22.925516 containerd[1444]: time="2024-07-02T00:34:22.925332842Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Jul 2 00:34:22.950853 containerd[1444]: time="2024-07-02T00:34:22.950819526Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Jul 2 00:34:23.513173 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1293994351.mount: Deactivated successfully.
Jul 2 00:34:23.523313 containerd[1444]: time="2024-07-02T00:34:23.523183871Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:34:23.525353 containerd[1444]: time="2024-07-02T00:34:23.525206216Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298"
Jul 2 00:34:23.526755 containerd[1444]: time="2024-07-02T00:34:23.526619591Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:34:23.532431 containerd[1444]: time="2024-07-02T00:34:23.532291072Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:34:23.535271 containerd[1444]: time="2024-07-02T00:34:23.534598033Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 583.567265ms"
Jul 2 00:34:23.535271 containerd[1444]: time="2024-07-02T00:34:23.534676190Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Jul 2 00:34:23.582999 containerd[1444]: time="2024-07-02T00:34:23.582900922Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Jul 2 00:34:24.254849 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3286260495.mount: Deactivated successfully.
Jul 2 00:34:27.248440 containerd[1444]: time="2024-07-02T00:34:27.248384187Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:34:27.250091 containerd[1444]: time="2024-07-02T00:34:27.250004542Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651633"
Jul 2 00:34:27.251389 containerd[1444]: time="2024-07-02T00:34:27.251336975Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:34:27.255770 containerd[1444]: time="2024-07-02T00:34:27.255734391Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:34:27.262089 containerd[1444]: time="2024-07-02T00:34:27.260946400Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 3.677972533s"
Jul 2 00:34:27.262089 containerd[1444]: time="2024-07-02T00:34:27.261732750Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\""
Jul 2 00:34:32.312781 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Jul 2 00:34:32.326490 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:34:32.349011 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 2 00:34:32.349345 systemd[1]: kubelet.service: Failed with result 'signal'.
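The "Pulled image" entries report both the unpacked size in bytes and the wall-clock duration, so the effective pull rate can be read straight off the log. A small sketch using the etcd pull above (the two constants are copied verbatim from that entry):

```python
# Effective pull rate for the etcd image reported in the log above.
size_bytes = 56_649_232   # from: size "56649232"
duration_s = 3.677972533  # from: in 3.677972533s

rate_mib_s = size_bytes / duration_s / (1024 * 1024)
print(f"{rate_mib_s:.1f} MiB/s")  # → 14.7 MiB/s
```

Note the size here is the compressed/transfer size containerd accounts for; the "bytes read" figure in the neighboring "stop pulling" entry (56651633) differs slightly because it counts all registry traffic for the pull.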
Jul 2 00:34:32.349897 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:34:32.359711 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:34:32.392012 systemd[1]: Reloading requested from client PID 2151 ('systemctl') (unit session-11.scope)...
Jul 2 00:34:32.392484 systemd[1]: Reloading...
Jul 2 00:34:32.507098 zram_generator::config[2185]: No configuration found.
Jul 2 00:34:32.886309 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 00:34:32.972056 systemd[1]: Reloading finished in 578 ms.
Jul 2 00:34:33.045478 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:34:33.058339 (kubelet)[2246]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 2 00:34:33.059344 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:34:33.060024 systemd[1]: kubelet.service: Deactivated successfully.
Jul 2 00:34:33.060395 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:34:33.068633 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:34:33.197503 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:34:33.202717 (kubelet)[2258]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 2 00:34:33.492485 kubelet[2258]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 00:34:33.492485 kubelet[2258]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 2 00:34:33.492485 kubelet[2258]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 00:34:33.492485 kubelet[2258]: I0702 00:34:33.491457    2258 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 2 00:34:34.593295 kubelet[2258]: I0702 00:34:34.593209    2258 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Jul 2 00:34:34.593295 kubelet[2258]: I0702 00:34:34.593264    2258 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 2 00:34:34.593295 kubelet[2258]: I0702 00:34:34.593717    2258 server.go:919] "Client rotation is on, will bootstrap in background"
Jul 2 00:34:34.637621 kubelet[2258]: E0702 00:34:34.637571    2258 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.39:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.39:6443: connect: connection refused
Jul 2 00:34:34.648211 kubelet[2258]: I0702 00:34:34.647100    2258 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 2 00:34:34.677132 kubelet[2258]: I0702 00:34:34.676857    2258 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 2 00:34:34.677617 kubelet[2258]: I0702 00:34:34.677592    2258 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 2 00:34:34.680605 kubelet[2258]: I0702 00:34:34.680170    2258 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jul 2 00:34:34.680605 kubelet[2258]: I0702 00:34:34.680228    2258 topology_manager.go:138] "Creating topology manager with none policy"
Jul 2 00:34:34.680605 kubelet[2258]: I0702 00:34:34.680256    2258 container_manager_linux.go:301] "Creating device plugin manager"
Jul 2 00:34:34.684746 kubelet[2258]: I0702 00:34:34.684471    2258 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 00:34:34.691281 kubelet[2258]: W0702 00:34:34.691201    2258 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.24.4.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975-1-1-4-69569a1933.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.39:6443: connect: connection refused
Jul 2 00:34:34.691486 kubelet[2258]: E0702 00:34:34.691461    2258 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975-1-1-4-69569a1933.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.39:6443: connect: connection refused
Jul 2 00:34:34.691837 kubelet[2258]: I0702 00:34:34.691660    2258 kubelet.go:396] "Attempting to sync node with API server"
Jul 2 00:34:34.691837 kubelet[2258]: I0702 00:34:34.691704    2258 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 2 00:34:34.691837 kubelet[2258]: I0702 00:34:34.691787    2258 kubelet.go:312] "Adding apiserver pod source"
Jul 2 00:34:34.694110 kubelet[2258]: I0702 00:34:34.693455    2258 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 2 00:34:34.697799 kubelet[2258]: W0702 00:34:34.697721    2258 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.24.4.39:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.39:6443: connect: connection refused
Jul 2 00:34:34.698118 kubelet[2258]: E0702 00:34:34.698039    2258 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.39:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.39:6443: connect: connection refused
Jul 2 00:34:34.699285 kubelet[2258]: I0702 00:34:34.699243    2258 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1"
Jul 2 00:34:34.707422 kubelet[2258]: I0702 00:34:34.707357    2258 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 2 00:34:34.712498 kubelet[2258]: W0702 00:34:34.712438    2258 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 2 00:34:34.714196 kubelet[2258]: I0702 00:34:34.713616    2258 server.go:1256] "Started kubelet"
Jul 2 00:34:34.716039 kubelet[2258]: I0702 00:34:34.715978    2258 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 2 00:34:34.736031 kubelet[2258]: I0702 00:34:34.730148    2258 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Jul 2 00:34:34.742202 kubelet[2258]: I0702 00:34:34.740984    2258 server.go:461] "Adding debug handlers to kubelet server"
Jul 2 00:34:34.746626 kubelet[2258]: I0702 00:34:34.745022    2258 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 2 00:34:34.746626 kubelet[2258]: I0702 00:34:34.745513    2258 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 2 00:34:34.751550 kubelet[2258]: I0702 00:34:34.751500    2258 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jul 2 00:34:34.753962 kubelet[2258]: I0702 00:34:34.753602    2258 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Jul 2 00:34:34.753962 kubelet[2258]: I0702 00:34:34.753777    2258 reconciler_new.go:29] "Reconciler: start to sync state"
Jul 2 00:34:34.761993 kubelet[2258]: E0702 00:34:34.761940    2258 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.39:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.39:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3975-1-1-4-69569a1933.novalocal.17de3e2def6eafe8  default    0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3975-1-1-4-69569a1933.novalocal,UID:ci-3975-1-1-4-69569a1933.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3975-1-1-4-69569a1933.novalocal,},FirstTimestamp:2024-07-02 00:34:34.713567208 +0000 UTC m=+1.505165411,LastTimestamp:2024-07-02 00:34:34.713567208 +0000 UTC m=+1.505165411,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3975-1-1-4-69569a1933.novalocal,}"
Jul 2 00:34:34.762589 kubelet[2258]: W0702 00:34:34.762510    2258 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.24.4.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.39:6443: connect: connection refused
Jul 2 00:34:34.762633 kubelet[2258]: E0702 00:34:34.762619    2258 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.39:6443: connect: connection refused
Jul 2 00:34:34.762811 kubelet[2258]: E0702 00:34:34.762782    2258 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975-1-1-4-69569a1933.novalocal?timeout=10s\": dial tcp 172.24.4.39:6443: connect: connection refused" interval="200ms"
Jul 2 00:34:34.763355 kubelet[2258]: I0702 00:34:34.763311    2258 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 2 00:34:34.767824 kubelet[2258]: E0702 00:34:34.767758    2258 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 2 00:34:34.768036 kubelet[2258]: I0702 00:34:34.767980    2258 factory.go:221] Registration of the containerd container factory successfully
Jul 2 00:34:34.768036 kubelet[2258]: I0702 00:34:34.768025    2258 factory.go:221] Registration of the systemd container factory successfully
Jul 2 00:34:34.789405 kubelet[2258]: I0702 00:34:34.789361    2258 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 2 00:34:34.792244 kubelet[2258]: I0702 00:34:34.792089    2258 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 2 00:34:34.792244 kubelet[2258]: I0702 00:34:34.792128    2258 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 2 00:34:34.792244 kubelet[2258]: I0702 00:34:34.792158    2258 kubelet.go:2329] "Starting kubelet main sync loop"
Jul 2 00:34:34.792244 kubelet[2258]: E0702 00:34:34.792224    2258 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 2 00:34:34.801743 kubelet[2258]: W0702 00:34:34.801043    2258 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.24.4.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.39:6443: connect: connection refused
Jul 2 00:34:34.801743 kubelet[2258]: E0702 00:34:34.801123    2258 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.39:6443: connect: connection refused
Jul 2 00:34:34.826105 kubelet[2258]: I0702 00:34:34.826055    2258 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 2 00:34:34.826105 kubelet[2258]: I0702 00:34:34.826090    2258 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 2 00:34:34.826105 kubelet[2258]: I0702 00:34:34.826106    2258 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 00:34:34.856415 kubelet[2258]: I0702 00:34:34.856181    2258 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975-1-1-4-69569a1933.novalocal"
Jul 2 00:34:34.856840 kubelet[2258]: I0702 00:34:34.856199    2258 policy_none.go:49] "None policy: Start"
Jul 2 00:34:34.857739 kubelet[2258]: E0702 00:34:34.857602    2258 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.39:6443/api/v1/nodes\": dial tcp 172.24.4.39:6443: connect: connection refused" node="ci-3975-1-1-4-69569a1933.novalocal"
Jul 2 00:34:34.858703 kubelet[2258]: I0702 00:34:34.858558    2258 memory_manager.go:170] "Starting memorymanager" policy="None"
Jul 2 00:34:34.858703 kubelet[2258]: I0702 00:34:34.858600    2258 state_mem.go:35] "Initializing new in-memory state store"
Jul 2 00:34:34.875484 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jul 2 00:34:34.894163 kubelet[2258]: E0702 00:34:34.893410    2258 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jul 2 00:34:34.935680 kubelet[2258]: I0702 00:34:34.928798    2258 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 2 00:34:34.935680 kubelet[2258]: I0702 00:34:34.929429    2258 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 2 00:34:34.894180 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jul 2 00:34:34.916000 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
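The `container_manager_linux.go:270` entry above dumps the kubelet's effective node config, including its default hard eviction thresholds as embedded JSON. A sketch of decoding that fragment to make the thresholds readable; the JSON literal is copied verbatim from the log line.

```python
import json

# HardEvictionThresholds as dumped in the nodeConfig log line above.
thresholds = json.loads("""
[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},
 {"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},
 {"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},
 {"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}]
""")

for t in thresholds:
    v = t["Value"]
    # Each threshold carries either an absolute Quantity or a Percentage.
    limit = v["Quantity"] or f'{v["Percentage"]:.0%}'
    print(f'{t["Signal"]}: evict when free < {limit}')
```

This prints the four default thresholds (imagefs 15%, memory 100Mi, nodefs 10%, inodes 5%) that drive the eviction manager mentioned further down in the log.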
Jul 2 00:34:34.949706 kubelet[2258]: E0702 00:34:34.938292    2258 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3975-1-1-4-69569a1933.novalocal\" not found"
Jul 2 00:34:34.963908 kubelet[2258]: E0702 00:34:34.963855    2258 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975-1-1-4-69569a1933.novalocal?timeout=10s\": dial tcp 172.24.4.39:6443: connect: connection refused" interval="400ms"
Jul 2 00:34:35.062255 kubelet[2258]: I0702 00:34:35.062164    2258 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975-1-1-4-69569a1933.novalocal"
Jul 2 00:34:35.062871 kubelet[2258]: E0702 00:34:35.062824    2258 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.39:6443/api/v1/nodes\": dial tcp 172.24.4.39:6443: connect: connection refused" node="ci-3975-1-1-4-69569a1933.novalocal"
Jul 2 00:34:35.094274 kubelet[2258]: I0702 00:34:35.094127    2258 topology_manager.go:215] "Topology Admit Handler" podUID="2fca3c62f0226648cd3a1bf1a935102f" podNamespace="kube-system" podName="kube-apiserver-ci-3975-1-1-4-69569a1933.novalocal"
Jul 2 00:34:35.096568 kubelet[2258]: I0702 00:34:35.096537    2258 topology_manager.go:215] "Topology Admit Handler" podUID="e7fda06e4a8236895e2ddfb8de4d7d2d" podNamespace="kube-system" podName="kube-controller-manager-ci-3975-1-1-4-69569a1933.novalocal"
Jul 2 00:34:35.098860 kubelet[2258]: I0702 00:34:35.098827    2258 topology_manager.go:215] "Topology Admit Handler" podUID="52d14f95b0971e3fbedbc96ff5041f3c" podNamespace="kube-system" podName="kube-scheduler-ci-3975-1-1-4-69569a1933.novalocal"
Jul 2 00:34:35.113506 systemd[1]: Created slice kubepods-burstable-pod2fca3c62f0226648cd3a1bf1a935102f.slice - libcontainer container kubepods-burstable-pod2fca3c62f0226648cd3a1bf1a935102f.slice.
Jul 2 00:34:35.135361 systemd[1]: Created slice kubepods-burstable-pode7fda06e4a8236895e2ddfb8de4d7d2d.slice - libcontainer container kubepods-burstable-pode7fda06e4a8236895e2ddfb8de4d7d2d.slice.
Jul 2 00:34:35.144985 systemd[1]: Created slice kubepods-burstable-pod52d14f95b0971e3fbedbc96ff5041f3c.slice - libcontainer container kubepods-burstable-pod52d14f95b0971e3fbedbc96ff5041f3c.slice.
Jul 2 00:34:35.156353 kubelet[2258]: I0702 00:34:35.156265    2258 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/52d14f95b0971e3fbedbc96ff5041f3c-kubeconfig\") pod \"kube-scheduler-ci-3975-1-1-4-69569a1933.novalocal\" (UID: \"52d14f95b0971e3fbedbc96ff5041f3c\") " pod="kube-system/kube-scheduler-ci-3975-1-1-4-69569a1933.novalocal"
Jul 2 00:34:35.156932 kubelet[2258]: I0702 00:34:35.156665    2258 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2fca3c62f0226648cd3a1bf1a935102f-k8s-certs\") pod \"kube-apiserver-ci-3975-1-1-4-69569a1933.novalocal\" (UID: \"2fca3c62f0226648cd3a1bf1a935102f\") " pod="kube-system/kube-apiserver-ci-3975-1-1-4-69569a1933.novalocal"
Jul 2 00:34:35.156932 kubelet[2258]: I0702 00:34:35.156865    2258 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2fca3c62f0226648cd3a1bf1a935102f-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3975-1-1-4-69569a1933.novalocal\" (UID: \"2fca3c62f0226648cd3a1bf1a935102f\") " pod="kube-system/kube-apiserver-ci-3975-1-1-4-69569a1933.novalocal"
Jul 2 00:34:35.157457 kubelet[2258]: I0702 00:34:35.157184    2258 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e7fda06e4a8236895e2ddfb8de4d7d2d-k8s-certs\") pod \"kube-controller-manager-ci-3975-1-1-4-69569a1933.novalocal\" (UID: \"e7fda06e4a8236895e2ddfb8de4d7d2d\") " pod="kube-system/kube-controller-manager-ci-3975-1-1-4-69569a1933.novalocal"
Jul 2 00:34:35.157800 kubelet[2258]: I0702 00:34:35.157375    2258 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e7fda06e4a8236895e2ddfb8de4d7d2d-kubeconfig\") pod \"kube-controller-manager-ci-3975-1-1-4-69569a1933.novalocal\" (UID: \"e7fda06e4a8236895e2ddfb8de4d7d2d\") " pod="kube-system/kube-controller-manager-ci-3975-1-1-4-69569a1933.novalocal"
Jul 2 00:34:35.158086 kubelet[2258]: I0702 00:34:35.157957    2258 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e7fda06e4a8236895e2ddfb8de4d7d2d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3975-1-1-4-69569a1933.novalocal\" (UID: \"e7fda06e4a8236895e2ddfb8de4d7d2d\") " pod="kube-system/kube-controller-manager-ci-3975-1-1-4-69569a1933.novalocal"
Jul 2 00:34:35.158388 kubelet[2258]: I0702 00:34:35.158221    2258 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e7fda06e4a8236895e2ddfb8de4d7d2d-ca-certs\") pod \"kube-controller-manager-ci-3975-1-1-4-69569a1933.novalocal\" (UID: \"e7fda06e4a8236895e2ddfb8de4d7d2d\") " pod="kube-system/kube-controller-manager-ci-3975-1-1-4-69569a1933.novalocal"
Jul 2 00:34:35.158847 kubelet[2258]: I0702 00:34:35.158626    2258 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e7fda06e4a8236895e2ddfb8de4d7d2d-flexvolume-dir\") pod \"kube-controller-manager-ci-3975-1-1-4-69569a1933.novalocal\" (UID: \"e7fda06e4a8236895e2ddfb8de4d7d2d\") " pod="kube-system/kube-controller-manager-ci-3975-1-1-4-69569a1933.novalocal"
Jul 2 00:34:35.158847 kubelet[2258]: I0702 00:34:35.158740    2258 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2fca3c62f0226648cd3a1bf1a935102f-ca-certs\") pod \"kube-apiserver-ci-3975-1-1-4-69569a1933.novalocal\" (UID: \"2fca3c62f0226648cd3a1bf1a935102f\") " pod="kube-system/kube-apiserver-ci-3975-1-1-4-69569a1933.novalocal"
Jul 2 00:34:35.365148 kubelet[2258]: E0702 00:34:35.364892    2258 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975-1-1-4-69569a1933.novalocal?timeout=10s\": dial tcp 172.24.4.39:6443: connect: connection refused" interval="800ms"
Jul 2 00:34:35.427568 containerd[1444]: time="2024-07-02T00:34:35.427496936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3975-1-1-4-69569a1933.novalocal,Uid:2fca3c62f0226648cd3a1bf1a935102f,Namespace:kube-system,Attempt:0,}"
Jul 2 00:34:35.442329 containerd[1444]: time="2024-07-02T00:34:35.441779321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3975-1-1-4-69569a1933.novalocal,Uid:e7fda06e4a8236895e2ddfb8de4d7d2d,Namespace:kube-system,Attempt:0,}"
Jul 2 00:34:35.451175 containerd[1444]: time="2024-07-02T00:34:35.450808690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3975-1-1-4-69569a1933.novalocal,Uid:52d14f95b0971e3fbedbc96ff5041f3c,Namespace:kube-system,Attempt:0,}"
Jul 2 00:34:35.484805 kubelet[2258]: I0702 00:34:35.484744    2258 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975-1-1-4-69569a1933.novalocal"
Jul 2 00:34:35.487803 kubelet[2258]: E0702 00:34:35.487711    2258 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.39:6443/api/v1/nodes\": dial tcp 172.24.4.39:6443: connect: connection refused" node="ci-3975-1-1-4-69569a1933.novalocal"
Jul 2 00:34:35.806998 kubelet[2258]: W0702 00:34:35.805972    2258 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.24.4.39:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.39:6443: connect: connection refused
Jul 2 00:34:35.806998 kubelet[2258]: E0702 00:34:35.806170    2258 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.39:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.39:6443: connect: connection refused
Jul 2 00:34:36.062803 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2474509615.mount: Deactivated successfully.
Jul 2 00:34:36.067840 kubelet[2258]: W0702 00:34:36.067634    2258 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.24.4.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975-1-1-4-69569a1933.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.39:6443: connect: connection refused
Jul 2 00:34:36.067840 kubelet[2258]: E0702 00:34:36.067794    2258 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975-1-1-4-69569a1933.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.39:6443: connect: connection refused
Jul 2 00:34:36.076160 containerd[1444]: time="2024-07-02T00:34:36.075997168Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 2 00:34:36.080669 containerd[1444]: time="2024-07-02T00:34:36.080278169Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064"
Jul 2 00:34:36.082463 containerd[1444]: time="2024-07-02T00:34:36.082376917Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 2 00:34:36.083166 kubelet[2258]: W0702 00:34:36.082835    2258 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.24.4.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.39:6443: connect: connection refused
Jul 2 00:34:36.083166 kubelet[2258]: E0702 00:34:36.082978    2258 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.39:6443: connect: connection refused
Jul 2 00:34:36.085564 containerd[1444]: time="2024-07-02T00:34:36.085312782Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jul 2 00:34:36.087611 containerd[1444]: time="2024-07-02T00:34:36.087453297Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 2 00:34:36.090497 containerd[1444]: time="2024-07-02T00:34:36.090340531Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jul 2 00:34:36.097152 containerd[1444]: time="2024-07-02T00:34:36.096858519Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 2 00:34:36.099135 containerd[1444]: time="2024-07-02T00:34:36.098440391Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 2 00:34:36.102734 containerd[1444]: time="2024-07-02T00:34:36.101835882Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 650.869961ms"
Jul 2 00:34:36.105016 containerd[1444]: time="2024-07-02T00:34:36.104975235Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 677.316739ms"
Jul 2 00:34:36.119221 containerd[1444]: time="2024-07-02T00:34:36.119160316Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 677.203269ms"
Jul 2 00:34:36.165781 kubelet[2258]: E0702 00:34:36.165750    2258 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975-1-1-4-69569a1933.novalocal?timeout=10s\": dial tcp 172.24.4.39:6443: connect: connection refused" interval="1.6s"
Jul 2 00:34:36.295441 kubelet[2258]: I0702 00:34:36.294641    2258 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975-1-1-4-69569a1933.novalocal"
Jul 2 00:34:36.297301 kubelet[2258]: E0702 00:34:36.297270    2258 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.39:6443/api/v1/nodes\": dial tcp 172.24.4.39:6443: connect: connection refused" node="ci-3975-1-1-4-69569a1933.novalocal"
Jul 2 00:34:36.320580 kubelet[2258]: W0702 00:34:36.320353    2258 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.24.4.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.39:6443: connect: connection refused
Jul 2 00:34:36.320580 kubelet[2258]: E0702 00:34:36.320417    2258 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.39:6443: connect: connection refused
Jul 2 00:34:36.352658 containerd[1444]: time="2024-07-02T00:34:36.349803276Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:34:36.352658 containerd[1444]: time="2024-07-02T00:34:36.349861929Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:34:36.352658 containerd[1444]: time="2024-07-02T00:34:36.349879957Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:34:36.352658 containerd[1444]: time="2024-07-02T00:34:36.349893044Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:34:36.355521 containerd[1444]: time="2024-07-02T00:34:36.355240875Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:34:36.355521 containerd[1444]: time="2024-07-02T00:34:36.355356117Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:34:36.355521 containerd[1444]: time="2024-07-02T00:34:36.355394708Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:34:36.355521 containerd[1444]: time="2024-07-02T00:34:36.355421174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:34:36.362121 containerd[1444]: time="2024-07-02T00:34:36.354142599Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:34:36.362121 containerd[1444]: time="2024-07-02T00:34:36.358137007Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:34:36.362121 containerd[1444]: time="2024-07-02T00:34:36.358174336Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:34:36.362121 containerd[1444]: time="2024-07-02T00:34:36.358188235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:34:36.391483 systemd[1]: Started cri-containerd-652c3a94f52b3bc787dd9faca96c0d0482982d1b0a885fe8c7d903a38c2a4e53.scope - libcontainer container 652c3a94f52b3bc787dd9faca96c0d0482982d1b0a885fe8c7d903a38c2a4e53.
Jul 2 00:34:36.397462 systemd[1]: Started cri-containerd-9a4e5db2ee8f5b75665dd233ea46e56b6370a3db8dabaf5400f7eee47fd4b3f3.scope - libcontainer container 9a4e5db2ee8f5b75665dd233ea46e56b6370a3db8dabaf5400f7eee47fd4b3f3.
Jul 2 00:34:36.399302 systemd[1]: Started cri-containerd-a42aff21db989d672c5be1347289a3f8fa5aead45bfbee59f82b79a447233832.scope - libcontainer container a42aff21db989d672c5be1347289a3f8fa5aead45bfbee59f82b79a447233832. Jul 2 00:34:36.475543 containerd[1444]: time="2024-07-02T00:34:36.475462181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3975-1-1-4-69569a1933.novalocal,Uid:e7fda06e4a8236895e2ddfb8de4d7d2d,Namespace:kube-system,Attempt:0,} returns sandbox id \"9a4e5db2ee8f5b75665dd233ea46e56b6370a3db8dabaf5400f7eee47fd4b3f3\"" Jul 2 00:34:36.486952 containerd[1444]: time="2024-07-02T00:34:36.486449165Z" level=info msg="CreateContainer within sandbox \"9a4e5db2ee8f5b75665dd233ea46e56b6370a3db8dabaf5400f7eee47fd4b3f3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 2 00:34:36.490604 containerd[1444]: time="2024-07-02T00:34:36.489891364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3975-1-1-4-69569a1933.novalocal,Uid:2fca3c62f0226648cd3a1bf1a935102f,Namespace:kube-system,Attempt:0,} returns sandbox id \"a42aff21db989d672c5be1347289a3f8fa5aead45bfbee59f82b79a447233832\"" Jul 2 00:34:36.493379 containerd[1444]: time="2024-07-02T00:34:36.493335197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3975-1-1-4-69569a1933.novalocal,Uid:52d14f95b0971e3fbedbc96ff5041f3c,Namespace:kube-system,Attempt:0,} returns sandbox id \"652c3a94f52b3bc787dd9faca96c0d0482982d1b0a885fe8c7d903a38c2a4e53\"" Jul 2 00:34:36.495276 containerd[1444]: time="2024-07-02T00:34:36.495253915Z" level=info msg="CreateContainer within sandbox \"a42aff21db989d672c5be1347289a3f8fa5aead45bfbee59f82b79a447233832\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 2 00:34:36.498270 containerd[1444]: time="2024-07-02T00:34:36.498233653Z" level=info msg="CreateContainer within sandbox \"652c3a94f52b3bc787dd9faca96c0d0482982d1b0a885fe8c7d903a38c2a4e53\" for 
container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 2 00:34:36.529534 containerd[1444]: time="2024-07-02T00:34:36.529492179Z" level=info msg="CreateContainer within sandbox \"9a4e5db2ee8f5b75665dd233ea46e56b6370a3db8dabaf5400f7eee47fd4b3f3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7d6fcf851e63c27acaec6bdf87ec1b9e275b66ad41f2543842fb1d6f618b6e40\"" Jul 2 00:34:36.530616 containerd[1444]: time="2024-07-02T00:34:36.530582359Z" level=info msg="StartContainer for \"7d6fcf851e63c27acaec6bdf87ec1b9e275b66ad41f2543842fb1d6f618b6e40\"" Jul 2 00:34:36.533860 containerd[1444]: time="2024-07-02T00:34:36.533787420Z" level=info msg="CreateContainer within sandbox \"a42aff21db989d672c5be1347289a3f8fa5aead45bfbee59f82b79a447233832\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3dca02337ebd007e2c9cbfb29ced44d9ccae5d05cf865411e27be165061ecaf1\"" Jul 2 00:34:36.534508 containerd[1444]: time="2024-07-02T00:34:36.534338727Z" level=info msg="StartContainer for \"3dca02337ebd007e2c9cbfb29ced44d9ccae5d05cf865411e27be165061ecaf1\"" Jul 2 00:34:36.535605 containerd[1444]: time="2024-07-02T00:34:36.535327433Z" level=info msg="CreateContainer within sandbox \"652c3a94f52b3bc787dd9faca96c0d0482982d1b0a885fe8c7d903a38c2a4e53\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"de4f0c3f0a19d2db1bf36a9ea38c45657ae66a54fa8a86f7cc8793361c477d41\"" Jul 2 00:34:36.535963 containerd[1444]: time="2024-07-02T00:34:36.535937485Z" level=info msg="StartContainer for \"de4f0c3f0a19d2db1bf36a9ea38c45657ae66a54fa8a86f7cc8793361c477d41\"" Jul 2 00:34:36.574221 systemd[1]: Started cri-containerd-7d6fcf851e63c27acaec6bdf87ec1b9e275b66ad41f2543842fb1d6f618b6e40.scope - libcontainer container 7d6fcf851e63c27acaec6bdf87ec1b9e275b66ad41f2543842fb1d6f618b6e40. 
Jul 2 00:34:36.596473 systemd[1]: Started cri-containerd-de4f0c3f0a19d2db1bf36a9ea38c45657ae66a54fa8a86f7cc8793361c477d41.scope - libcontainer container de4f0c3f0a19d2db1bf36a9ea38c45657ae66a54fa8a86f7cc8793361c477d41. Jul 2 00:34:36.609602 systemd[1]: Started cri-containerd-3dca02337ebd007e2c9cbfb29ced44d9ccae5d05cf865411e27be165061ecaf1.scope - libcontainer container 3dca02337ebd007e2c9cbfb29ced44d9ccae5d05cf865411e27be165061ecaf1. Jul 2 00:34:36.668581 containerd[1444]: time="2024-07-02T00:34:36.668530472Z" level=info msg="StartContainer for \"7d6fcf851e63c27acaec6bdf87ec1b9e275b66ad41f2543842fb1d6f618b6e40\" returns successfully" Jul 2 00:34:36.674682 containerd[1444]: time="2024-07-02T00:34:36.674633098Z" level=info msg="StartContainer for \"de4f0c3f0a19d2db1bf36a9ea38c45657ae66a54fa8a86f7cc8793361c477d41\" returns successfully" Jul 2 00:34:36.699821 containerd[1444]: time="2024-07-02T00:34:36.699755287Z" level=info msg="StartContainer for \"3dca02337ebd007e2c9cbfb29ced44d9ccae5d05cf865411e27be165061ecaf1\" returns successfully" Jul 2 00:34:36.707093 kubelet[2258]: E0702 00:34:36.705542 2258 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.39:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.39:6443: connect: connection refused Jul 2 00:34:37.899902 kubelet[2258]: I0702 00:34:37.899532 2258 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975-1-1-4-69569a1933.novalocal" Jul 2 00:34:39.026911 kubelet[2258]: E0702 00:34:39.026797 2258 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3975-1-1-4-69569a1933.novalocal\" not found" node="ci-3975-1-1-4-69569a1933.novalocal" Jul 2 00:34:39.041898 kubelet[2258]: I0702 00:34:39.041859 2258 kubelet_node_status.go:76] "Successfully registered node" 
node="ci-3975-1-1-4-69569a1933.novalocal" Jul 2 00:34:39.107111 kubelet[2258]: E0702 00:34:39.106585 2258 event.go:346] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-3975-1-1-4-69569a1933.novalocal.17de3e2def6eafe8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3975-1-1-4-69569a1933.novalocal,UID:ci-3975-1-1-4-69569a1933.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3975-1-1-4-69569a1933.novalocal,},FirstTimestamp:2024-07-02 00:34:34.713567208 +0000 UTC m=+1.505165411,LastTimestamp:2024-07-02 00:34:34.713567208 +0000 UTC m=+1.505165411,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3975-1-1-4-69569a1933.novalocal,}" Jul 2 00:34:39.700874 kubelet[2258]: I0702 00:34:39.700794 2258 apiserver.go:52] "Watching apiserver" Jul 2 00:34:39.754707 kubelet[2258]: I0702 00:34:39.754483 2258 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 00:34:41.890155 kubelet[2258]: W0702 00:34:41.889956 2258 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 00:34:42.249222 systemd[1]: Reloading requested from client PID 2531 ('systemctl') (unit session-11.scope)... Jul 2 00:34:42.249241 systemd[1]: Reloading... Jul 2 00:34:42.385147 zram_generator::config[2565]: No configuration found. Jul 2 00:34:42.565630 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 00:34:42.672596 systemd[1]: Reloading finished in 422 ms. 
Jul 2 00:34:42.713784 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:34:42.714048 kubelet[2258]: I0702 00:34:42.713761 2258 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 00:34:42.727343 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 00:34:42.727643 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:34:42.727719 systemd[1]: kubelet.service: Consumed 1.934s CPU time, 107.9M memory peak, 0B memory swap peak. Jul 2 00:34:42.732295 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:34:43.088356 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:34:43.101652 (kubelet)[2632]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 2 00:34:43.248934 kubelet[2632]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 00:34:43.251609 kubelet[2632]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 00:34:43.251715 kubelet[2632]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 2 00:34:43.251917 kubelet[2632]: I0702 00:34:43.251878 2632 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 00:34:43.264403 kubelet[2632]: I0702 00:34:43.264377 2632 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jul 2 00:34:43.265122 kubelet[2632]: I0702 00:34:43.264535 2632 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 00:34:43.265122 kubelet[2632]: I0702 00:34:43.264778 2632 server.go:919] "Client rotation is on, will bootstrap in background" Jul 2 00:34:43.266774 kubelet[2632]: I0702 00:34:43.266750 2632 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 2 00:34:43.281439 kubelet[2632]: I0702 00:34:43.281414 2632 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 00:34:43.288745 kubelet[2632]: I0702 00:34:43.288703 2632 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 2 00:34:43.288932 kubelet[2632]: I0702 00:34:43.288912 2632 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 00:34:43.289176 kubelet[2632]: I0702 00:34:43.289154 2632 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 00:34:43.289277 kubelet[2632]: I0702 00:34:43.289184 2632 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 00:34:43.289277 kubelet[2632]: I0702 00:34:43.289197 2632 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 00:34:43.289277 kubelet[2632]: I0702 
00:34:43.289234 2632 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:34:43.289373 kubelet[2632]: I0702 00:34:43.289318 2632 kubelet.go:396] "Attempting to sync node with API server" Jul 2 00:34:43.289373 kubelet[2632]: I0702 00:34:43.289334 2632 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 00:34:43.289373 kubelet[2632]: I0702 00:34:43.289356 2632 kubelet.go:312] "Adding apiserver pod source" Jul 2 00:34:43.289373 kubelet[2632]: I0702 00:34:43.289367 2632 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 00:34:43.290746 kubelet[2632]: I0702 00:34:43.290730 2632 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Jul 2 00:34:43.291048 kubelet[2632]: I0702 00:34:43.291021 2632 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 2 00:34:43.291643 kubelet[2632]: I0702 00:34:43.291630 2632 server.go:1256] "Started kubelet" Jul 2 00:34:43.307283 kubelet[2632]: I0702 00:34:43.307254 2632 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 00:34:43.329350 kubelet[2632]: I0702 00:34:43.329315 2632 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 00:34:43.331270 kubelet[2632]: I0702 00:34:43.330330 2632 server.go:461] "Adding debug handlers to kubelet server" Jul 2 00:34:43.336078 kubelet[2632]: I0702 00:34:43.333519 2632 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 2 00:34:43.336078 kubelet[2632]: I0702 00:34:43.333679 2632 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 00:34:43.336909 kubelet[2632]: I0702 00:34:43.336886 2632 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 00:34:43.337116 kubelet[2632]: I0702 00:34:43.337103 2632 desired_state_of_world_populator.go:151] "Desired state populator 
starts to run" Jul 2 00:34:43.337357 kubelet[2632]: I0702 00:34:43.337343 2632 reconciler_new.go:29] "Reconciler: start to sync state" Jul 2 00:34:43.343537 kubelet[2632]: I0702 00:34:43.343453 2632 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 00:34:43.345154 kubelet[2632]: I0702 00:34:43.345139 2632 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 2 00:34:43.346106 kubelet[2632]: I0702 00:34:43.345250 2632 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 00:34:43.346196 kubelet[2632]: I0702 00:34:43.346184 2632 kubelet.go:2329] "Starting kubelet main sync loop" Jul 2 00:34:43.346299 kubelet[2632]: E0702 00:34:43.346288 2632 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 00:34:43.348224 kubelet[2632]: I0702 00:34:43.348138 2632 factory.go:221] Registration of the systemd container factory successfully Jul 2 00:34:43.348281 kubelet[2632]: I0702 00:34:43.348244 2632 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 2 00:34:43.356073 kubelet[2632]: E0702 00:34:43.354527 2632 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 00:34:43.359089 kubelet[2632]: I0702 00:34:43.357004 2632 factory.go:221] Registration of the containerd container factory successfully Jul 2 00:34:43.418021 kubelet[2632]: I0702 00:34:43.417999 2632 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 00:34:43.418340 kubelet[2632]: I0702 00:34:43.418286 2632 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 00:34:43.418454 kubelet[2632]: I0702 00:34:43.418435 2632 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:34:43.418698 kubelet[2632]: I0702 00:34:43.418685 2632 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 2 00:34:43.418805 kubelet[2632]: I0702 00:34:43.418796 2632 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 2 00:34:43.418861 kubelet[2632]: I0702 00:34:43.418853 2632 policy_none.go:49] "None policy: Start" Jul 2 00:34:43.421745 kubelet[2632]: I0702 00:34:43.421729 2632 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 2 00:34:43.421944 kubelet[2632]: I0702 00:34:43.421933 2632 state_mem.go:35] "Initializing new in-memory state store" Jul 2 00:34:43.422451 kubelet[2632]: I0702 00:34:43.422436 2632 state_mem.go:75] "Updated machine memory state" Jul 2 00:34:43.428577 kubelet[2632]: I0702 00:34:43.428557 2632 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 00:34:43.429586 kubelet[2632]: I0702 00:34:43.429391 2632 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 00:34:43.447501 kubelet[2632]: I0702 00:34:43.447226 2632 topology_manager.go:215] "Topology Admit Handler" podUID="e7fda06e4a8236895e2ddfb8de4d7d2d" podNamespace="kube-system" podName="kube-controller-manager-ci-3975-1-1-4-69569a1933.novalocal" Jul 2 00:34:43.447501 kubelet[2632]: I0702 00:34:43.447310 2632 topology_manager.go:215] "Topology Admit Handler" 
podUID="52d14f95b0971e3fbedbc96ff5041f3c" podNamespace="kube-system" podName="kube-scheduler-ci-3975-1-1-4-69569a1933.novalocal" Jul 2 00:34:43.447501 kubelet[2632]: I0702 00:34:43.447368 2632 topology_manager.go:215] "Topology Admit Handler" podUID="2fca3c62f0226648cd3a1bf1a935102f" podNamespace="kube-system" podName="kube-apiserver-ci-3975-1-1-4-69569a1933.novalocal" Jul 2 00:34:43.456674 kubelet[2632]: W0702 00:34:43.456635 2632 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 00:34:43.458080 kubelet[2632]: E0702 00:34:43.457247 2632 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3975-1-1-4-69569a1933.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-3975-1-1-4-69569a1933.novalocal" Jul 2 00:34:43.459083 kubelet[2632]: W0702 00:34:43.458972 2632 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 00:34:43.459710 kubelet[2632]: W0702 00:34:43.459028 2632 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 00:34:43.538873 kubelet[2632]: I0702 00:34:43.538712 2632 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e7fda06e4a8236895e2ddfb8de4d7d2d-flexvolume-dir\") pod \"kube-controller-manager-ci-3975-1-1-4-69569a1933.novalocal\" (UID: \"e7fda06e4a8236895e2ddfb8de4d7d2d\") " pod="kube-system/kube-controller-manager-ci-3975-1-1-4-69569a1933.novalocal" Jul 2 00:34:43.538873 kubelet[2632]: I0702 00:34:43.538820 2632 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/e7fda06e4a8236895e2ddfb8de4d7d2d-k8s-certs\") pod \"kube-controller-manager-ci-3975-1-1-4-69569a1933.novalocal\" (UID: \"e7fda06e4a8236895e2ddfb8de4d7d2d\") " pod="kube-system/kube-controller-manager-ci-3975-1-1-4-69569a1933.novalocal" Jul 2 00:34:43.538873 kubelet[2632]: I0702 00:34:43.538873 2632 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e7fda06e4a8236895e2ddfb8de4d7d2d-kubeconfig\") pod \"kube-controller-manager-ci-3975-1-1-4-69569a1933.novalocal\" (UID: \"e7fda06e4a8236895e2ddfb8de4d7d2d\") " pod="kube-system/kube-controller-manager-ci-3975-1-1-4-69569a1933.novalocal" Jul 2 00:34:43.539197 kubelet[2632]: I0702 00:34:43.538926 2632 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e7fda06e4a8236895e2ddfb8de4d7d2d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3975-1-1-4-69569a1933.novalocal\" (UID: \"e7fda06e4a8236895e2ddfb8de4d7d2d\") " pod="kube-system/kube-controller-manager-ci-3975-1-1-4-69569a1933.novalocal" Jul 2 00:34:43.539197 kubelet[2632]: I0702 00:34:43.538980 2632 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/52d14f95b0971e3fbedbc96ff5041f3c-kubeconfig\") pod \"kube-scheduler-ci-3975-1-1-4-69569a1933.novalocal\" (UID: \"52d14f95b0971e3fbedbc96ff5041f3c\") " pod="kube-system/kube-scheduler-ci-3975-1-1-4-69569a1933.novalocal" Jul 2 00:34:43.539197 kubelet[2632]: I0702 00:34:43.539029 2632 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2fca3c62f0226648cd3a1bf1a935102f-ca-certs\") pod \"kube-apiserver-ci-3975-1-1-4-69569a1933.novalocal\" (UID: \"2fca3c62f0226648cd3a1bf1a935102f\") " 
pod="kube-system/kube-apiserver-ci-3975-1-1-4-69569a1933.novalocal" Jul 2 00:34:43.539197 kubelet[2632]: I0702 00:34:43.539106 2632 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2fca3c62f0226648cd3a1bf1a935102f-k8s-certs\") pod \"kube-apiserver-ci-3975-1-1-4-69569a1933.novalocal\" (UID: \"2fca3c62f0226648cd3a1bf1a935102f\") " pod="kube-system/kube-apiserver-ci-3975-1-1-4-69569a1933.novalocal" Jul 2 00:34:43.539385 kubelet[2632]: I0702 00:34:43.539159 2632 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2fca3c62f0226648cd3a1bf1a935102f-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3975-1-1-4-69569a1933.novalocal\" (UID: \"2fca3c62f0226648cd3a1bf1a935102f\") " pod="kube-system/kube-apiserver-ci-3975-1-1-4-69569a1933.novalocal" Jul 2 00:34:43.539385 kubelet[2632]: I0702 00:34:43.539202 2632 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e7fda06e4a8236895e2ddfb8de4d7d2d-ca-certs\") pod \"kube-controller-manager-ci-3975-1-1-4-69569a1933.novalocal\" (UID: \"e7fda06e4a8236895e2ddfb8de4d7d2d\") " pod="kube-system/kube-controller-manager-ci-3975-1-1-4-69569a1933.novalocal" Jul 2 00:34:43.541215 kubelet[2632]: I0702 00:34:43.540868 2632 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975-1-1-4-69569a1933.novalocal" Jul 2 00:34:43.552442 kubelet[2632]: I0702 00:34:43.551373 2632 kubelet_node_status.go:112] "Node was previously registered" node="ci-3975-1-1-4-69569a1933.novalocal" Jul 2 00:34:43.552442 kubelet[2632]: I0702 00:34:43.551459 2632 kubelet_node_status.go:76] "Successfully registered node" node="ci-3975-1-1-4-69569a1933.novalocal" Jul 2 00:34:44.291127 kubelet[2632]: I0702 00:34:44.290864 2632 apiserver.go:52] "Watching 
apiserver" Jul 2 00:34:44.337939 kubelet[2632]: I0702 00:34:44.337885 2632 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 00:34:44.353754 kubelet[2632]: I0702 00:34:44.353430 2632 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3975-1-1-4-69569a1933.novalocal" podStartSLOduration=3.353373206 podStartE2EDuration="3.353373206s" podCreationTimestamp="2024-07-02 00:34:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:34:44.342692776 +0000 UTC m=+1.225936723" watchObservedRunningTime="2024-07-02 00:34:44.353373206 +0000 UTC m=+1.236617164" Jul 2 00:34:44.353754 kubelet[2632]: I0702 00:34:44.353537 2632 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3975-1-1-4-69569a1933.novalocal" podStartSLOduration=1.353507932 podStartE2EDuration="1.353507932s" podCreationTimestamp="2024-07-02 00:34:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:34:44.351025227 +0000 UTC m=+1.234269184" watchObservedRunningTime="2024-07-02 00:34:44.353507932 +0000 UTC m=+1.236751879" Jul 2 00:34:44.373907 kubelet[2632]: I0702 00:34:44.373862 2632 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3975-1-1-4-69569a1933.novalocal" podStartSLOduration=1.373808382 podStartE2EDuration="1.373808382s" podCreationTimestamp="2024-07-02 00:34:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:34:44.360372716 +0000 UTC m=+1.243616673" watchObservedRunningTime="2024-07-02 00:34:44.373808382 +0000 UTC m=+1.257052339" Jul 2 00:34:49.249578 sudo[1701]: pam_unix(sudo:session): session closed 
for user root Jul 2 00:34:49.495642 sshd[1698]: pam_unix(sshd:session): session closed for user core Jul 2 00:34:49.502263 systemd[1]: sshd@8-172.24.4.39:22-172.24.4.1:54804.service: Deactivated successfully. Jul 2 00:34:49.507789 systemd[1]: session-11.scope: Deactivated successfully. Jul 2 00:34:49.508368 systemd[1]: session-11.scope: Consumed 8.194s CPU time, 136.2M memory peak, 0B memory swap peak. Jul 2 00:34:49.512662 systemd-logind[1432]: Session 11 logged out. Waiting for processes to exit. Jul 2 00:34:49.517534 systemd-logind[1432]: Removed session 11. Jul 2 00:34:56.829304 kubelet[2632]: I0702 00:34:56.829197 2632 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 2 00:34:56.830155 kubelet[2632]: I0702 00:34:56.829812 2632 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 2 00:34:56.830195 containerd[1444]: time="2024-07-02T00:34:56.829626327Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 2 00:34:56.875749 kubelet[2632]: I0702 00:34:56.875705 2632 topology_manager.go:215] "Topology Admit Handler" podUID="aeaaf138-d71e-4fd3-ab08-015eb797ffdf" podNamespace="kube-system" podName="kube-proxy-67lvm" Jul 2 00:34:56.889992 systemd[1]: Created slice kubepods-besteffort-podaeaaf138_d71e_4fd3_ab08_015eb797ffdf.slice - libcontainer container kubepods-besteffort-podaeaaf138_d71e_4fd3_ab08_015eb797ffdf.slice. 
Jul 2 00:34:57.017666 kubelet[2632]: I0702 00:34:57.017549 2632 topology_manager.go:215] "Topology Admit Handler" podUID="d945e044-b2cd-4820-807c-a3bf33607b8a" podNamespace="tigera-operator" podName="tigera-operator-76c4974c85-xggvb" Jul 2 00:34:57.021262 kubelet[2632]: W0702 00:34:57.021025 2632 reflector.go:539] object-"tigera-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3975-1-1-4-69569a1933.novalocal" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ci-3975-1-1-4-69569a1933.novalocal' and this object Jul 2 00:34:57.021262 kubelet[2632]: E0702 00:34:57.021111 2632 reflector.go:147] object-"tigera-operator"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3975-1-1-4-69569a1933.novalocal" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ci-3975-1-1-4-69569a1933.novalocal' and this object Jul 2 00:34:57.028587 systemd[1]: Created slice kubepods-besteffort-podd945e044_b2cd_4820_807c_a3bf33607b8a.slice - libcontainer container kubepods-besteffort-podd945e044_b2cd_4820_807c_a3bf33607b8a.slice. 
Jul 2 00:34:57.035936 kubelet[2632]: I0702 00:34:57.034775 2632 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/aeaaf138-d71e-4fd3-ab08-015eb797ffdf-kube-proxy\") pod \"kube-proxy-67lvm\" (UID: \"aeaaf138-d71e-4fd3-ab08-015eb797ffdf\") " pod="kube-system/kube-proxy-67lvm" Jul 2 00:34:57.035936 kubelet[2632]: I0702 00:34:57.034939 2632 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aeaaf138-d71e-4fd3-ab08-015eb797ffdf-xtables-lock\") pod \"kube-proxy-67lvm\" (UID: \"aeaaf138-d71e-4fd3-ab08-015eb797ffdf\") " pod="kube-system/kube-proxy-67lvm" Jul 2 00:34:57.035936 kubelet[2632]: I0702 00:34:57.035009 2632 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aeaaf138-d71e-4fd3-ab08-015eb797ffdf-lib-modules\") pod \"kube-proxy-67lvm\" (UID: \"aeaaf138-d71e-4fd3-ab08-015eb797ffdf\") " pod="kube-system/kube-proxy-67lvm" Jul 2 00:34:57.035936 kubelet[2632]: I0702 00:34:57.035040 2632 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cbbv\" (UniqueName: \"kubernetes.io/projected/aeaaf138-d71e-4fd3-ab08-015eb797ffdf-kube-api-access-8cbbv\") pod \"kube-proxy-67lvm\" (UID: \"aeaaf138-d71e-4fd3-ab08-015eb797ffdf\") " pod="kube-system/kube-proxy-67lvm" Jul 2 00:34:57.136001 kubelet[2632]: I0702 00:34:57.135714 2632 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d945e044-b2cd-4820-807c-a3bf33607b8a-var-lib-calico\") pod \"tigera-operator-76c4974c85-xggvb\" (UID: \"d945e044-b2cd-4820-807c-a3bf33607b8a\") " pod="tigera-operator/tigera-operator-76c4974c85-xggvb" Jul 2 00:34:57.136001 kubelet[2632]: I0702 
00:34:57.135850 2632 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kq9n2\" (UniqueName: \"kubernetes.io/projected/d945e044-b2cd-4820-807c-a3bf33607b8a-kube-api-access-kq9n2\") pod \"tigera-operator-76c4974c85-xggvb\" (UID: \"d945e044-b2cd-4820-807c-a3bf33607b8a\") " pod="tigera-operator/tigera-operator-76c4974c85-xggvb" Jul 2 00:34:57.212467 containerd[1444]: time="2024-07-02T00:34:57.211536688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-67lvm,Uid:aeaaf138-d71e-4fd3-ab08-015eb797ffdf,Namespace:kube-system,Attempt:0,}" Jul 2 00:34:57.299846 containerd[1444]: time="2024-07-02T00:34:57.299679467Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:34:57.299846 containerd[1444]: time="2024-07-02T00:34:57.299767254Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:34:57.300284 containerd[1444]: time="2024-07-02T00:34:57.299798647Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:34:57.300284 containerd[1444]: time="2024-07-02T00:34:57.299832164Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:34:57.337273 systemd[1]: Started cri-containerd-c785a0fdac63f74d79f9a42007fcbe8eb69595e2aeedbfc2194e47f69be04129.scope - libcontainer container c785a0fdac63f74d79f9a42007fcbe8eb69595e2aeedbfc2194e47f69be04129. 
Jul 2 00:34:57.365811 containerd[1444]: time="2024-07-02T00:34:57.365759931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-67lvm,Uid:aeaaf138-d71e-4fd3-ab08-015eb797ffdf,Namespace:kube-system,Attempt:0,} returns sandbox id \"c785a0fdac63f74d79f9a42007fcbe8eb69595e2aeedbfc2194e47f69be04129\"" Jul 2 00:34:57.375972 containerd[1444]: time="2024-07-02T00:34:57.375854415Z" level=info msg="CreateContainer within sandbox \"c785a0fdac63f74d79f9a42007fcbe8eb69595e2aeedbfc2194e47f69be04129\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 2 00:34:57.400448 containerd[1444]: time="2024-07-02T00:34:57.400309229Z" level=info msg="CreateContainer within sandbox \"c785a0fdac63f74d79f9a42007fcbe8eb69595e2aeedbfc2194e47f69be04129\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4d6ee03c2855d2b35dd4cf8ad0590dca158e250549b9ac1c36ce3eddb6a6dd25\"" Jul 2 00:34:57.401643 containerd[1444]: time="2024-07-02T00:34:57.401585989Z" level=info msg="StartContainer for \"4d6ee03c2855d2b35dd4cf8ad0590dca158e250549b9ac1c36ce3eddb6a6dd25\"" Jul 2 00:34:57.436239 systemd[1]: Started cri-containerd-4d6ee03c2855d2b35dd4cf8ad0590dca158e250549b9ac1c36ce3eddb6a6dd25.scope - libcontainer container 4d6ee03c2855d2b35dd4cf8ad0590dca158e250549b9ac1c36ce3eddb6a6dd25. Jul 2 00:34:57.475742 containerd[1444]: time="2024-07-02T00:34:57.475702910Z" level=info msg="StartContainer for \"4d6ee03c2855d2b35dd4cf8ad0590dca158e250549b9ac1c36ce3eddb6a6dd25\" returns successfully" Jul 2 00:34:58.176321 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4099462785.mount: Deactivated successfully. 
Jul 2 00:34:58.236800 containerd[1444]: time="2024-07-02T00:34:58.236666427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-xggvb,Uid:d945e044-b2cd-4820-807c-a3bf33607b8a,Namespace:tigera-operator,Attempt:0,}" Jul 2 00:34:58.286136 containerd[1444]: time="2024-07-02T00:34:58.285371544Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:34:58.286136 containerd[1444]: time="2024-07-02T00:34:58.285441664Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:34:58.286136 containerd[1444]: time="2024-07-02T00:34:58.285467948Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:34:58.286136 containerd[1444]: time="2024-07-02T00:34:58.285486084Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:34:58.317257 systemd[1]: Started cri-containerd-32f00dec037b42df916884b5cd174b95bea4b4865e36e4a658074207199da523.scope - libcontainer container 32f00dec037b42df916884b5cd174b95bea4b4865e36e4a658074207199da523. 
Jul 2 00:34:58.372921 containerd[1444]: time="2024-07-02T00:34:58.372648929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-xggvb,Uid:d945e044-b2cd-4820-807c-a3bf33607b8a,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"32f00dec037b42df916884b5cd174b95bea4b4865e36e4a658074207199da523\"" Jul 2 00:34:58.376376 containerd[1444]: time="2024-07-02T00:34:58.376055623Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\"" Jul 2 00:34:58.465721 kubelet[2632]: I0702 00:34:58.465434 2632 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-67lvm" podStartSLOduration=2.465388045 podStartE2EDuration="2.465388045s" podCreationTimestamp="2024-07-02 00:34:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:34:58.465166892 +0000 UTC m=+15.348410859" watchObservedRunningTime="2024-07-02 00:34:58.465388045 +0000 UTC m=+15.348631992" Jul 2 00:35:00.127111 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount239088958.mount: Deactivated successfully. 
Jul 2 00:35:00.952173 containerd[1444]: time="2024-07-02T00:35:00.952008401Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:35:00.954113 containerd[1444]: time="2024-07-02T00:35:00.953333481Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=22076052" Jul 2 00:35:00.955140 containerd[1444]: time="2024-07-02T00:35:00.954940204Z" level=info msg="ImageCreate event name:\"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:35:00.958624 containerd[1444]: time="2024-07-02T00:35:00.958530883Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:35:00.962653 containerd[1444]: time="2024-07-02T00:35:00.962551314Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"22070263\" in 2.586385549s" Jul 2 00:35:00.962653 containerd[1444]: time="2024-07-02T00:35:00.962607005Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\"" Jul 2 00:35:00.975166 containerd[1444]: time="2024-07-02T00:35:00.974801500Z" level=info msg="CreateContainer within sandbox \"32f00dec037b42df916884b5cd174b95bea4b4865e36e4a658074207199da523\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 2 00:35:01.006424 containerd[1444]: time="2024-07-02T00:35:01.006246231Z" level=info msg="CreateContainer within sandbox 
\"32f00dec037b42df916884b5cd174b95bea4b4865e36e4a658074207199da523\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"b42038c53fe7e7eb61076885751cdd38f9d0bfa206143a75c7a3896472834b50\"" Jul 2 00:35:01.007929 containerd[1444]: time="2024-07-02T00:35:01.007359636Z" level=info msg="StartContainer for \"b42038c53fe7e7eb61076885751cdd38f9d0bfa206143a75c7a3896472834b50\"" Jul 2 00:35:01.046248 systemd[1]: run-containerd-runc-k8s.io-b42038c53fe7e7eb61076885751cdd38f9d0bfa206143a75c7a3896472834b50-runc.GKeyA3.mount: Deactivated successfully. Jul 2 00:35:01.053246 systemd[1]: Started cri-containerd-b42038c53fe7e7eb61076885751cdd38f9d0bfa206143a75c7a3896472834b50.scope - libcontainer container b42038c53fe7e7eb61076885751cdd38f9d0bfa206143a75c7a3896472834b50. Jul 2 00:35:01.086245 containerd[1444]: time="2024-07-02T00:35:01.086198897Z" level=info msg="StartContainer for \"b42038c53fe7e7eb61076885751cdd38f9d0bfa206143a75c7a3896472834b50\" returns successfully" Jul 2 00:35:03.381555 kubelet[2632]: I0702 00:35:03.381358 2632 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4974c85-xggvb" podStartSLOduration=4.791761532 podStartE2EDuration="7.381264002s" podCreationTimestamp="2024-07-02 00:34:56 +0000 UTC" firstStartedPulling="2024-07-02 00:34:58.37399232 +0000 UTC m=+15.257236277" lastFinishedPulling="2024-07-02 00:35:00.96349476 +0000 UTC m=+17.846738747" observedRunningTime="2024-07-02 00:35:01.487644358 +0000 UTC m=+18.370888355" watchObservedRunningTime="2024-07-02 00:35:03.381264002 +0000 UTC m=+20.264507999" Jul 2 00:35:04.576594 kubelet[2632]: I0702 00:35:04.576515 2632 topology_manager.go:215] "Topology Admit Handler" podUID="483ea29d-8fe5-4d1c-a83f-169e2a40a5a5" podNamespace="calico-system" podName="calico-typha-89866bd7c-nbtfm" Jul 2 00:35:04.590111 kubelet[2632]: W0702 00:35:04.589282 2632 reflector.go:539] object-"calico-system"/"tigera-ca-bundle": failed to list *v1.ConfigMap: 
configmaps "tigera-ca-bundle" is forbidden: User "system:node:ci-3975-1-1-4-69569a1933.novalocal" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-3975-1-1-4-69569a1933.novalocal' and this object Jul 2 00:35:04.590111 kubelet[2632]: E0702 00:35:04.589337 2632 reflector.go:147] object-"calico-system"/"tigera-ca-bundle": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "tigera-ca-bundle" is forbidden: User "system:node:ci-3975-1-1-4-69569a1933.novalocal" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-3975-1-1-4-69569a1933.novalocal' and this object Jul 2 00:35:04.590111 kubelet[2632]: W0702 00:35:04.589418 2632 reflector.go:539] object-"calico-system"/"typha-certs": failed to list *v1.Secret: secrets "typha-certs" is forbidden: User "system:node:ci-3975-1-1-4-69569a1933.novalocal" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-3975-1-1-4-69569a1933.novalocal' and this object Jul 2 00:35:04.590111 kubelet[2632]: E0702 00:35:04.589446 2632 reflector.go:147] object-"calico-system"/"typha-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "typha-certs" is forbidden: User "system:node:ci-3975-1-1-4-69569a1933.novalocal" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-3975-1-1-4-69569a1933.novalocal' and this object Jul 2 00:35:04.591164 kubelet[2632]: W0702 00:35:04.590163 2632 reflector.go:539] object-"calico-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3975-1-1-4-69569a1933.novalocal" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-3975-1-1-4-69569a1933.novalocal' and this object Jul 2 
00:35:04.591164 kubelet[2632]: E0702 00:35:04.590301 2632 reflector.go:147] object-"calico-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3975-1-1-4-69569a1933.novalocal" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-3975-1-1-4-69569a1933.novalocal' and this object Jul 2 00:35:04.596101 systemd[1]: Created slice kubepods-besteffort-pod483ea29d_8fe5_4d1c_a83f_169e2a40a5a5.slice - libcontainer container kubepods-besteffort-pod483ea29d_8fe5_4d1c_a83f_169e2a40a5a5.slice. Jul 2 00:35:04.685623 kubelet[2632]: I0702 00:35:04.683818 2632 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqzmz\" (UniqueName: \"kubernetes.io/projected/483ea29d-8fe5-4d1c-a83f-169e2a40a5a5-kube-api-access-qqzmz\") pod \"calico-typha-89866bd7c-nbtfm\" (UID: \"483ea29d-8fe5-4d1c-a83f-169e2a40a5a5\") " pod="calico-system/calico-typha-89866bd7c-nbtfm" Jul 2 00:35:04.685623 kubelet[2632]: I0702 00:35:04.683881 2632 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/483ea29d-8fe5-4d1c-a83f-169e2a40a5a5-typha-certs\") pod \"calico-typha-89866bd7c-nbtfm\" (UID: \"483ea29d-8fe5-4d1c-a83f-169e2a40a5a5\") " pod="calico-system/calico-typha-89866bd7c-nbtfm" Jul 2 00:35:04.685623 kubelet[2632]: I0702 00:35:04.683911 2632 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/483ea29d-8fe5-4d1c-a83f-169e2a40a5a5-tigera-ca-bundle\") pod \"calico-typha-89866bd7c-nbtfm\" (UID: \"483ea29d-8fe5-4d1c-a83f-169e2a40a5a5\") " pod="calico-system/calico-typha-89866bd7c-nbtfm" Jul 2 00:35:04.687403 kubelet[2632]: I0702 00:35:04.687346 2632 topology_manager.go:215] "Topology Admit 
Handler" podUID="daccc5c1-ca44-43cb-adf1-f0bb45d681bb" podNamespace="calico-system" podName="calico-node-4mjsk" Jul 2 00:35:04.700947 systemd[1]: Created slice kubepods-besteffort-poddaccc5c1_ca44_43cb_adf1_f0bb45d681bb.slice - libcontainer container kubepods-besteffort-poddaccc5c1_ca44_43cb_adf1_f0bb45d681bb.slice. Jul 2 00:35:04.785854 kubelet[2632]: I0702 00:35:04.784506 2632 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/daccc5c1-ca44-43cb-adf1-f0bb45d681bb-tigera-ca-bundle\") pod \"calico-node-4mjsk\" (UID: \"daccc5c1-ca44-43cb-adf1-f0bb45d681bb\") " pod="calico-system/calico-node-4mjsk" Jul 2 00:35:04.785854 kubelet[2632]: I0702 00:35:04.784576 2632 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/daccc5c1-ca44-43cb-adf1-f0bb45d681bb-node-certs\") pod \"calico-node-4mjsk\" (UID: \"daccc5c1-ca44-43cb-adf1-f0bb45d681bb\") " pod="calico-system/calico-node-4mjsk" Jul 2 00:35:04.785854 kubelet[2632]: I0702 00:35:04.784637 2632 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/daccc5c1-ca44-43cb-adf1-f0bb45d681bb-var-run-calico\") pod \"calico-node-4mjsk\" (UID: \"daccc5c1-ca44-43cb-adf1-f0bb45d681bb\") " pod="calico-system/calico-node-4mjsk" Jul 2 00:35:04.785854 kubelet[2632]: I0702 00:35:04.784677 2632 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/daccc5c1-ca44-43cb-adf1-f0bb45d681bb-var-lib-calico\") pod \"calico-node-4mjsk\" (UID: \"daccc5c1-ca44-43cb-adf1-f0bb45d681bb\") " pod="calico-system/calico-node-4mjsk" Jul 2 00:35:04.785854 kubelet[2632]: I0702 00:35:04.784711 2632 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/daccc5c1-ca44-43cb-adf1-f0bb45d681bb-cni-bin-dir\") pod \"calico-node-4mjsk\" (UID: \"daccc5c1-ca44-43cb-adf1-f0bb45d681bb\") " pod="calico-system/calico-node-4mjsk" Jul 2 00:35:04.786233 kubelet[2632]: I0702 00:35:04.784748 2632 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/daccc5c1-ca44-43cb-adf1-f0bb45d681bb-xtables-lock\") pod \"calico-node-4mjsk\" (UID: \"daccc5c1-ca44-43cb-adf1-f0bb45d681bb\") " pod="calico-system/calico-node-4mjsk" Jul 2 00:35:04.786233 kubelet[2632]: I0702 00:35:04.784781 2632 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/daccc5c1-ca44-43cb-adf1-f0bb45d681bb-policysync\") pod \"calico-node-4mjsk\" (UID: \"daccc5c1-ca44-43cb-adf1-f0bb45d681bb\") " pod="calico-system/calico-node-4mjsk" Jul 2 00:35:04.786233 kubelet[2632]: I0702 00:35:04.784835 2632 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/daccc5c1-ca44-43cb-adf1-f0bb45d681bb-flexvol-driver-host\") pod \"calico-node-4mjsk\" (UID: \"daccc5c1-ca44-43cb-adf1-f0bb45d681bb\") " pod="calico-system/calico-node-4mjsk" Jul 2 00:35:04.786233 kubelet[2632]: I0702 00:35:04.784870 2632 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ct57t\" (UniqueName: \"kubernetes.io/projected/daccc5c1-ca44-43cb-adf1-f0bb45d681bb-kube-api-access-ct57t\") pod \"calico-node-4mjsk\" (UID: \"daccc5c1-ca44-43cb-adf1-f0bb45d681bb\") " pod="calico-system/calico-node-4mjsk" Jul 2 00:35:04.786233 kubelet[2632]: I0702 00:35:04.784963 2632 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: 
\"kubernetes.io/host-path/daccc5c1-ca44-43cb-adf1-f0bb45d681bb-cni-net-dir\") pod \"calico-node-4mjsk\" (UID: \"daccc5c1-ca44-43cb-adf1-f0bb45d681bb\") " pod="calico-system/calico-node-4mjsk" Jul 2 00:35:04.786448 kubelet[2632]: I0702 00:35:04.784997 2632 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/daccc5c1-ca44-43cb-adf1-f0bb45d681bb-cni-log-dir\") pod \"calico-node-4mjsk\" (UID: \"daccc5c1-ca44-43cb-adf1-f0bb45d681bb\") " pod="calico-system/calico-node-4mjsk" Jul 2 00:35:04.786448 kubelet[2632]: I0702 00:35:04.785034 2632 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/daccc5c1-ca44-43cb-adf1-f0bb45d681bb-lib-modules\") pod \"calico-node-4mjsk\" (UID: \"daccc5c1-ca44-43cb-adf1-f0bb45d681bb\") " pod="calico-system/calico-node-4mjsk" Jul 2 00:35:04.809512 kubelet[2632]: I0702 00:35:04.809462 2632 topology_manager.go:215] "Topology Admit Handler" podUID="f825e197-24d6-43c1-8001-acbd6a4ca977" podNamespace="calico-system" podName="csi-node-driver-9ksc4" Jul 2 00:35:04.810570 kubelet[2632]: E0702 00:35:04.810526 2632 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9ksc4" podUID="f825e197-24d6-43c1-8001-acbd6a4ca977" Jul 2 00:35:04.887587 kubelet[2632]: I0702 00:35:04.887243 2632 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f825e197-24d6-43c1-8001-acbd6a4ca977-kubelet-dir\") pod \"csi-node-driver-9ksc4\" (UID: \"f825e197-24d6-43c1-8001-acbd6a4ca977\") " pod="calico-system/csi-node-driver-9ksc4" Jul 2 00:35:04.887587 kubelet[2632]: I0702 
00:35:04.887290 2632 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f825e197-24d6-43c1-8001-acbd6a4ca977-registration-dir\") pod \"csi-node-driver-9ksc4\" (UID: \"f825e197-24d6-43c1-8001-acbd6a4ca977\") " pod="calico-system/csi-node-driver-9ksc4" Jul 2 00:35:04.887587 kubelet[2632]: I0702 00:35:04.887432 2632 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f825e197-24d6-43c1-8001-acbd6a4ca977-socket-dir\") pod \"csi-node-driver-9ksc4\" (UID: \"f825e197-24d6-43c1-8001-acbd6a4ca977\") " pod="calico-system/csi-node-driver-9ksc4" Jul 2 00:35:04.887587 kubelet[2632]: I0702 00:35:04.887479 2632 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vq99j\" (UniqueName: \"kubernetes.io/projected/f825e197-24d6-43c1-8001-acbd6a4ca977-kube-api-access-vq99j\") pod \"csi-node-driver-9ksc4\" (UID: \"f825e197-24d6-43c1-8001-acbd6a4ca977\") " pod="calico-system/csi-node-driver-9ksc4" Jul 2 00:35:04.887832 kubelet[2632]: I0702 00:35:04.887660 2632 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/f825e197-24d6-43c1-8001-acbd6a4ca977-varrun\") pod \"csi-node-driver-9ksc4\" (UID: \"f825e197-24d6-43c1-8001-acbd6a4ca977\") " pod="calico-system/csi-node-driver-9ksc4" Jul 2 00:35:04.896354 kubelet[2632]: E0702 00:35:04.894553 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:35:04.896354 kubelet[2632]: W0702 00:35:04.894601 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:35:04.896354 
kubelet[2632]: E0702 00:35:04.894631 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:35:04.897176 kubelet[2632]: E0702 00:35:04.897015 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:35:04.897176 kubelet[2632]: W0702 00:35:04.897122 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:35:04.897255 kubelet[2632]: E0702 00:35:04.897175 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:35:04.908120 kubelet[2632]: E0702 00:35:04.908088 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:35:04.908120 kubelet[2632]: W0702 00:35:04.908108 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:35:04.908382 kubelet[2632]: E0702 00:35:04.908314 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:35:04.909188 kubelet[2632]: E0702 00:35:04.909132 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:35:04.909188 kubelet[2632]: W0702 00:35:04.909151 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:35:04.909188 kubelet[2632]: E0702 00:35:04.909172 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:35:04.909413 kubelet[2632]: E0702 00:35:04.909379 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:35:04.909413 kubelet[2632]: W0702 00:35:04.909395 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:35:04.909413 kubelet[2632]: E0702 00:35:04.909411 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:35:04.988460 kubelet[2632]: E0702 00:35:04.988432 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:35:04.988460 kubelet[2632]: W0702 00:35:04.988455 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:35:04.988460 kubelet[2632]: E0702 00:35:04.988480 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:35:04.989085 kubelet[2632]: E0702 00:35:04.988753 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:35:04.989085 kubelet[2632]: W0702 00:35:04.988797 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:35:04.989085 kubelet[2632]: E0702 00:35:04.988825 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:35:04.989338 kubelet[2632]: E0702 00:35:04.989118 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:35:04.989338 kubelet[2632]: W0702 00:35:04.989138 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:35:04.989338 kubelet[2632]: E0702 00:35:04.989207 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:35:04.989552 kubelet[2632]: E0702 00:35:04.989474 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:35:04.989552 kubelet[2632]: W0702 00:35:04.989541 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:35:04.990860 kubelet[2632]: E0702 00:35:04.989567 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:35:04.991242 kubelet[2632]: E0702 00:35:04.991028 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:35:04.991242 kubelet[2632]: W0702 00:35:04.991050 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:35:04.991242 kubelet[2632]: E0702 00:35:04.991103 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:35:04.991439 kubelet[2632]: E0702 00:35:04.991426 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:35:04.991504 kubelet[2632]: W0702 00:35:04.991493 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:35:04.991622 kubelet[2632]: E0702 00:35:04.991597 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:35:04.991882 kubelet[2632]: E0702 00:35:04.991869 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:35:04.992019 kubelet[2632]: W0702 00:35:04.991988 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:35:04.992272 kubelet[2632]: E0702 00:35:04.992197 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:35:04.992839 kubelet[2632]: E0702 00:35:04.992827 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:35:04.992984 kubelet[2632]: W0702 00:35:04.992897 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:35:04.992984 kubelet[2632]: E0702 00:35:04.992934 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:35:05.543251 kubelet[2632]: E0702 00:35:05.543165 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:35:05.543251 kubelet[2632]: W0702 00:35:05.543185 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:35:05.543251 kubelet[2632]: E0702 00:35:05.543205 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:35:05.619846 kubelet[2632]: E0702 00:35:05.619699 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:35:05.619846 kubelet[2632]: W0702 00:35:05.619720 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:35:05.619846 kubelet[2632]: E0702 00:35:05.619741 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:35:05.620451 kubelet[2632]: E0702 00:35:05.620326 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:35:05.620451 kubelet[2632]: W0702 00:35:05.620339 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:35:05.620451 kubelet[2632]: E0702 00:35:05.620352 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:35:05.620765 kubelet[2632]: E0702 00:35:05.620553 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:35:05.620765 kubelet[2632]: W0702 00:35:05.620562 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:35:05.620765 kubelet[2632]: E0702 00:35:05.620589 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:35:05.620892 kubelet[2632]: E0702 00:35:05.620881 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:35:05.621110 kubelet[2632]: W0702 00:35:05.621095 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:35:05.621199 kubelet[2632]: E0702 00:35:05.621189 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:35:05.621551 kubelet[2632]: E0702 00:35:05.621452 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:35:05.621616 kubelet[2632]: W0702 00:35:05.621605 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:35:05.621705 kubelet[2632]: E0702 00:35:05.621664 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:35:05.722893 kubelet[2632]: E0702 00:35:05.722819 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:35:05.722893 kubelet[2632]: W0702 00:35:05.722853 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:35:05.722893 kubelet[2632]: E0702 00:35:05.722881 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:35:05.724111 kubelet[2632]: E0702 00:35:05.723621 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:35:05.724111 kubelet[2632]: W0702 00:35:05.723632 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:35:05.724111 kubelet[2632]: E0702 00:35:05.723648 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:35:05.725706 kubelet[2632]: E0702 00:35:05.725648 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:35:05.725706 kubelet[2632]: W0702 00:35:05.725666 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:35:05.725706 kubelet[2632]: E0702 00:35:05.725681 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:35:05.726356 kubelet[2632]: E0702 00:35:05.726313 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:35:05.726356 kubelet[2632]: W0702 00:35:05.726330 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:35:05.726577 kubelet[2632]: E0702 00:35:05.726536 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:35:05.727169 kubelet[2632]: E0702 00:35:05.727149 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:35:05.727169 kubelet[2632]: W0702 00:35:05.727161 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:35:05.727169 kubelet[2632]: E0702 00:35:05.727177 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:35:05.749294 kubelet[2632]: E0702 00:35:05.749245 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:35:05.749294 kubelet[2632]: W0702 00:35:05.749266 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:35:05.749294 kubelet[2632]: E0702 00:35:05.749287 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:35:05.757423 kubelet[2632]: E0702 00:35:05.757392 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:35:05.757423 kubelet[2632]: W0702 00:35:05.757415 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:35:05.757602 kubelet[2632]: E0702 00:35:05.757437 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:35:05.805632 kubelet[2632]: E0702 00:35:05.805492 2632 projected.go:294] Couldn't get configMap calico-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jul 2 00:35:05.805632 kubelet[2632]: E0702 00:35:05.805540 2632 projected.go:200] Error preparing data for projected volume kube-api-access-qqzmz for pod calico-system/calico-typha-89866bd7c-nbtfm: failed to sync configmap cache: timed out waiting for the condition Jul 2 00:35:05.807163 kubelet[2632]: E0702 00:35:05.806887 2632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/483ea29d-8fe5-4d1c-a83f-169e2a40a5a5-kube-api-access-qqzmz podName:483ea29d-8fe5-4d1c-a83f-169e2a40a5a5 nodeName:}" failed. No retries permitted until 2024-07-02 00:35:06.306859683 +0000 UTC m=+23.190103640 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-qqzmz" (UniqueName: "kubernetes.io/projected/483ea29d-8fe5-4d1c-a83f-169e2a40a5a5-kube-api-access-qqzmz") pod "calico-typha-89866bd7c-nbtfm" (UID: "483ea29d-8fe5-4d1c-a83f-169e2a40a5a5") : failed to sync configmap cache: timed out waiting for the condition Jul 2 00:35:05.828569 kubelet[2632]: E0702 00:35:05.828510 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:35:05.828720 kubelet[2632]: W0702 00:35:05.828546 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:35:05.828720 kubelet[2632]: E0702 00:35:05.828610 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:35:05.829530 kubelet[2632]: E0702 00:35:05.829512 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:35:05.829530 kubelet[2632]: W0702 00:35:05.829528 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:35:05.829613 kubelet[2632]: E0702 00:35:05.829544 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:35:05.830211 kubelet[2632]: E0702 00:35:05.830162 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:35:05.830452 kubelet[2632]: W0702 00:35:05.830291 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:35:05.830452 kubelet[2632]: E0702 00:35:05.830320 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:35:05.850538 kubelet[2632]: E0702 00:35:05.849853 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:35:05.850538 kubelet[2632]: W0702 00:35:05.849875 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:35:05.850538 kubelet[2632]: E0702 00:35:05.849896 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:35:05.860115 kubelet[2632]: E0702 00:35:05.860034 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:35:05.860115 kubelet[2632]: W0702 00:35:05.860093 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:35:05.860115 kubelet[2632]: E0702 00:35:05.860123 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:35:05.908478 containerd[1444]: time="2024-07-02T00:35:05.908016780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4mjsk,Uid:daccc5c1-ca44-43cb-adf1-f0bb45d681bb,Namespace:calico-system,Attempt:0,}" Jul 2 00:35:05.932239 kubelet[2632]: E0702 00:35:05.932046 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:35:05.932239 kubelet[2632]: W0702 00:35:05.932138 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:35:05.932239 kubelet[2632]: E0702 00:35:05.932168 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:35:05.972052 containerd[1444]: time="2024-07-02T00:35:05.970559039Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:35:05.972052 containerd[1444]: time="2024-07-02T00:35:05.971482388Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:35:05.972052 containerd[1444]: time="2024-07-02T00:35:05.971517037Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:35:05.972052 containerd[1444]: time="2024-07-02T00:35:05.971570213Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:35:06.003790 systemd[1]: Started cri-containerd-20d5de4acba00361d37369105f74a979d943640f30ae5726d85a668381aeeea9.scope - libcontainer container 20d5de4acba00361d37369105f74a979d943640f30ae5726d85a668381aeeea9. Jul 2 00:35:06.033723 kubelet[2632]: E0702 00:35:06.033401 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:35:06.033723 kubelet[2632]: W0702 00:35:06.033537 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:35:06.033723 kubelet[2632]: E0702 00:35:06.033567 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:35:06.049912 containerd[1444]: time="2024-07-02T00:35:06.049835626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4mjsk,Uid:daccc5c1-ca44-43cb-adf1-f0bb45d681bb,Namespace:calico-system,Attempt:0,} returns sandbox id \"20d5de4acba00361d37369105f74a979d943640f30ae5726d85a668381aeeea9\"" Jul 2 00:35:06.053691 containerd[1444]: time="2024-07-02T00:35:06.053634940Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Jul 2 00:35:06.135250 kubelet[2632]: E0702 00:35:06.135137 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:35:06.135250 kubelet[2632]: W0702 00:35:06.135165 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:35:06.135250 kubelet[2632]: E0702 00:35:06.135194 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:35:06.237720 kubelet[2632]: E0702 00:35:06.237653 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:35:06.237720 kubelet[2632]: W0702 00:35:06.237699 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:35:06.238134 kubelet[2632]: E0702 00:35:06.237780 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:35:06.341324 kubelet[2632]: E0702 00:35:06.340942 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:35:06.342215 kubelet[2632]: W0702 00:35:06.341256 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:35:06.342215 kubelet[2632]: E0702 00:35:06.341824 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:35:06.344519 kubelet[2632]: E0702 00:35:06.344143 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:35:06.344519 kubelet[2632]: W0702 00:35:06.344186 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:35:06.344519 kubelet[2632]: E0702 00:35:06.344274 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:35:06.347168 kubelet[2632]: E0702 00:35:06.345976 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:35:06.347168 kubelet[2632]: W0702 00:35:06.346004 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:35:06.347168 kubelet[2632]: E0702 00:35:06.346035 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:35:06.347578 kubelet[2632]: E0702 00:35:06.347234 2632 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9ksc4" podUID="f825e197-24d6-43c1-8001-acbd6a4ca977" Jul 2 00:35:06.354484 kubelet[2632]: E0702 00:35:06.353968 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:35:06.354484 kubelet[2632]: W0702 00:35:06.354039 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:35:06.354484 kubelet[2632]: E0702 00:35:06.354144 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:35:06.356441 kubelet[2632]: E0702 00:35:06.355299 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:35:06.356441 kubelet[2632]: W0702 00:35:06.355317 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:35:06.356441 kubelet[2632]: E0702 00:35:06.355662 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:35:06.369235 kubelet[2632]: E0702 00:35:06.369192 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:35:06.369235 kubelet[2632]: W0702 00:35:06.369212 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:35:06.369235 kubelet[2632]: E0702 00:35:06.369232 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:35:06.409046 containerd[1444]: time="2024-07-02T00:35:06.408469706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-89866bd7c-nbtfm,Uid:483ea29d-8fe5-4d1c-a83f-169e2a40a5a5,Namespace:calico-system,Attempt:0,}" Jul 2 00:35:06.453298 containerd[1444]: time="2024-07-02T00:35:06.451977370Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:35:06.453298 containerd[1444]: time="2024-07-02T00:35:06.452140275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:35:06.453298 containerd[1444]: time="2024-07-02T00:35:06.452169232Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:35:06.453298 containerd[1444]: time="2024-07-02T00:35:06.452189092Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:35:06.489469 systemd[1]: Started cri-containerd-ab40409f176ecc174edc4cef473b770a88fcacab3c14e075c8afb3b81b028cd3.scope - libcontainer container ab40409f176ecc174edc4cef473b770a88fcacab3c14e075c8afb3b81b028cd3. Jul 2 00:35:06.613053 containerd[1444]: time="2024-07-02T00:35:06.612828394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-89866bd7c-nbtfm,Uid:483ea29d-8fe5-4d1c-a83f-169e2a40a5a5,Namespace:calico-system,Attempt:0,} returns sandbox id \"ab40409f176ecc174edc4cef473b770a88fcacab3c14e075c8afb3b81b028cd3\"" Jul 2 00:35:08.234508 containerd[1444]: time="2024-07-02T00:35:08.233459095Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:35:08.235124 containerd[1444]: time="2024-07-02T00:35:08.235051342Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=5140568" Jul 2 00:35:08.237536 containerd[1444]: time="2024-07-02T00:35:08.237496244Z" level=info msg="ImageCreate event name:\"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:35:08.244700 containerd[1444]: time="2024-07-02T00:35:08.244658017Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Jul 2 00:35:08.250008 containerd[1444]: time="2024-07-02T00:35:08.249868550Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6588288\" in 2.196183581s" Jul 2 00:35:08.250008 containerd[1444]: time="2024-07-02T00:35:08.249918018Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\"" Jul 2 00:35:08.253200 containerd[1444]: time="2024-07-02T00:35:08.252900129Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\"" Jul 2 00:35:08.258368 containerd[1444]: time="2024-07-02T00:35:08.258329728Z" level=info msg="CreateContainer within sandbox \"20d5de4acba00361d37369105f74a979d943640f30ae5726d85a668381aeeea9\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 2 00:35:08.286615 containerd[1444]: time="2024-07-02T00:35:08.286392828Z" level=info msg="CreateContainer within sandbox \"20d5de4acba00361d37369105f74a979d943640f30ae5726d85a668381aeeea9\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"60bc2875ccf83f58de6177b46f5f07fd592dc640297a6c35226f0db9f2aced9d\"" Jul 2 00:35:08.289560 containerd[1444]: time="2024-07-02T00:35:08.288309150Z" level=info msg="StartContainer for \"60bc2875ccf83f58de6177b46f5f07fd592dc640297a6c35226f0db9f2aced9d\"" Jul 2 00:35:08.346236 systemd[1]: Started cri-containerd-60bc2875ccf83f58de6177b46f5f07fd592dc640297a6c35226f0db9f2aced9d.scope - libcontainer container 60bc2875ccf83f58de6177b46f5f07fd592dc640297a6c35226f0db9f2aced9d. 
Jul 2 00:35:08.348103 kubelet[2632]: E0702 00:35:08.347811 2632 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9ksc4" podUID="f825e197-24d6-43c1-8001-acbd6a4ca977" Jul 2 00:35:08.392285 containerd[1444]: time="2024-07-02T00:35:08.392237299Z" level=info msg="StartContainer for \"60bc2875ccf83f58de6177b46f5f07fd592dc640297a6c35226f0db9f2aced9d\" returns successfully" Jul 2 00:35:08.411549 systemd[1]: cri-containerd-60bc2875ccf83f58de6177b46f5f07fd592dc640297a6c35226f0db9f2aced9d.scope: Deactivated successfully. Jul 2 00:35:08.449255 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-60bc2875ccf83f58de6177b46f5f07fd592dc640297a6c35226f0db9f2aced9d-rootfs.mount: Deactivated successfully. Jul 2 00:35:08.529117 containerd[1444]: time="2024-07-02T00:35:08.528947946Z" level=info msg="shim disconnected" id=60bc2875ccf83f58de6177b46f5f07fd592dc640297a6c35226f0db9f2aced9d namespace=k8s.io Jul 2 00:35:08.529117 containerd[1444]: time="2024-07-02T00:35:08.529022925Z" level=warning msg="cleaning up after shim disconnected" id=60bc2875ccf83f58de6177b46f5f07fd592dc640297a6c35226f0db9f2aced9d namespace=k8s.io Jul 2 00:35:08.529117 containerd[1444]: time="2024-07-02T00:35:08.529037954Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:35:10.348216 kubelet[2632]: E0702 00:35:10.346944 2632 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9ksc4" podUID="f825e197-24d6-43c1-8001-acbd6a4ca977" Jul 2 00:35:12.259139 containerd[1444]: time="2024-07-02T00:35:12.259099278Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:35:12.261127 containerd[1444]: time="2024-07-02T00:35:12.261087521Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=29458030" Jul 2 00:35:12.263158 containerd[1444]: time="2024-07-02T00:35:12.262796015Z" level=info msg="ImageCreate event name:\"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:35:12.266023 containerd[1444]: time="2024-07-02T00:35:12.265979182Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:35:12.266867 containerd[1444]: time="2024-07-02T00:35:12.266840712Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"30905782\" in 4.013885104s" Jul 2 00:35:12.266949 containerd[1444]: time="2024-07-02T00:35:12.266933161Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\"" Jul 2 00:35:12.267731 containerd[1444]: time="2024-07-02T00:35:12.267525743Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\"" Jul 2 00:35:12.285171 containerd[1444]: time="2024-07-02T00:35:12.285131977Z" level=info msg="CreateContainer within sandbox \"ab40409f176ecc174edc4cef473b770a88fcacab3c14e075c8afb3b81b028cd3\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 2 00:35:12.306354 containerd[1444]: 
time="2024-07-02T00:35:12.306226706Z" level=info msg="CreateContainer within sandbox \"ab40409f176ecc174edc4cef473b770a88fcacab3c14e075c8afb3b81b028cd3\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"f5520e0504a84fd486f258462c17d8c2e9fcd0051380e714bdd4e923800b05e1\"" Jul 2 00:35:12.306932 containerd[1444]: time="2024-07-02T00:35:12.306892750Z" level=info msg="StartContainer for \"f5520e0504a84fd486f258462c17d8c2e9fcd0051380e714bdd4e923800b05e1\"" Jul 2 00:35:12.342253 systemd[1]: Started cri-containerd-f5520e0504a84fd486f258462c17d8c2e9fcd0051380e714bdd4e923800b05e1.scope - libcontainer container f5520e0504a84fd486f258462c17d8c2e9fcd0051380e714bdd4e923800b05e1. Jul 2 00:35:12.347843 kubelet[2632]: E0702 00:35:12.347422 2632 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9ksc4" podUID="f825e197-24d6-43c1-8001-acbd6a4ca977" Jul 2 00:35:12.393773 containerd[1444]: time="2024-07-02T00:35:12.393718322Z" level=info msg="StartContainer for \"f5520e0504a84fd486f258462c17d8c2e9fcd0051380e714bdd4e923800b05e1\" returns successfully" Jul 2 00:35:12.524858 kubelet[2632]: I0702 00:35:12.524742 2632 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-89866bd7c-nbtfm" podStartSLOduration=2.873194442 podStartE2EDuration="8.524697089s" podCreationTimestamp="2024-07-02 00:35:04 +0000 UTC" firstStartedPulling="2024-07-02 00:35:06.615705933 +0000 UTC m=+23.498949891" lastFinishedPulling="2024-07-02 00:35:12.267208581 +0000 UTC m=+29.150452538" observedRunningTime="2024-07-02 00:35:12.524392832 +0000 UTC m=+29.407636789" watchObservedRunningTime="2024-07-02 00:35:12.524697089 +0000 UTC m=+29.407941046" Jul 2 00:35:13.510824 kubelet[2632]: I0702 00:35:13.510782 2632 prober_manager.go:312] "Failed 
to trigger a manual run" probe="Readiness" Jul 2 00:35:14.349821 kubelet[2632]: E0702 00:35:14.347436 2632 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9ksc4" podUID="f825e197-24d6-43c1-8001-acbd6a4ca977" Jul 2 00:35:16.346897 kubelet[2632]: E0702 00:35:16.346832 2632 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9ksc4" podUID="f825e197-24d6-43c1-8001-acbd6a4ca977" Jul 2 00:35:18.117402 containerd[1444]: time="2024-07-02T00:35:18.117341920Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:35:18.119390 containerd[1444]: time="2024-07-02T00:35:18.119084738Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=93087850" Jul 2 00:35:18.120714 containerd[1444]: time="2024-07-02T00:35:18.120645589Z" level=info msg="ImageCreate event name:\"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:35:18.124049 containerd[1444]: time="2024-07-02T00:35:18.124001987Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:35:18.125696 containerd[1444]: time="2024-07-02T00:35:18.125177639Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\", repo tag 
\"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"94535610\" in 5.85762008s" Jul 2 00:35:18.125696 containerd[1444]: time="2024-07-02T00:35:18.125216909Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\"" Jul 2 00:35:18.128087 containerd[1444]: time="2024-07-02T00:35:18.128009437Z" level=info msg="CreateContainer within sandbox \"20d5de4acba00361d37369105f74a979d943640f30ae5726d85a668381aeeea9\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 2 00:35:18.154913 containerd[1444]: time="2024-07-02T00:35:18.154851801Z" level=info msg="CreateContainer within sandbox \"20d5de4acba00361d37369105f74a979d943640f30ae5726d85a668381aeeea9\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ab8a41d2e61849dcc81345506c0c75d11f9a6af5e475b9c794c4420898855479\"" Jul 2 00:35:18.157421 containerd[1444]: time="2024-07-02T00:35:18.155718894Z" level=info msg="StartContainer for \"ab8a41d2e61849dcc81345506c0c75d11f9a6af5e475b9c794c4420898855479\"" Jul 2 00:35:18.319420 systemd[1]: Started cri-containerd-ab8a41d2e61849dcc81345506c0c75d11f9a6af5e475b9c794c4420898855479.scope - libcontainer container ab8a41d2e61849dcc81345506c0c75d11f9a6af5e475b9c794c4420898855479. 
Jul 2 00:35:18.438298 kubelet[2632]: E0702 00:35:18.346682 2632 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9ksc4" podUID="f825e197-24d6-43c1-8001-acbd6a4ca977" Jul 2 00:35:18.638327 containerd[1444]: time="2024-07-02T00:35:18.638251237Z" level=info msg="StartContainer for \"ab8a41d2e61849dcc81345506c0c75d11f9a6af5e475b9c794c4420898855479\" returns successfully" Jul 2 00:35:20.192731 systemd[1]: cri-containerd-ab8a41d2e61849dcc81345506c0c75d11f9a6af5e475b9c794c4420898855479.scope: Deactivated successfully. Jul 2 00:35:20.232493 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ab8a41d2e61849dcc81345506c0c75d11f9a6af5e475b9c794c4420898855479-rootfs.mount: Deactivated successfully. Jul 2 00:35:20.254116 containerd[1444]: time="2024-07-02T00:35:20.254006987Z" level=info msg="shim disconnected" id=ab8a41d2e61849dcc81345506c0c75d11f9a6af5e475b9c794c4420898855479 namespace=k8s.io Jul 2 00:35:20.254116 containerd[1444]: time="2024-07-02T00:35:20.254095424Z" level=warning msg="cleaning up after shim disconnected" id=ab8a41d2e61849dcc81345506c0c75d11f9a6af5e475b9c794c4420898855479 namespace=k8s.io Jul 2 00:35:20.254116 containerd[1444]: time="2024-07-02T00:35:20.254106741Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:35:20.292112 kubelet[2632]: I0702 00:35:20.292079 2632 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jul 2 00:35:20.329168 kubelet[2632]: I0702 00:35:20.329053 2632 topology_manager.go:215] "Topology Admit Handler" podUID="f76e7cb0-2dbd-4d82-8219-6278834c7267" podNamespace="kube-system" podName="coredns-76f75df574-q8tqz" Jul 2 00:35:20.336662 kubelet[2632]: I0702 00:35:20.335937 2632 topology_manager.go:215] "Topology Admit Handler" podUID="4aa0f677-0725-46c4-8993-0c9903cb9cb0" 
podNamespace="kube-system" podName="coredns-76f75df574-wmfxs" Jul 2 00:35:20.340079 systemd[1]: Created slice kubepods-burstable-podf76e7cb0_2dbd_4d82_8219_6278834c7267.slice - libcontainer container kubepods-burstable-podf76e7cb0_2dbd_4d82_8219_6278834c7267.slice. Jul 2 00:35:20.343529 kubelet[2632]: I0702 00:35:20.341209 2632 topology_manager.go:215] "Topology Admit Handler" podUID="a34c27ef-25a3-4bac-90f8-f587a5b80a52" podNamespace="calico-system" podName="calico-kube-controllers-7548bf8497-6bsv7" Jul 2 00:35:20.351486 kubelet[2632]: I0702 00:35:20.350507 2632 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7w7p5\" (UniqueName: \"kubernetes.io/projected/f76e7cb0-2dbd-4d82-8219-6278834c7267-kube-api-access-7w7p5\") pod \"coredns-76f75df574-q8tqz\" (UID: \"f76e7cb0-2dbd-4d82-8219-6278834c7267\") " pod="kube-system/coredns-76f75df574-q8tqz" Jul 2 00:35:20.351486 kubelet[2632]: I0702 00:35:20.350600 2632 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f76e7cb0-2dbd-4d82-8219-6278834c7267-config-volume\") pod \"coredns-76f75df574-q8tqz\" (UID: \"f76e7cb0-2dbd-4d82-8219-6278834c7267\") " pod="kube-system/coredns-76f75df574-q8tqz" Jul 2 00:35:20.357798 systemd[1]: Created slice kubepods-burstable-pod4aa0f677_0725_46c4_8993_0c9903cb9cb0.slice - libcontainer container kubepods-burstable-pod4aa0f677_0725_46c4_8993_0c9903cb9cb0.slice. Jul 2 00:35:20.372838 systemd[1]: Created slice kubepods-besteffort-podf825e197_24d6_43c1_8001_acbd6a4ca977.slice - libcontainer container kubepods-besteffort-podf825e197_24d6_43c1_8001_acbd6a4ca977.slice. 
Jul 2 00:35:20.378268 containerd[1444]: time="2024-07-02T00:35:20.377610408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9ksc4,Uid:f825e197-24d6-43c1-8001-acbd6a4ca977,Namespace:calico-system,Attempt:0,}" Jul 2 00:35:20.384096 systemd[1]: Created slice kubepods-besteffort-poda34c27ef_25a3_4bac_90f8_f587a5b80a52.slice - libcontainer container kubepods-besteffort-poda34c27ef_25a3_4bac_90f8_f587a5b80a52.slice. Jul 2 00:35:20.454854 kubelet[2632]: I0702 00:35:20.451205 2632 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4aa0f677-0725-46c4-8993-0c9903cb9cb0-config-volume\") pod \"coredns-76f75df574-wmfxs\" (UID: \"4aa0f677-0725-46c4-8993-0c9903cb9cb0\") " pod="kube-system/coredns-76f75df574-wmfxs" Jul 2 00:35:20.454854 kubelet[2632]: I0702 00:35:20.451352 2632 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a34c27ef-25a3-4bac-90f8-f587a5b80a52-tigera-ca-bundle\") pod \"calico-kube-controllers-7548bf8497-6bsv7\" (UID: \"a34c27ef-25a3-4bac-90f8-f587a5b80a52\") " pod="calico-system/calico-kube-controllers-7548bf8497-6bsv7" Jul 2 00:35:20.454854 kubelet[2632]: I0702 00:35:20.451434 2632 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25d4v\" (UniqueName: \"kubernetes.io/projected/a34c27ef-25a3-4bac-90f8-f587a5b80a52-kube-api-access-25d4v\") pod \"calico-kube-controllers-7548bf8497-6bsv7\" (UID: \"a34c27ef-25a3-4bac-90f8-f587a5b80a52\") " pod="calico-system/calico-kube-controllers-7548bf8497-6bsv7" Jul 2 00:35:20.454854 kubelet[2632]: I0702 00:35:20.451500 2632 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sn7s4\" (UniqueName: 
\"kubernetes.io/projected/4aa0f677-0725-46c4-8993-0c9903cb9cb0-kube-api-access-sn7s4\") pod \"coredns-76f75df574-wmfxs\" (UID: \"4aa0f677-0725-46c4-8993-0c9903cb9cb0\") " pod="kube-system/coredns-76f75df574-wmfxs" Jul 2 00:35:20.542367 containerd[1444]: time="2024-07-02T00:35:20.541495545Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\"" Jul 2 00:35:20.630165 containerd[1444]: time="2024-07-02T00:35:20.630100313Z" level=error msg="Failed to destroy network for sandbox \"b798bd5b0c7f3a6d3482321246f594534f1e455e2ba95619bc9840495362877c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:35:20.635683 containerd[1444]: time="2024-07-02T00:35:20.635632313Z" level=error msg="encountered an error cleaning up failed sandbox \"b798bd5b0c7f3a6d3482321246f594534f1e455e2ba95619bc9840495362877c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:35:20.635829 containerd[1444]: time="2024-07-02T00:35:20.635708771Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9ksc4,Uid:f825e197-24d6-43c1-8001-acbd6a4ca977,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b798bd5b0c7f3a6d3482321246f594534f1e455e2ba95619bc9840495362877c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:35:20.636014 kubelet[2632]: E0702 00:35:20.635973 2632 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"b798bd5b0c7f3a6d3482321246f594534f1e455e2ba95619bc9840495362877c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:35:20.636096 kubelet[2632]: E0702 00:35:20.636051 2632 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b798bd5b0c7f3a6d3482321246f594534f1e455e2ba95619bc9840495362877c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9ksc4" Jul 2 00:35:20.636142 kubelet[2632]: E0702 00:35:20.636127 2632 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b798bd5b0c7f3a6d3482321246f594534f1e455e2ba95619bc9840495362877c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9ksc4" Jul 2 00:35:20.636213 kubelet[2632]: E0702 00:35:20.636199 2632 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9ksc4_calico-system(f825e197-24d6-43c1-8001-acbd6a4ca977)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9ksc4_calico-system(f825e197-24d6-43c1-8001-acbd6a4ca977)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b798bd5b0c7f3a6d3482321246f594534f1e455e2ba95619bc9840495362877c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9ksc4" 
podUID="f825e197-24d6-43c1-8001-acbd6a4ca977" Jul 2 00:35:20.654427 containerd[1444]: time="2024-07-02T00:35:20.654358539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-q8tqz,Uid:f76e7cb0-2dbd-4d82-8219-6278834c7267,Namespace:kube-system,Attempt:0,}" Jul 2 00:35:20.668504 containerd[1444]: time="2024-07-02T00:35:20.668106865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-wmfxs,Uid:4aa0f677-0725-46c4-8993-0c9903cb9cb0,Namespace:kube-system,Attempt:0,}" Jul 2 00:35:20.692120 containerd[1444]: time="2024-07-02T00:35:20.691741815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7548bf8497-6bsv7,Uid:a34c27ef-25a3-4bac-90f8-f587a5b80a52,Namespace:calico-system,Attempt:0,}" Jul 2 00:35:20.852188 containerd[1444]: time="2024-07-02T00:35:20.852132219Z" level=error msg="Failed to destroy network for sandbox \"1a0b01236b28779688961a6e5a2e4fd44ee4e181d93e344124e3c88260cd168d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:35:20.853532 containerd[1444]: time="2024-07-02T00:35:20.853363016Z" level=error msg="encountered an error cleaning up failed sandbox \"1a0b01236b28779688961a6e5a2e4fd44ee4e181d93e344124e3c88260cd168d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:35:20.853532 containerd[1444]: time="2024-07-02T00:35:20.853428146Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-q8tqz,Uid:f76e7cb0-2dbd-4d82-8219-6278834c7267,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1a0b01236b28779688961a6e5a2e4fd44ee4e181d93e344124e3c88260cd168d\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:35:20.854369 kubelet[2632]: E0702 00:35:20.853840 2632 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a0b01236b28779688961a6e5a2e4fd44ee4e181d93e344124e3c88260cd168d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:35:20.854369 kubelet[2632]: E0702 00:35:20.853905 2632 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a0b01236b28779688961a6e5a2e4fd44ee4e181d93e344124e3c88260cd168d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-q8tqz" Jul 2 00:35:20.854369 kubelet[2632]: E0702 00:35:20.853934 2632 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a0b01236b28779688961a6e5a2e4fd44ee4e181d93e344124e3c88260cd168d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-q8tqz" Jul 2 00:35:20.854532 kubelet[2632]: E0702 00:35:20.854003 2632 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-q8tqz_kube-system(f76e7cb0-2dbd-4d82-8219-6278834c7267)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-q8tqz_kube-system(f76e7cb0-2dbd-4d82-8219-6278834c7267)\\\": rpc error: code = Unknown desc 
= failed to setup network for sandbox \\\"1a0b01236b28779688961a6e5a2e4fd44ee4e181d93e344124e3c88260cd168d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-q8tqz" podUID="f76e7cb0-2dbd-4d82-8219-6278834c7267" Jul 2 00:35:20.886681 containerd[1444]: time="2024-07-02T00:35:20.886095027Z" level=error msg="Failed to destroy network for sandbox \"f91170ed8c902e281c58992e0605f57f6a6f088fb13f4b3dd2c20e2a999f8215\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:35:20.888486 containerd[1444]: time="2024-07-02T00:35:20.887437166Z" level=error msg="encountered an error cleaning up failed sandbox \"f91170ed8c902e281c58992e0605f57f6a6f088fb13f4b3dd2c20e2a999f8215\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:35:20.888486 containerd[1444]: time="2024-07-02T00:35:20.887511231Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7548bf8497-6bsv7,Uid:a34c27ef-25a3-4bac-90f8-f587a5b80a52,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f91170ed8c902e281c58992e0605f57f6a6f088fb13f4b3dd2c20e2a999f8215\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:35:20.889085 kubelet[2632]: E0702 00:35:20.888742 2632 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"f91170ed8c902e281c58992e0605f57f6a6f088fb13f4b3dd2c20e2a999f8215\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:35:20.889085 kubelet[2632]: E0702 00:35:20.888976 2632 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f91170ed8c902e281c58992e0605f57f6a6f088fb13f4b3dd2c20e2a999f8215\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7548bf8497-6bsv7" Jul 2 00:35:20.889085 kubelet[2632]: E0702 00:35:20.889014 2632 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f91170ed8c902e281c58992e0605f57f6a6f088fb13f4b3dd2c20e2a999f8215\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7548bf8497-6bsv7" Jul 2 00:35:20.889974 kubelet[2632]: E0702 00:35:20.889942 2632 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7548bf8497-6bsv7_calico-system(a34c27ef-25a3-4bac-90f8-f587a5b80a52)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7548bf8497-6bsv7_calico-system(a34c27ef-25a3-4bac-90f8-f587a5b80a52)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f91170ed8c902e281c58992e0605f57f6a6f088fb13f4b3dd2c20e2a999f8215\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7548bf8497-6bsv7" podUID="a34c27ef-25a3-4bac-90f8-f587a5b80a52" Jul 2 00:35:20.904842 containerd[1444]: time="2024-07-02T00:35:20.904675348Z" level=error msg="Failed to destroy network for sandbox \"9282b1be0a3390b817103ba21e6911c3040d3eb0c0cbc2c471f8e7d110cfd3c6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:35:20.912593 containerd[1444]: time="2024-07-02T00:35:20.905280816Z" level=error msg="encountered an error cleaning up failed sandbox \"9282b1be0a3390b817103ba21e6911c3040d3eb0c0cbc2c471f8e7d110cfd3c6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:35:20.912593 containerd[1444]: time="2024-07-02T00:35:20.905342301Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-wmfxs,Uid:4aa0f677-0725-46c4-8993-0c9903cb9cb0,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9282b1be0a3390b817103ba21e6911c3040d3eb0c0cbc2c471f8e7d110cfd3c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:35:20.912703 kubelet[2632]: E0702 00:35:20.905668 2632 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9282b1be0a3390b817103ba21e6911c3040d3eb0c0cbc2c471f8e7d110cfd3c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:35:20.912703 kubelet[2632]: E0702 
00:35:20.905731 2632 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9282b1be0a3390b817103ba21e6911c3040d3eb0c0cbc2c471f8e7d110cfd3c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-wmfxs" Jul 2 00:35:20.912703 kubelet[2632]: E0702 00:35:20.905759 2632 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9282b1be0a3390b817103ba21e6911c3040d3eb0c0cbc2c471f8e7d110cfd3c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-wmfxs" Jul 2 00:35:20.912809 kubelet[2632]: E0702 00:35:20.905827 2632 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-wmfxs_kube-system(4aa0f677-0725-46c4-8993-0c9903cb9cb0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-wmfxs_kube-system(4aa0f677-0725-46c4-8993-0c9903cb9cb0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9282b1be0a3390b817103ba21e6911c3040d3eb0c0cbc2c471f8e7d110cfd3c6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-wmfxs" podUID="4aa0f677-0725-46c4-8993-0c9903cb9cb0" Jul 2 00:35:21.241451 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b798bd5b0c7f3a6d3482321246f594534f1e455e2ba95619bc9840495362877c-shm.mount: Deactivated successfully. 
Jul 2 00:35:21.547261 kubelet[2632]: I0702 00:35:21.546461 2632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9282b1be0a3390b817103ba21e6911c3040d3eb0c0cbc2c471f8e7d110cfd3c6" Jul 2 00:35:21.557334 kubelet[2632]: I0702 00:35:21.553240 2632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b798bd5b0c7f3a6d3482321246f594534f1e455e2ba95619bc9840495362877c" Jul 2 00:35:21.566274 containerd[1444]: time="2024-07-02T00:35:21.566169431Z" level=info msg="StopPodSandbox for \"b798bd5b0c7f3a6d3482321246f594534f1e455e2ba95619bc9840495362877c\"" Jul 2 00:35:21.581166 containerd[1444]: time="2024-07-02T00:35:21.579908761Z" level=info msg="StopPodSandbox for \"9282b1be0a3390b817103ba21e6911c3040d3eb0c0cbc2c471f8e7d110cfd3c6\"" Jul 2 00:35:21.592658 containerd[1444]: time="2024-07-02T00:35:21.592122740Z" level=info msg="Ensure that sandbox 9282b1be0a3390b817103ba21e6911c3040d3eb0c0cbc2c471f8e7d110cfd3c6 in task-service has been cleanup successfully" Jul 2 00:35:21.612516 containerd[1444]: time="2024-07-02T00:35:21.612415807Z" level=info msg="Ensure that sandbox b798bd5b0c7f3a6d3482321246f594534f1e455e2ba95619bc9840495362877c in task-service has been cleanup successfully" Jul 2 00:35:21.622979 kubelet[2632]: I0702 00:35:21.619989 2632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f91170ed8c902e281c58992e0605f57f6a6f088fb13f4b3dd2c20e2a999f8215" Jul 2 00:35:21.652424 containerd[1444]: time="2024-07-02T00:35:21.652331363Z" level=info msg="StopPodSandbox for \"f91170ed8c902e281c58992e0605f57f6a6f088fb13f4b3dd2c20e2a999f8215\"" Jul 2 00:35:21.654668 containerd[1444]: time="2024-07-02T00:35:21.653632633Z" level=info msg="Ensure that sandbox f91170ed8c902e281c58992e0605f57f6a6f088fb13f4b3dd2c20e2a999f8215 in task-service has been cleanup successfully" Jul 2 00:35:21.678339 kubelet[2632]: I0702 00:35:21.678304 2632 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="1a0b01236b28779688961a6e5a2e4fd44ee4e181d93e344124e3c88260cd168d" Jul 2 00:35:21.681244 containerd[1444]: time="2024-07-02T00:35:21.681188272Z" level=info msg="StopPodSandbox for \"1a0b01236b28779688961a6e5a2e4fd44ee4e181d93e344124e3c88260cd168d\"" Jul 2 00:35:21.681614 containerd[1444]: time="2024-07-02T00:35:21.681574805Z" level=info msg="Ensure that sandbox 1a0b01236b28779688961a6e5a2e4fd44ee4e181d93e344124e3c88260cd168d in task-service has been cleanup successfully" Jul 2 00:35:21.746847 containerd[1444]: time="2024-07-02T00:35:21.746782519Z" level=error msg="StopPodSandbox for \"f91170ed8c902e281c58992e0605f57f6a6f088fb13f4b3dd2c20e2a999f8215\" failed" error="failed to destroy network for sandbox \"f91170ed8c902e281c58992e0605f57f6a6f088fb13f4b3dd2c20e2a999f8215\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:35:21.747448 kubelet[2632]: E0702 00:35:21.747051 2632 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f91170ed8c902e281c58992e0605f57f6a6f088fb13f4b3dd2c20e2a999f8215\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f91170ed8c902e281c58992e0605f57f6a6f088fb13f4b3dd2c20e2a999f8215" Jul 2 00:35:21.747448 kubelet[2632]: E0702 00:35:21.747403 2632 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f91170ed8c902e281c58992e0605f57f6a6f088fb13f4b3dd2c20e2a999f8215"} Jul 2 00:35:21.747704 kubelet[2632]: E0702 00:35:21.747600 2632 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a34c27ef-25a3-4bac-90f8-f587a5b80a52\" with KillPodSandboxError: \"rpc error: code = 
Unknown desc = failed to destroy network for sandbox \\\"f91170ed8c902e281c58992e0605f57f6a6f088fb13f4b3dd2c20e2a999f8215\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 00:35:21.747704 kubelet[2632]: E0702 00:35:21.747681 2632 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a34c27ef-25a3-4bac-90f8-f587a5b80a52\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f91170ed8c902e281c58992e0605f57f6a6f088fb13f4b3dd2c20e2a999f8215\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7548bf8497-6bsv7" podUID="a34c27ef-25a3-4bac-90f8-f587a5b80a52" Jul 2 00:35:21.748247 containerd[1444]: time="2024-07-02T00:35:21.748118033Z" level=error msg="StopPodSandbox for \"b798bd5b0c7f3a6d3482321246f594534f1e455e2ba95619bc9840495362877c\" failed" error="failed to destroy network for sandbox \"b798bd5b0c7f3a6d3482321246f594534f1e455e2ba95619bc9840495362877c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:35:21.748520 kubelet[2632]: E0702 00:35:21.748389 2632 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b798bd5b0c7f3a6d3482321246f594534f1e455e2ba95619bc9840495362877c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b798bd5b0c7f3a6d3482321246f594534f1e455e2ba95619bc9840495362877c" 
Jul 2 00:35:21.748520 kubelet[2632]: E0702 00:35:21.748417 2632 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b798bd5b0c7f3a6d3482321246f594534f1e455e2ba95619bc9840495362877c"} Jul 2 00:35:21.748520 kubelet[2632]: E0702 00:35:21.748453 2632 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f825e197-24d6-43c1-8001-acbd6a4ca977\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b798bd5b0c7f3a6d3482321246f594534f1e455e2ba95619bc9840495362877c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 00:35:21.748520 kubelet[2632]: E0702 00:35:21.748495 2632 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f825e197-24d6-43c1-8001-acbd6a4ca977\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b798bd5b0c7f3a6d3482321246f594534f1e455e2ba95619bc9840495362877c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9ksc4" podUID="f825e197-24d6-43c1-8001-acbd6a4ca977" Jul 2 00:35:21.752166 containerd[1444]: time="2024-07-02T00:35:21.752090722Z" level=error msg="StopPodSandbox for \"9282b1be0a3390b817103ba21e6911c3040d3eb0c0cbc2c471f8e7d110cfd3c6\" failed" error="failed to destroy network for sandbox \"9282b1be0a3390b817103ba21e6911c3040d3eb0c0cbc2c471f8e7d110cfd3c6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:35:21.752487 kubelet[2632]: E0702 00:35:21.752447 2632 
remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9282b1be0a3390b817103ba21e6911c3040d3eb0c0cbc2c471f8e7d110cfd3c6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9282b1be0a3390b817103ba21e6911c3040d3eb0c0cbc2c471f8e7d110cfd3c6" Jul 2 00:35:21.752625 kubelet[2632]: E0702 00:35:21.752560 2632 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9282b1be0a3390b817103ba21e6911c3040d3eb0c0cbc2c471f8e7d110cfd3c6"} Jul 2 00:35:21.752625 kubelet[2632]: E0702 00:35:21.752603 2632 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4aa0f677-0725-46c4-8993-0c9903cb9cb0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9282b1be0a3390b817103ba21e6911c3040d3eb0c0cbc2c471f8e7d110cfd3c6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 00:35:21.752775 kubelet[2632]: E0702 00:35:21.752754 2632 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4aa0f677-0725-46c4-8993-0c9903cb9cb0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9282b1be0a3390b817103ba21e6911c3040d3eb0c0cbc2c471f8e7d110cfd3c6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-wmfxs" podUID="4aa0f677-0725-46c4-8993-0c9903cb9cb0" Jul 2 00:35:21.761029 containerd[1444]: time="2024-07-02T00:35:21.760985081Z" level=error 
msg="StopPodSandbox for \"1a0b01236b28779688961a6e5a2e4fd44ee4e181d93e344124e3c88260cd168d\" failed" error="failed to destroy network for sandbox \"1a0b01236b28779688961a6e5a2e4fd44ee4e181d93e344124e3c88260cd168d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:35:21.761605 kubelet[2632]: E0702 00:35:21.761305 2632 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1a0b01236b28779688961a6e5a2e4fd44ee4e181d93e344124e3c88260cd168d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1a0b01236b28779688961a6e5a2e4fd44ee4e181d93e344124e3c88260cd168d" Jul 2 00:35:21.761605 kubelet[2632]: E0702 00:35:21.761339 2632 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1a0b01236b28779688961a6e5a2e4fd44ee4e181d93e344124e3c88260cd168d"} Jul 2 00:35:21.761605 kubelet[2632]: E0702 00:35:21.761394 2632 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f76e7cb0-2dbd-4d82-8219-6278834c7267\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1a0b01236b28779688961a6e5a2e4fd44ee4e181d93e344124e3c88260cd168d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 00:35:21.761605 kubelet[2632]: E0702 00:35:21.761427 2632 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f76e7cb0-2dbd-4d82-8219-6278834c7267\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy 
network for sandbox \\\"1a0b01236b28779688961a6e5a2e4fd44ee4e181d93e344124e3c88260cd168d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-q8tqz" podUID="f76e7cb0-2dbd-4d82-8219-6278834c7267" Jul 2 00:35:31.129126 kubelet[2632]: I0702 00:35:31.128271 2632 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 2 00:35:33.350818 containerd[1444]: time="2024-07-02T00:35:33.350726352Z" level=info msg="StopPodSandbox for \"1a0b01236b28779688961a6e5a2e4fd44ee4e181d93e344124e3c88260cd168d\"" Jul 2 00:35:33.355260 containerd[1444]: time="2024-07-02T00:35:33.355188416Z" level=info msg="StopPodSandbox for \"b798bd5b0c7f3a6d3482321246f594534f1e455e2ba95619bc9840495362877c\"" Jul 2 00:35:33.433285 containerd[1444]: time="2024-07-02T00:35:33.433079564Z" level=error msg="StopPodSandbox for \"1a0b01236b28779688961a6e5a2e4fd44ee4e181d93e344124e3c88260cd168d\" failed" error="failed to destroy network for sandbox \"1a0b01236b28779688961a6e5a2e4fd44ee4e181d93e344124e3c88260cd168d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:35:33.434887 kubelet[2632]: E0702 00:35:33.434591 2632 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1a0b01236b28779688961a6e5a2e4fd44ee4e181d93e344124e3c88260cd168d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1a0b01236b28779688961a6e5a2e4fd44ee4e181d93e344124e3c88260cd168d" Jul 2 00:35:33.434887 kubelet[2632]: E0702 00:35:33.434775 2632 kuberuntime_manager.go:1381] "Failed 
to stop sandbox" podSandboxID={"Type":"containerd","ID":"1a0b01236b28779688961a6e5a2e4fd44ee4e181d93e344124e3c88260cd168d"} Jul 2 00:35:33.434887 kubelet[2632]: E0702 00:35:33.434823 2632 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f76e7cb0-2dbd-4d82-8219-6278834c7267\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1a0b01236b28779688961a6e5a2e4fd44ee4e181d93e344124e3c88260cd168d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 00:35:33.434887 kubelet[2632]: E0702 00:35:33.434859 2632 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f76e7cb0-2dbd-4d82-8219-6278834c7267\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1a0b01236b28779688961a6e5a2e4fd44ee4e181d93e344124e3c88260cd168d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-q8tqz" podUID="f76e7cb0-2dbd-4d82-8219-6278834c7267" Jul 2 00:35:33.439033 containerd[1444]: time="2024-07-02T00:35:33.438976839Z" level=error msg="StopPodSandbox for \"b798bd5b0c7f3a6d3482321246f594534f1e455e2ba95619bc9840495362877c\" failed" error="failed to destroy network for sandbox \"b798bd5b0c7f3a6d3482321246f594534f1e455e2ba95619bc9840495362877c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:35:33.439449 kubelet[2632]: E0702 00:35:33.439310 2632 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to 
destroy network for sandbox \"b798bd5b0c7f3a6d3482321246f594534f1e455e2ba95619bc9840495362877c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b798bd5b0c7f3a6d3482321246f594534f1e455e2ba95619bc9840495362877c" Jul 2 00:35:33.439449 kubelet[2632]: E0702 00:35:33.439355 2632 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b798bd5b0c7f3a6d3482321246f594534f1e455e2ba95619bc9840495362877c"} Jul 2 00:35:33.439449 kubelet[2632]: E0702 00:35:33.439397 2632 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f825e197-24d6-43c1-8001-acbd6a4ca977\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b798bd5b0c7f3a6d3482321246f594534f1e455e2ba95619bc9840495362877c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 00:35:33.439449 kubelet[2632]: E0702 00:35:33.439429 2632 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f825e197-24d6-43c1-8001-acbd6a4ca977\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b798bd5b0c7f3a6d3482321246f594534f1e455e2ba95619bc9840495362877c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9ksc4" podUID="f825e197-24d6-43c1-8001-acbd6a4ca977" Jul 2 00:35:35.349926 containerd[1444]: time="2024-07-02T00:35:35.349874078Z" level=info msg="StopPodSandbox for \"9282b1be0a3390b817103ba21e6911c3040d3eb0c0cbc2c471f8e7d110cfd3c6\"" Jul 2 00:35:35.353960 
containerd[1444]: time="2024-07-02T00:35:35.353687895Z" level=info msg="StopPodSandbox for \"f91170ed8c902e281c58992e0605f57f6a6f088fb13f4b3dd2c20e2a999f8215\"" Jul 2 00:35:35.439328 containerd[1444]: time="2024-07-02T00:35:35.438230541Z" level=error msg="StopPodSandbox for \"9282b1be0a3390b817103ba21e6911c3040d3eb0c0cbc2c471f8e7d110cfd3c6\" failed" error="failed to destroy network for sandbox \"9282b1be0a3390b817103ba21e6911c3040d3eb0c0cbc2c471f8e7d110cfd3c6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:35:35.439328 containerd[1444]: time="2024-07-02T00:35:35.439283558Z" level=error msg="StopPodSandbox for \"f91170ed8c902e281c58992e0605f57f6a6f088fb13f4b3dd2c20e2a999f8215\" failed" error="failed to destroy network for sandbox \"f91170ed8c902e281c58992e0605f57f6a6f088fb13f4b3dd2c20e2a999f8215\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:35:35.439973 kubelet[2632]: E0702 00:35:35.438865 2632 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9282b1be0a3390b817103ba21e6911c3040d3eb0c0cbc2c471f8e7d110cfd3c6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9282b1be0a3390b817103ba21e6911c3040d3eb0c0cbc2c471f8e7d110cfd3c6" Jul 2 00:35:35.439973 kubelet[2632]: E0702 00:35:35.438912 2632 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9282b1be0a3390b817103ba21e6911c3040d3eb0c0cbc2c471f8e7d110cfd3c6"} Jul 2 00:35:35.439973 kubelet[2632]: E0702 00:35:35.438962 2632 kuberuntime_manager.go:1081] 
"killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4aa0f677-0725-46c4-8993-0c9903cb9cb0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9282b1be0a3390b817103ba21e6911c3040d3eb0c0cbc2c471f8e7d110cfd3c6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 00:35:35.439973 kubelet[2632]: E0702 00:35:35.438998 2632 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4aa0f677-0725-46c4-8993-0c9903cb9cb0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9282b1be0a3390b817103ba21e6911c3040d3eb0c0cbc2c471f8e7d110cfd3c6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-wmfxs" podUID="4aa0f677-0725-46c4-8993-0c9903cb9cb0" Jul 2 00:35:35.441198 kubelet[2632]: E0702 00:35:35.439480 2632 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f91170ed8c902e281c58992e0605f57f6a6f088fb13f4b3dd2c20e2a999f8215\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f91170ed8c902e281c58992e0605f57f6a6f088fb13f4b3dd2c20e2a999f8215" Jul 2 00:35:35.441198 kubelet[2632]: E0702 00:35:35.439537 2632 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f91170ed8c902e281c58992e0605f57f6a6f088fb13f4b3dd2c20e2a999f8215"} Jul 2 00:35:35.441198 kubelet[2632]: E0702 00:35:35.439576 2632 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" 
err="failed to \"KillPodSandbox\" for \"a34c27ef-25a3-4bac-90f8-f587a5b80a52\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f91170ed8c902e281c58992e0605f57f6a6f088fb13f4b3dd2c20e2a999f8215\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 00:35:35.441198 kubelet[2632]: E0702 00:35:35.439626 2632 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a34c27ef-25a3-4bac-90f8-f587a5b80a52\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f91170ed8c902e281c58992e0605f57f6a6f088fb13f4b3dd2c20e2a999f8215\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7548bf8497-6bsv7" podUID="a34c27ef-25a3-4bac-90f8-f587a5b80a52" Jul 2 00:35:44.349092 containerd[1444]: time="2024-07-02T00:35:44.348916059Z" level=info msg="StopPodSandbox for \"b798bd5b0c7f3a6d3482321246f594534f1e455e2ba95619bc9840495362877c\"" Jul 2 00:35:44.473369 containerd[1444]: time="2024-07-02T00:35:44.472966917Z" level=error msg="StopPodSandbox for \"b798bd5b0c7f3a6d3482321246f594534f1e455e2ba95619bc9840495362877c\" failed" error="failed to destroy network for sandbox \"b798bd5b0c7f3a6d3482321246f594534f1e455e2ba95619bc9840495362877c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:35:44.473955 kubelet[2632]: E0702 00:35:44.473569 2632 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"b798bd5b0c7f3a6d3482321246f594534f1e455e2ba95619bc9840495362877c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b798bd5b0c7f3a6d3482321246f594534f1e455e2ba95619bc9840495362877c" Jul 2 00:35:44.473955 kubelet[2632]: E0702 00:35:44.473721 2632 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b798bd5b0c7f3a6d3482321246f594534f1e455e2ba95619bc9840495362877c"} Jul 2 00:35:44.473955 kubelet[2632]: E0702 00:35:44.473882 2632 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f825e197-24d6-43c1-8001-acbd6a4ca977\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b798bd5b0c7f3a6d3482321246f594534f1e455e2ba95619bc9840495362877c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 00:35:44.473955 kubelet[2632]: E0702 00:35:44.473923 2632 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f825e197-24d6-43c1-8001-acbd6a4ca977\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b798bd5b0c7f3a6d3482321246f594534f1e455e2ba95619bc9840495362877c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9ksc4" podUID="f825e197-24d6-43c1-8001-acbd6a4ca977" Jul 2 00:35:46.347652 containerd[1444]: time="2024-07-02T00:35:46.347613137Z" level=info msg="StopPodSandbox for \"f91170ed8c902e281c58992e0605f57f6a6f088fb13f4b3dd2c20e2a999f8215\"" Jul 2 00:35:46.413001 containerd[1444]: 
time="2024-07-02T00:35:46.412678424Z" level=error msg="StopPodSandbox for \"f91170ed8c902e281c58992e0605f57f6a6f088fb13f4b3dd2c20e2a999f8215\" failed" error="failed to destroy network for sandbox \"f91170ed8c902e281c58992e0605f57f6a6f088fb13f4b3dd2c20e2a999f8215\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:35:46.413259 kubelet[2632]: E0702 00:35:46.412958 2632 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f91170ed8c902e281c58992e0605f57f6a6f088fb13f4b3dd2c20e2a999f8215\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f91170ed8c902e281c58992e0605f57f6a6f088fb13f4b3dd2c20e2a999f8215" Jul 2 00:35:46.413259 kubelet[2632]: E0702 00:35:46.413018 2632 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f91170ed8c902e281c58992e0605f57f6a6f088fb13f4b3dd2c20e2a999f8215"} Jul 2 00:35:46.413259 kubelet[2632]: E0702 00:35:46.413108 2632 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a34c27ef-25a3-4bac-90f8-f587a5b80a52\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f91170ed8c902e281c58992e0605f57f6a6f088fb13f4b3dd2c20e2a999f8215\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 00:35:46.413259 kubelet[2632]: E0702 00:35:46.413148 2632 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a34c27ef-25a3-4bac-90f8-f587a5b80a52\" with KillPodSandboxError: \"rpc 
error: code = Unknown desc = failed to destroy network for sandbox \\\"f91170ed8c902e281c58992e0605f57f6a6f088fb13f4b3dd2c20e2a999f8215\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7548bf8497-6bsv7" podUID="a34c27ef-25a3-4bac-90f8-f587a5b80a52" Jul 2 00:35:47.007292 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2579291578.mount: Deactivated successfully. Jul 2 00:35:47.202538 containerd[1444]: time="2024-07-02T00:35:47.202200887Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:35:47.205970 containerd[1444]: time="2024-07-02T00:35:47.205292218Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=115238750" Jul 2 00:35:47.205970 containerd[1444]: time="2024-07-02T00:35:47.205866087Z" level=info msg="ImageCreate event name:\"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:35:47.210972 containerd[1444]: time="2024-07-02T00:35:47.210913040Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:35:47.211648 containerd[1444]: time="2024-07-02T00:35:47.211564767Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"115238612\" in 26.670019412s" Jul 2 00:35:47.211648 containerd[1444]: 
time="2024-07-02T00:35:47.211606891Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\"" Jul 2 00:35:47.305512 containerd[1444]: time="2024-07-02T00:35:47.305262842Z" level=info msg="CreateContainer within sandbox \"20d5de4acba00361d37369105f74a979d943640f30ae5726d85a668381aeeea9\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 2 00:35:47.351575 containerd[1444]: time="2024-07-02T00:35:47.351483960Z" level=info msg="StopPodSandbox for \"9282b1be0a3390b817103ba21e6911c3040d3eb0c0cbc2c471f8e7d110cfd3c6\"" Jul 2 00:35:47.369614 containerd[1444]: time="2024-07-02T00:35:47.369440453Z" level=info msg="CreateContainer within sandbox \"20d5de4acba00361d37369105f74a979d943640f30ae5726d85a668381aeeea9\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"57c9b1c0250aa099e687d112de8a63ad8c3dfffc3aeab5b6144e19df943864d7\"" Jul 2 00:35:47.387018 containerd[1444]: time="2024-07-02T00:35:47.386845074Z" level=info msg="StartContainer for \"57c9b1c0250aa099e687d112de8a63ad8c3dfffc3aeab5b6144e19df943864d7\"" Jul 2 00:35:47.414524 containerd[1444]: time="2024-07-02T00:35:47.414414927Z" level=error msg="StopPodSandbox for \"9282b1be0a3390b817103ba21e6911c3040d3eb0c0cbc2c471f8e7d110cfd3c6\" failed" error="failed to destroy network for sandbox \"9282b1be0a3390b817103ba21e6911c3040d3eb0c0cbc2c471f8e7d110cfd3c6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:35:47.414743 kubelet[2632]: E0702 00:35:47.414709 2632 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9282b1be0a3390b817103ba21e6911c3040d3eb0c0cbc2c471f8e7d110cfd3c6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9282b1be0a3390b817103ba21e6911c3040d3eb0c0cbc2c471f8e7d110cfd3c6" Jul 2 00:35:47.415044 kubelet[2632]: E0702 00:35:47.414771 2632 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9282b1be0a3390b817103ba21e6911c3040d3eb0c0cbc2c471f8e7d110cfd3c6"} Jul 2 00:35:47.415044 kubelet[2632]: E0702 00:35:47.414825 2632 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4aa0f677-0725-46c4-8993-0c9903cb9cb0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9282b1be0a3390b817103ba21e6911c3040d3eb0c0cbc2c471f8e7d110cfd3c6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 00:35:47.415044 kubelet[2632]: E0702 00:35:47.414865 2632 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4aa0f677-0725-46c4-8993-0c9903cb9cb0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9282b1be0a3390b817103ba21e6911c3040d3eb0c0cbc2c471f8e7d110cfd3c6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-wmfxs" podUID="4aa0f677-0725-46c4-8993-0c9903cb9cb0" Jul 2 00:35:47.487364 systemd[1]: Started cri-containerd-57c9b1c0250aa099e687d112de8a63ad8c3dfffc3aeab5b6144e19df943864d7.scope - libcontainer container 57c9b1c0250aa099e687d112de8a63ad8c3dfffc3aeab5b6144e19df943864d7. 
Jul 2 00:35:47.549117 containerd[1444]: time="2024-07-02T00:35:47.548146490Z" level=info msg="StartContainer for \"57c9b1c0250aa099e687d112de8a63ad8c3dfffc3aeab5b6144e19df943864d7\" returns successfully" Jul 2 00:35:47.681402 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 2 00:35:47.687937 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jul 2 00:35:48.349039 containerd[1444]: time="2024-07-02T00:35:48.348625025Z" level=info msg="StopPodSandbox for \"1a0b01236b28779688961a6e5a2e4fd44ee4e181d93e344124e3c88260cd168d\"" Jul 2 00:35:48.490986 kubelet[2632]: I0702 00:35:48.490881 2632 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-4mjsk" podStartSLOduration=3.294632337 podStartE2EDuration="44.454874968s" podCreationTimestamp="2024-07-02 00:35:04 +0000 UTC" firstStartedPulling="2024-07-02 00:35:06.051800357 +0000 UTC m=+22.935044304" lastFinishedPulling="2024-07-02 00:35:47.212042988 +0000 UTC m=+64.095286935" observedRunningTime="2024-07-02 00:35:47.87681744 +0000 UTC m=+64.760061387" watchObservedRunningTime="2024-07-02 00:35:48.454874968 +0000 UTC m=+65.338118965" Jul 2 00:35:48.859004 systemd[1]: run-containerd-runc-k8s.io-57c9b1c0250aa099e687d112de8a63ad8c3dfffc3aeab5b6144e19df943864d7-runc.rTc6Co.mount: Deactivated successfully. Jul 2 00:35:49.400414 containerd[1444]: 2024-07-02 00:35:48.455 [INFO][3807] k8s.go 608: Cleaning up netns ContainerID="1a0b01236b28779688961a6e5a2e4fd44ee4e181d93e344124e3c88260cd168d" Jul 2 00:35:49.400414 containerd[1444]: 2024-07-02 00:35:48.456 [INFO][3807] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="1a0b01236b28779688961a6e5a2e4fd44ee4e181d93e344124e3c88260cd168d" iface="eth0" netns="/var/run/netns/cni-58693c33-e579-b66a-9954-f07d6d03f5d9" Jul 2 00:35:49.400414 containerd[1444]: 2024-07-02 00:35:48.462 [INFO][3807] dataplane_linux.go 541: Entered netns, deleting veth. 
ContainerID="1a0b01236b28779688961a6e5a2e4fd44ee4e181d93e344124e3c88260cd168d" iface="eth0" netns="/var/run/netns/cni-58693c33-e579-b66a-9954-f07d6d03f5d9" Jul 2 00:35:49.400414 containerd[1444]: 2024-07-02 00:35:48.465 [INFO][3807] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="1a0b01236b28779688961a6e5a2e4fd44ee4e181d93e344124e3c88260cd168d" iface="eth0" netns="/var/run/netns/cni-58693c33-e579-b66a-9954-f07d6d03f5d9" Jul 2 00:35:49.400414 containerd[1444]: 2024-07-02 00:35:48.465 [INFO][3807] k8s.go 615: Releasing IP address(es) ContainerID="1a0b01236b28779688961a6e5a2e4fd44ee4e181d93e344124e3c88260cd168d" Jul 2 00:35:49.400414 containerd[1444]: 2024-07-02 00:35:48.466 [INFO][3807] utils.go 188: Calico CNI releasing IP address ContainerID="1a0b01236b28779688961a6e5a2e4fd44ee4e181d93e344124e3c88260cd168d" Jul 2 00:35:49.400414 containerd[1444]: 2024-07-02 00:35:49.349 [INFO][3813] ipam_plugin.go 411: Releasing address using handleID ContainerID="1a0b01236b28779688961a6e5a2e4fd44ee4e181d93e344124e3c88260cd168d" HandleID="k8s-pod-network.1a0b01236b28779688961a6e5a2e4fd44ee4e181d93e344124e3c88260cd168d" Workload="ci--3975--1--1--4--69569a1933.novalocal-k8s-coredns--76f75df574--q8tqz-eth0" Jul 2 00:35:49.400414 containerd[1444]: 2024-07-02 00:35:49.352 [INFO][3813] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:35:49.400414 containerd[1444]: 2024-07-02 00:35:49.357 [INFO][3813] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:35:49.400414 containerd[1444]: 2024-07-02 00:35:49.389 [WARNING][3813] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1a0b01236b28779688961a6e5a2e4fd44ee4e181d93e344124e3c88260cd168d" HandleID="k8s-pod-network.1a0b01236b28779688961a6e5a2e4fd44ee4e181d93e344124e3c88260cd168d" Workload="ci--3975--1--1--4--69569a1933.novalocal-k8s-coredns--76f75df574--q8tqz-eth0" Jul 2 00:35:49.400414 containerd[1444]: 2024-07-02 00:35:49.389 [INFO][3813] ipam_plugin.go 439: Releasing address using workloadID ContainerID="1a0b01236b28779688961a6e5a2e4fd44ee4e181d93e344124e3c88260cd168d" HandleID="k8s-pod-network.1a0b01236b28779688961a6e5a2e4fd44ee4e181d93e344124e3c88260cd168d" Workload="ci--3975--1--1--4--69569a1933.novalocal-k8s-coredns--76f75df574--q8tqz-eth0" Jul 2 00:35:49.400414 containerd[1444]: 2024-07-02 00:35:49.393 [INFO][3813] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:35:49.400414 containerd[1444]: 2024-07-02 00:35:49.395 [INFO][3807] k8s.go 621: Teardown processing complete. ContainerID="1a0b01236b28779688961a6e5a2e4fd44ee4e181d93e344124e3c88260cd168d" Jul 2 00:35:49.400414 containerd[1444]: time="2024-07-02T00:35:49.400316921Z" level=info msg="TearDown network for sandbox \"1a0b01236b28779688961a6e5a2e4fd44ee4e181d93e344124e3c88260cd168d\" successfully" Jul 2 00:35:49.400414 containerd[1444]: time="2024-07-02T00:35:49.400352024Z" level=info msg="StopPodSandbox for \"1a0b01236b28779688961a6e5a2e4fd44ee4e181d93e344124e3c88260cd168d\" returns successfully" Jul 2 00:35:49.407186 systemd[1]: run-netns-cni\x2d58693c33\x2de579\x2db66a\x2d9954\x2df07d6d03f5d9.mount: Deactivated successfully. 
Jul 2 00:35:49.451668 containerd[1444]: time="2024-07-02T00:35:49.451614414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-q8tqz,Uid:f76e7cb0-2dbd-4d82-8219-6278834c7267,Namespace:kube-system,Attempt:1,}"
Jul 2 00:35:50.164140 systemd-networkd[1365]: vxlan.calico: Link UP
Jul 2 00:35:50.164149 systemd-networkd[1365]: vxlan.calico: Gained carrier
Jul 2 00:35:50.177212 systemd-networkd[1365]: cali9f001cf372d: Link UP
Jul 2 00:35:50.178439 systemd-networkd[1365]: cali9f001cf372d: Gained carrier
Jul 2 00:35:50.210417 containerd[1444]: 2024-07-02 00:35:49.970 [INFO][3973] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975--1--1--4--69569a1933.novalocal-k8s-coredns--76f75df574--q8tqz-eth0 coredns-76f75df574- kube-system f76e7cb0-2dbd-4d82-8219-6278834c7267 768 0 2024-07-02 00:34:56 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3975-1-1-4-69569a1933.novalocal coredns-76f75df574-q8tqz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9f001cf372d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="7f8ea84f3fb9099b7a2d79ae07206ae1857f9e84bb6162883874c109b94d0e63" Namespace="kube-system" Pod="coredns-76f75df574-q8tqz" WorkloadEndpoint="ci--3975--1--1--4--69569a1933.novalocal-k8s-coredns--76f75df574--q8tqz-"
Jul 2 00:35:50.210417 containerd[1444]: 2024-07-02 00:35:49.971 [INFO][3973] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7f8ea84f3fb9099b7a2d79ae07206ae1857f9e84bb6162883874c109b94d0e63" Namespace="kube-system" Pod="coredns-76f75df574-q8tqz" WorkloadEndpoint="ci--3975--1--1--4--69569a1933.novalocal-k8s-coredns--76f75df574--q8tqz-eth0"
Jul 2 00:35:50.210417 containerd[1444]: 2024-07-02 00:35:50.054 [INFO][4000] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7f8ea84f3fb9099b7a2d79ae07206ae1857f9e84bb6162883874c109b94d0e63" HandleID="k8s-pod-network.7f8ea84f3fb9099b7a2d79ae07206ae1857f9e84bb6162883874c109b94d0e63" Workload="ci--3975--1--1--4--69569a1933.novalocal-k8s-coredns--76f75df574--q8tqz-eth0"
Jul 2 00:35:50.210417 containerd[1444]: 2024-07-02 00:35:50.075 [INFO][4000] ipam_plugin.go 264: Auto assigning IP ContainerID="7f8ea84f3fb9099b7a2d79ae07206ae1857f9e84bb6162883874c109b94d0e63" HandleID="k8s-pod-network.7f8ea84f3fb9099b7a2d79ae07206ae1857f9e84bb6162883874c109b94d0e63" Workload="ci--3975--1--1--4--69569a1933.novalocal-k8s-coredns--76f75df574--q8tqz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00059da90), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3975-1-1-4-69569a1933.novalocal", "pod":"coredns-76f75df574-q8tqz", "timestamp":"2024-07-02 00:35:50.054923375 +0000 UTC"}, Hostname:"ci-3975-1-1-4-69569a1933.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 2 00:35:50.210417 containerd[1444]: 2024-07-02 00:35:50.075 [INFO][4000] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Jul 2 00:35:50.210417 containerd[1444]: 2024-07-02 00:35:50.075 [INFO][4000] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Jul 2 00:35:50.210417 containerd[1444]: 2024-07-02 00:35:50.075 [INFO][4000] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975-1-1-4-69569a1933.novalocal'
Jul 2 00:35:50.210417 containerd[1444]: 2024-07-02 00:35:50.081 [INFO][4000] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7f8ea84f3fb9099b7a2d79ae07206ae1857f9e84bb6162883874c109b94d0e63" host="ci-3975-1-1-4-69569a1933.novalocal"
Jul 2 00:35:50.210417 containerd[1444]: 2024-07-02 00:35:50.110 [INFO][4000] ipam.go 372: Looking up existing affinities for host host="ci-3975-1-1-4-69569a1933.novalocal"
Jul 2 00:35:50.210417 containerd[1444]: 2024-07-02 00:35:50.115 [INFO][4000] ipam.go 489: Trying affinity for 192.168.36.192/26 host="ci-3975-1-1-4-69569a1933.novalocal"
Jul 2 00:35:50.210417 containerd[1444]: 2024-07-02 00:35:50.117 [INFO][4000] ipam.go 155: Attempting to load block cidr=192.168.36.192/26 host="ci-3975-1-1-4-69569a1933.novalocal"
Jul 2 00:35:50.210417 containerd[1444]: 2024-07-02 00:35:50.120 [INFO][4000] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.36.192/26 host="ci-3975-1-1-4-69569a1933.novalocal"
Jul 2 00:35:50.210417 containerd[1444]: 2024-07-02 00:35:50.120 [INFO][4000] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.36.192/26 handle="k8s-pod-network.7f8ea84f3fb9099b7a2d79ae07206ae1857f9e84bb6162883874c109b94d0e63" host="ci-3975-1-1-4-69569a1933.novalocal"
Jul 2 00:35:50.210417 containerd[1444]: 2024-07-02 00:35:50.126 [INFO][4000] ipam.go 1685: Creating new handle: k8s-pod-network.7f8ea84f3fb9099b7a2d79ae07206ae1857f9e84bb6162883874c109b94d0e63
Jul 2 00:35:50.210417 containerd[1444]: 2024-07-02 00:35:50.136 [INFO][4000] ipam.go 1203: Writing block in order to claim IPs block=192.168.36.192/26 handle="k8s-pod-network.7f8ea84f3fb9099b7a2d79ae07206ae1857f9e84bb6162883874c109b94d0e63" host="ci-3975-1-1-4-69569a1933.novalocal"
Jul 2 00:35:50.210417 containerd[1444]: 2024-07-02 00:35:50.147 [INFO][4000] ipam.go 1216: Successfully claimed IPs: [192.168.36.193/26] block=192.168.36.192/26 handle="k8s-pod-network.7f8ea84f3fb9099b7a2d79ae07206ae1857f9e84bb6162883874c109b94d0e63" host="ci-3975-1-1-4-69569a1933.novalocal"
Jul 2 00:35:50.210417 containerd[1444]: 2024-07-02 00:35:50.147 [INFO][4000] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.36.193/26] handle="k8s-pod-network.7f8ea84f3fb9099b7a2d79ae07206ae1857f9e84bb6162883874c109b94d0e63" host="ci-3975-1-1-4-69569a1933.novalocal"
Jul 2 00:35:50.210417 containerd[1444]: 2024-07-02 00:35:50.147 [INFO][4000] ipam_plugin.go 373: Released host-wide IPAM lock.
Jul 2 00:35:50.210417 containerd[1444]: 2024-07-02 00:35:50.147 [INFO][4000] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.36.193/26] IPv6=[] ContainerID="7f8ea84f3fb9099b7a2d79ae07206ae1857f9e84bb6162883874c109b94d0e63" HandleID="k8s-pod-network.7f8ea84f3fb9099b7a2d79ae07206ae1857f9e84bb6162883874c109b94d0e63" Workload="ci--3975--1--1--4--69569a1933.novalocal-k8s-coredns--76f75df574--q8tqz-eth0"
Jul 2 00:35:50.211322 containerd[1444]: 2024-07-02 00:35:50.154 [INFO][3973] k8s.go 386: Populated endpoint ContainerID="7f8ea84f3fb9099b7a2d79ae07206ae1857f9e84bb6162883874c109b94d0e63" Namespace="kube-system" Pod="coredns-76f75df574-q8tqz" WorkloadEndpoint="ci--3975--1--1--4--69569a1933.novalocal-k8s-coredns--76f75df574--q8tqz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--1--1--4--69569a1933.novalocal-k8s-coredns--76f75df574--q8tqz-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"f76e7cb0-2dbd-4d82-8219-6278834c7267", ResourceVersion:"768", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 34, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-1-1-4-69569a1933.novalocal", ContainerID:"", Pod:"coredns-76f75df574-q8tqz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.36.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9f001cf372d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jul 2 00:35:50.211322 containerd[1444]: 2024-07-02 00:35:50.155 [INFO][3973] k8s.go 387: Calico CNI using IPs: [192.168.36.193/32] ContainerID="7f8ea84f3fb9099b7a2d79ae07206ae1857f9e84bb6162883874c109b94d0e63" Namespace="kube-system" Pod="coredns-76f75df574-q8tqz" WorkloadEndpoint="ci--3975--1--1--4--69569a1933.novalocal-k8s-coredns--76f75df574--q8tqz-eth0"
Jul 2 00:35:50.211322 containerd[1444]: 2024-07-02 00:35:50.156 [INFO][3973] dataplane_linux.go 68: Setting the host side veth name to cali9f001cf372d ContainerID="7f8ea84f3fb9099b7a2d79ae07206ae1857f9e84bb6162883874c109b94d0e63" Namespace="kube-system" Pod="coredns-76f75df574-q8tqz" WorkloadEndpoint="ci--3975--1--1--4--69569a1933.novalocal-k8s-coredns--76f75df574--q8tqz-eth0"
Jul 2 00:35:50.211322 containerd[1444]: 2024-07-02 00:35:50.179 [INFO][3973] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="7f8ea84f3fb9099b7a2d79ae07206ae1857f9e84bb6162883874c109b94d0e63" Namespace="kube-system" Pod="coredns-76f75df574-q8tqz" WorkloadEndpoint="ci--3975--1--1--4--69569a1933.novalocal-k8s-coredns--76f75df574--q8tqz-eth0"
Jul 2 00:35:50.211322 containerd[1444]: 2024-07-02 00:35:50.180 [INFO][3973] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7f8ea84f3fb9099b7a2d79ae07206ae1857f9e84bb6162883874c109b94d0e63" Namespace="kube-system" Pod="coredns-76f75df574-q8tqz" WorkloadEndpoint="ci--3975--1--1--4--69569a1933.novalocal-k8s-coredns--76f75df574--q8tqz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--1--1--4--69569a1933.novalocal-k8s-coredns--76f75df574--q8tqz-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"f76e7cb0-2dbd-4d82-8219-6278834c7267", ResourceVersion:"768", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 34, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-1-1-4-69569a1933.novalocal", ContainerID:"7f8ea84f3fb9099b7a2d79ae07206ae1857f9e84bb6162883874c109b94d0e63", Pod:"coredns-76f75df574-q8tqz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.36.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9f001cf372d", MAC:"9a:41:04:3f:5e:33", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jul 2 00:35:50.211322 containerd[1444]: 2024-07-02 00:35:50.201 [INFO][3973] k8s.go 500: Wrote updated endpoint to datastore ContainerID="7f8ea84f3fb9099b7a2d79ae07206ae1857f9e84bb6162883874c109b94d0e63" Namespace="kube-system" Pod="coredns-76f75df574-q8tqz" WorkloadEndpoint="ci--3975--1--1--4--69569a1933.novalocal-k8s-coredns--76f75df574--q8tqz-eth0"
Jul 2 00:35:50.318122 containerd[1444]: time="2024-07-02T00:35:50.317238045Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:35:50.318122 containerd[1444]: time="2024-07-02T00:35:50.317437259Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:35:50.318122 containerd[1444]: time="2024-07-02T00:35:50.317511461Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:35:50.318122 containerd[1444]: time="2024-07-02T00:35:50.317785307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:35:50.361266 systemd[1]: Started cri-containerd-7f8ea84f3fb9099b7a2d79ae07206ae1857f9e84bb6162883874c109b94d0e63.scope - libcontainer container 7f8ea84f3fb9099b7a2d79ae07206ae1857f9e84bb6162883874c109b94d0e63.
Jul 2 00:35:50.432321 containerd[1444]: time="2024-07-02T00:35:50.432141746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-q8tqz,Uid:f76e7cb0-2dbd-4d82-8219-6278834c7267,Namespace:kube-system,Attempt:1,} returns sandbox id \"7f8ea84f3fb9099b7a2d79ae07206ae1857f9e84bb6162883874c109b94d0e63\""
Jul 2 00:35:50.442600 containerd[1444]: time="2024-07-02T00:35:50.442274552Z" level=info msg="CreateContainer within sandbox \"7f8ea84f3fb9099b7a2d79ae07206ae1857f9e84bb6162883874c109b94d0e63\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 2 00:35:50.483463 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount335314086.mount: Deactivated successfully.
Jul 2 00:35:50.500303 containerd[1444]: time="2024-07-02T00:35:50.500189458Z" level=info msg="CreateContainer within sandbox \"7f8ea84f3fb9099b7a2d79ae07206ae1857f9e84bb6162883874c109b94d0e63\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d312b3cf5ac80abc2ec0b4ac291286e4e87314083931a5d89c1722c3408d4968\""
Jul 2 00:35:50.501113 containerd[1444]: time="2024-07-02T00:35:50.501047972Z" level=info msg="StartContainer for \"d312b3cf5ac80abc2ec0b4ac291286e4e87314083931a5d89c1722c3408d4968\""
Jul 2 00:35:50.539271 systemd[1]: Started cri-containerd-d312b3cf5ac80abc2ec0b4ac291286e4e87314083931a5d89c1722c3408d4968.scope - libcontainer container d312b3cf5ac80abc2ec0b4ac291286e4e87314083931a5d89c1722c3408d4968.
Jul 2 00:35:50.579110 containerd[1444]: time="2024-07-02T00:35:50.579053892Z" level=info msg="StartContainer for \"d312b3cf5ac80abc2ec0b4ac291286e4e87314083931a5d89c1722c3408d4968\" returns successfully"
Jul 2 00:35:50.852872 kubelet[2632]: I0702 00:35:50.852802 2632 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-q8tqz" podStartSLOduration=54.852726862 podStartE2EDuration="54.852726862s" podCreationTimestamp="2024-07-02 00:34:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:35:50.851488426 +0000 UTC m=+67.734732383" watchObservedRunningTime="2024-07-02 00:35:50.852726862 +0000 UTC m=+67.735970810"
Jul 2 00:35:51.297553 systemd-networkd[1365]: cali9f001cf372d: Gained IPv6LL
Jul 2 00:35:52.194753 systemd-networkd[1365]: vxlan.calico: Gained IPv6LL
Jul 2 00:35:55.352242 containerd[1444]: time="2024-07-02T00:35:55.351476922Z" level=info msg="StopPodSandbox for \"b798bd5b0c7f3a6d3482321246f594534f1e455e2ba95619bc9840495362877c\""
Jul 2 00:35:55.520114 containerd[1444]: 2024-07-02 00:35:55.467 [INFO][4186] k8s.go 608: Cleaning up netns ContainerID="b798bd5b0c7f3a6d3482321246f594534f1e455e2ba95619bc9840495362877c"
Jul 2 00:35:55.520114 containerd[1444]: 2024-07-02 00:35:55.467 [INFO][4186] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="b798bd5b0c7f3a6d3482321246f594534f1e455e2ba95619bc9840495362877c" iface="eth0" netns="/var/run/netns/cni-d6d160cc-ae7e-e81e-613c-c0f0e8543603"
Jul 2 00:35:55.520114 containerd[1444]: 2024-07-02 00:35:55.468 [INFO][4186] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="b798bd5b0c7f3a6d3482321246f594534f1e455e2ba95619bc9840495362877c" iface="eth0" netns="/var/run/netns/cni-d6d160cc-ae7e-e81e-613c-c0f0e8543603"
Jul 2 00:35:55.520114 containerd[1444]: 2024-07-02 00:35:55.468 [INFO][4186] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="b798bd5b0c7f3a6d3482321246f594534f1e455e2ba95619bc9840495362877c" iface="eth0" netns="/var/run/netns/cni-d6d160cc-ae7e-e81e-613c-c0f0e8543603"
Jul 2 00:35:55.520114 containerd[1444]: 2024-07-02 00:35:55.468 [INFO][4186] k8s.go 615: Releasing IP address(es) ContainerID="b798bd5b0c7f3a6d3482321246f594534f1e455e2ba95619bc9840495362877c"
Jul 2 00:35:55.520114 containerd[1444]: 2024-07-02 00:35:55.468 [INFO][4186] utils.go 188: Calico CNI releasing IP address ContainerID="b798bd5b0c7f3a6d3482321246f594534f1e455e2ba95619bc9840495362877c"
Jul 2 00:35:55.520114 containerd[1444]: 2024-07-02 00:35:55.504 [INFO][4199] ipam_plugin.go 411: Releasing address using handleID ContainerID="b798bd5b0c7f3a6d3482321246f594534f1e455e2ba95619bc9840495362877c" HandleID="k8s-pod-network.b798bd5b0c7f3a6d3482321246f594534f1e455e2ba95619bc9840495362877c" Workload="ci--3975--1--1--4--69569a1933.novalocal-k8s-csi--node--driver--9ksc4-eth0"
Jul 2 00:35:55.520114 containerd[1444]: 2024-07-02 00:35:55.504 [INFO][4199] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Jul 2 00:35:55.520114 containerd[1444]: 2024-07-02 00:35:55.504 [INFO][4199] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Jul 2 00:35:55.520114 containerd[1444]: 2024-07-02 00:35:55.511 [WARNING][4199] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="b798bd5b0c7f3a6d3482321246f594534f1e455e2ba95619bc9840495362877c" HandleID="k8s-pod-network.b798bd5b0c7f3a6d3482321246f594534f1e455e2ba95619bc9840495362877c" Workload="ci--3975--1--1--4--69569a1933.novalocal-k8s-csi--node--driver--9ksc4-eth0"
Jul 2 00:35:55.520114 containerd[1444]: 2024-07-02 00:35:55.512 [INFO][4199] ipam_plugin.go 439: Releasing address using workloadID ContainerID="b798bd5b0c7f3a6d3482321246f594534f1e455e2ba95619bc9840495362877c" HandleID="k8s-pod-network.b798bd5b0c7f3a6d3482321246f594534f1e455e2ba95619bc9840495362877c" Workload="ci--3975--1--1--4--69569a1933.novalocal-k8s-csi--node--driver--9ksc4-eth0"
Jul 2 00:35:55.520114 containerd[1444]: 2024-07-02 00:35:55.513 [INFO][4199] ipam_plugin.go 373: Released host-wide IPAM lock.
Jul 2 00:35:55.520114 containerd[1444]: 2024-07-02 00:35:55.515 [INFO][4186] k8s.go 621: Teardown processing complete. ContainerID="b798bd5b0c7f3a6d3482321246f594534f1e455e2ba95619bc9840495362877c"
Jul 2 00:35:55.521747 containerd[1444]: time="2024-07-02T00:35:55.520805880Z" level=info msg="TearDown network for sandbox \"b798bd5b0c7f3a6d3482321246f594534f1e455e2ba95619bc9840495362877c\" successfully"
Jul 2 00:35:55.521747 containerd[1444]: time="2024-07-02T00:35:55.520840704Z" level=info msg="StopPodSandbox for \"b798bd5b0c7f3a6d3482321246f594534f1e455e2ba95619bc9840495362877c\" returns successfully"
Jul 2 00:35:55.522752 containerd[1444]: time="2024-07-02T00:35:55.522090607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9ksc4,Uid:f825e197-24d6-43c1-8001-acbd6a4ca977,Namespace:calico-system,Attempt:1,}"
Jul 2 00:35:55.521897 systemd[1]: run-netns-cni\x2dd6d160cc\x2dae7e\x2de81e\x2d613c\x2dc0f0e8543603.mount: Deactivated successfully.
Jul 2 00:35:55.677294 systemd-networkd[1365]: cali1931376c1a8: Link UP
Jul 2 00:35:55.678048 systemd-networkd[1365]: cali1931376c1a8: Gained carrier
Jul 2 00:35:55.698480 containerd[1444]: 2024-07-02 00:35:55.590 [INFO][4206] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975--1--1--4--69569a1933.novalocal-k8s-csi--node--driver--9ksc4-eth0 csi-node-driver- calico-system f825e197-24d6-43c1-8001-acbd6a4ca977 800 0 2024-07-02 00:35:04 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7d7f6c786c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ci-3975-1-1-4-69569a1933.novalocal csi-node-driver-9ksc4 eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali1931376c1a8 [] []}} ContainerID="d6a6c44db44eecc43ded1cbc7c36325205f99310c206c8caab1bfda17b68b11a" Namespace="calico-system" Pod="csi-node-driver-9ksc4" WorkloadEndpoint="ci--3975--1--1--4--69569a1933.novalocal-k8s-csi--node--driver--9ksc4-"
Jul 2 00:35:55.698480 containerd[1444]: 2024-07-02 00:35:55.590 [INFO][4206] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d6a6c44db44eecc43ded1cbc7c36325205f99310c206c8caab1bfda17b68b11a" Namespace="calico-system" Pod="csi-node-driver-9ksc4" WorkloadEndpoint="ci--3975--1--1--4--69569a1933.novalocal-k8s-csi--node--driver--9ksc4-eth0"
Jul 2 00:35:55.698480 containerd[1444]: 2024-07-02 00:35:55.622 [INFO][4219] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d6a6c44db44eecc43ded1cbc7c36325205f99310c206c8caab1bfda17b68b11a" HandleID="k8s-pod-network.d6a6c44db44eecc43ded1cbc7c36325205f99310c206c8caab1bfda17b68b11a" Workload="ci--3975--1--1--4--69569a1933.novalocal-k8s-csi--node--driver--9ksc4-eth0"
Jul 2 00:35:55.698480 containerd[1444]: 2024-07-02 00:35:55.632 [INFO][4219] ipam_plugin.go 264: Auto assigning IP ContainerID="d6a6c44db44eecc43ded1cbc7c36325205f99310c206c8caab1bfda17b68b11a" HandleID="k8s-pod-network.d6a6c44db44eecc43ded1cbc7c36325205f99310c206c8caab1bfda17b68b11a" Workload="ci--3975--1--1--4--69569a1933.novalocal-k8s-csi--node--driver--9ksc4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000267d00), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3975-1-1-4-69569a1933.novalocal", "pod":"csi-node-driver-9ksc4", "timestamp":"2024-07-02 00:35:55.622645236 +0000 UTC"}, Hostname:"ci-3975-1-1-4-69569a1933.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 2 00:35:55.698480 containerd[1444]: 2024-07-02 00:35:55.633 [INFO][4219] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Jul 2 00:35:55.698480 containerd[1444]: 2024-07-02 00:35:55.633 [INFO][4219] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Jul 2 00:35:55.698480 containerd[1444]: 2024-07-02 00:35:55.633 [INFO][4219] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975-1-1-4-69569a1933.novalocal'
Jul 2 00:35:55.698480 containerd[1444]: 2024-07-02 00:35:55.635 [INFO][4219] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d6a6c44db44eecc43ded1cbc7c36325205f99310c206c8caab1bfda17b68b11a" host="ci-3975-1-1-4-69569a1933.novalocal"
Jul 2 00:35:55.698480 containerd[1444]: 2024-07-02 00:35:55.640 [INFO][4219] ipam.go 372: Looking up existing affinities for host host="ci-3975-1-1-4-69569a1933.novalocal"
Jul 2 00:35:55.698480 containerd[1444]: 2024-07-02 00:35:55.652 [INFO][4219] ipam.go 489: Trying affinity for 192.168.36.192/26 host="ci-3975-1-1-4-69569a1933.novalocal"
Jul 2 00:35:55.698480 containerd[1444]: 2024-07-02 00:35:55.654 [INFO][4219] ipam.go 155: Attempting to load block cidr=192.168.36.192/26 host="ci-3975-1-1-4-69569a1933.novalocal"
Jul 2 00:35:55.698480 containerd[1444]: 2024-07-02 00:35:55.656 [INFO][4219] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.36.192/26 host="ci-3975-1-1-4-69569a1933.novalocal"
Jul 2 00:35:55.698480 containerd[1444]: 2024-07-02 00:35:55.657 [INFO][4219] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.36.192/26 handle="k8s-pod-network.d6a6c44db44eecc43ded1cbc7c36325205f99310c206c8caab1bfda17b68b11a" host="ci-3975-1-1-4-69569a1933.novalocal"
Jul 2 00:35:55.698480 containerd[1444]: 2024-07-02 00:35:55.658 [INFO][4219] ipam.go 1685: Creating new handle: k8s-pod-network.d6a6c44db44eecc43ded1cbc7c36325205f99310c206c8caab1bfda17b68b11a
Jul 2 00:35:55.698480 containerd[1444]: 2024-07-02 00:35:55.664 [INFO][4219] ipam.go 1203: Writing block in order to claim IPs block=192.168.36.192/26 handle="k8s-pod-network.d6a6c44db44eecc43ded1cbc7c36325205f99310c206c8caab1bfda17b68b11a" host="ci-3975-1-1-4-69569a1933.novalocal"
Jul 2 00:35:55.698480 containerd[1444]: 2024-07-02 00:35:55.670 [INFO][4219] ipam.go 1216: Successfully claimed IPs: [192.168.36.194/26] block=192.168.36.192/26 handle="k8s-pod-network.d6a6c44db44eecc43ded1cbc7c36325205f99310c206c8caab1bfda17b68b11a" host="ci-3975-1-1-4-69569a1933.novalocal"
Jul 2 00:35:55.698480 containerd[1444]: 2024-07-02 00:35:55.670 [INFO][4219] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.36.194/26] handle="k8s-pod-network.d6a6c44db44eecc43ded1cbc7c36325205f99310c206c8caab1bfda17b68b11a" host="ci-3975-1-1-4-69569a1933.novalocal"
Jul 2 00:35:55.698480 containerd[1444]: 2024-07-02 00:35:55.670 [INFO][4219] ipam_plugin.go 373: Released host-wide IPAM lock.
Jul 2 00:35:55.698480 containerd[1444]: 2024-07-02 00:35:55.670 [INFO][4219] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.36.194/26] IPv6=[] ContainerID="d6a6c44db44eecc43ded1cbc7c36325205f99310c206c8caab1bfda17b68b11a" HandleID="k8s-pod-network.d6a6c44db44eecc43ded1cbc7c36325205f99310c206c8caab1bfda17b68b11a" Workload="ci--3975--1--1--4--69569a1933.novalocal-k8s-csi--node--driver--9ksc4-eth0"
Jul 2 00:35:55.704051 containerd[1444]: 2024-07-02 00:35:55.673 [INFO][4206] k8s.go 386: Populated endpoint ContainerID="d6a6c44db44eecc43ded1cbc7c36325205f99310c206c8caab1bfda17b68b11a" Namespace="calico-system" Pod="csi-node-driver-9ksc4" WorkloadEndpoint="ci--3975--1--1--4--69569a1933.novalocal-k8s-csi--node--driver--9ksc4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--1--1--4--69569a1933.novalocal-k8s-csi--node--driver--9ksc4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f825e197-24d6-43c1-8001-acbd6a4ca977", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 35, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-1-1-4-69569a1933.novalocal", ContainerID:"", Pod:"csi-node-driver-9ksc4", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.36.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali1931376c1a8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jul 2 00:35:55.704051 containerd[1444]: 2024-07-02 00:35:55.673 [INFO][4206] k8s.go 387: Calico CNI using IPs: [192.168.36.194/32] ContainerID="d6a6c44db44eecc43ded1cbc7c36325205f99310c206c8caab1bfda17b68b11a" Namespace="calico-system" Pod="csi-node-driver-9ksc4" WorkloadEndpoint="ci--3975--1--1--4--69569a1933.novalocal-k8s-csi--node--driver--9ksc4-eth0"
Jul 2 00:35:55.704051 containerd[1444]: 2024-07-02 00:35:55.673 [INFO][4206] dataplane_linux.go 68: Setting the host side veth name to cali1931376c1a8 ContainerID="d6a6c44db44eecc43ded1cbc7c36325205f99310c206c8caab1bfda17b68b11a" Namespace="calico-system" Pod="csi-node-driver-9ksc4" WorkloadEndpoint="ci--3975--1--1--4--69569a1933.novalocal-k8s-csi--node--driver--9ksc4-eth0"
Jul 2 00:35:55.704051 containerd[1444]: 2024-07-02 00:35:55.678 [INFO][4206] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="d6a6c44db44eecc43ded1cbc7c36325205f99310c206c8caab1bfda17b68b11a" Namespace="calico-system" Pod="csi-node-driver-9ksc4" WorkloadEndpoint="ci--3975--1--1--4--69569a1933.novalocal-k8s-csi--node--driver--9ksc4-eth0"
Jul 2 00:35:55.704051 containerd[1444]: 2024-07-02 00:35:55.680 [INFO][4206] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d6a6c44db44eecc43ded1cbc7c36325205f99310c206c8caab1bfda17b68b11a" Namespace="calico-system" Pod="csi-node-driver-9ksc4" WorkloadEndpoint="ci--3975--1--1--4--69569a1933.novalocal-k8s-csi--node--driver--9ksc4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--1--1--4--69569a1933.novalocal-k8s-csi--node--driver--9ksc4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f825e197-24d6-43c1-8001-acbd6a4ca977", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 35, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-1-1-4-69569a1933.novalocal", ContainerID:"d6a6c44db44eecc43ded1cbc7c36325205f99310c206c8caab1bfda17b68b11a", Pod:"csi-node-driver-9ksc4", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.36.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali1931376c1a8", MAC:"e6:a5:08:2b:62:63", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jul 2 00:35:55.704051 containerd[1444]: 2024-07-02 00:35:55.691 [INFO][4206] k8s.go 500: Wrote updated endpoint to datastore ContainerID="d6a6c44db44eecc43ded1cbc7c36325205f99310c206c8caab1bfda17b68b11a" Namespace="calico-system" Pod="csi-node-driver-9ksc4" WorkloadEndpoint="ci--3975--1--1--4--69569a1933.novalocal-k8s-csi--node--driver--9ksc4-eth0"
Jul 2 00:35:55.739435 containerd[1444]: time="2024-07-02T00:35:55.739142430Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:35:55.739435 containerd[1444]: time="2024-07-02T00:35:55.739216734Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:35:55.739435 containerd[1444]: time="2024-07-02T00:35:55.739266984Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:35:55.739435 containerd[1444]: time="2024-07-02T00:35:55.739294123Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:35:55.766414 systemd[1]: run-containerd-runc-k8s.io-d6a6c44db44eecc43ded1cbc7c36325205f99310c206c8caab1bfda17b68b11a-runc.aqT2MT.mount: Deactivated successfully.
Jul 2 00:35:55.776275 systemd[1]: Started cri-containerd-d6a6c44db44eecc43ded1cbc7c36325205f99310c206c8caab1bfda17b68b11a.scope - libcontainer container d6a6c44db44eecc43ded1cbc7c36325205f99310c206c8caab1bfda17b68b11a.
Jul 2 00:35:55.810252 containerd[1444]: time="2024-07-02T00:35:55.810200248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9ksc4,Uid:f825e197-24d6-43c1-8001-acbd6a4ca977,Namespace:calico-system,Attempt:1,} returns sandbox id \"d6a6c44db44eecc43ded1cbc7c36325205f99310c206c8caab1bfda17b68b11a\"" Jul 2 00:35:55.835945 containerd[1444]: time="2024-07-02T00:35:55.835836200Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Jul 2 00:35:57.441405 systemd-networkd[1365]: cali1931376c1a8: Gained IPv6LL Jul 2 00:35:58.215019 containerd[1444]: time="2024-07-02T00:35:58.214946037Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:35:58.216892 containerd[1444]: time="2024-07-02T00:35:58.216599412Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7641062" Jul 2 00:35:58.218583 containerd[1444]: time="2024-07-02T00:35:58.218178464Z" level=info msg="ImageCreate event name:\"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:35:58.220903 containerd[1444]: time="2024-07-02T00:35:58.220865626Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:35:58.221630 containerd[1444]: time="2024-07-02T00:35:58.221593100Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"9088822\" in 2.385711989s" Jul 2 00:35:58.221685 containerd[1444]: time="2024-07-02T00:35:58.221646307Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\"" Jul 2 00:35:58.226552 containerd[1444]: time="2024-07-02T00:35:58.226506873Z" level=info msg="CreateContainer within sandbox \"d6a6c44db44eecc43ded1cbc7c36325205f99310c206c8caab1bfda17b68b11a\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 2 00:35:58.254990 containerd[1444]: time="2024-07-02T00:35:58.254940369Z" level=info msg="CreateContainer within sandbox \"d6a6c44db44eecc43ded1cbc7c36325205f99310c206c8caab1bfda17b68b11a\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"f32b17160fc2e00577233fd34e2b90b1403a81e986f386617daacf55f3f3d49a\"" Jul 2 00:35:58.256641 containerd[1444]: time="2024-07-02T00:35:58.255552805Z" level=info msg="StartContainer for \"f32b17160fc2e00577233fd34e2b90b1403a81e986f386617daacf55f3f3d49a\"" Jul 2 00:35:58.297297 systemd[1]: Started cri-containerd-f32b17160fc2e00577233fd34e2b90b1403a81e986f386617daacf55f3f3d49a.scope - libcontainer container f32b17160fc2e00577233fd34e2b90b1403a81e986f386617daacf55f3f3d49a. Jul 2 00:35:58.348145 containerd[1444]: time="2024-07-02T00:35:58.347722704Z" level=info msg="StopPodSandbox for \"9282b1be0a3390b817103ba21e6911c3040d3eb0c0cbc2c471f8e7d110cfd3c6\"" Jul 2 00:35:58.370452 containerd[1444]: time="2024-07-02T00:35:58.370041678Z" level=info msg="StartContainer for \"f32b17160fc2e00577233fd34e2b90b1403a81e986f386617daacf55f3f3d49a\" returns successfully" Jul 2 00:35:58.374173 containerd[1444]: time="2024-07-02T00:35:58.374120113Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Jul 2 00:35:58.489496 systemd[1]: Started sshd@9-172.24.4.39:22-172.24.4.1:55750.service - OpenSSH per-connection server daemon (172.24.4.1:55750). 
Jul 2 00:35:58.504871 containerd[1444]: 2024-07-02 00:35:58.430 [INFO][4337] k8s.go 608: Cleaning up netns ContainerID="9282b1be0a3390b817103ba21e6911c3040d3eb0c0cbc2c471f8e7d110cfd3c6" Jul 2 00:35:58.504871 containerd[1444]: 2024-07-02 00:35:58.430 [INFO][4337] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="9282b1be0a3390b817103ba21e6911c3040d3eb0c0cbc2c471f8e7d110cfd3c6" iface="eth0" netns="/var/run/netns/cni-202ba362-1efc-7af3-5597-336d812a883a" Jul 2 00:35:58.504871 containerd[1444]: 2024-07-02 00:35:58.432 [INFO][4337] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="9282b1be0a3390b817103ba21e6911c3040d3eb0c0cbc2c471f8e7d110cfd3c6" iface="eth0" netns="/var/run/netns/cni-202ba362-1efc-7af3-5597-336d812a883a" Jul 2 00:35:58.504871 containerd[1444]: 2024-07-02 00:35:58.433 [INFO][4337] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="9282b1be0a3390b817103ba21e6911c3040d3eb0c0cbc2c471f8e7d110cfd3c6" iface="eth0" netns="/var/run/netns/cni-202ba362-1efc-7af3-5597-336d812a883a" Jul 2 00:35:58.504871 containerd[1444]: 2024-07-02 00:35:58.433 [INFO][4337] k8s.go 615: Releasing IP address(es) ContainerID="9282b1be0a3390b817103ba21e6911c3040d3eb0c0cbc2c471f8e7d110cfd3c6" Jul 2 00:35:58.504871 containerd[1444]: 2024-07-02 00:35:58.433 [INFO][4337] utils.go 188: Calico CNI releasing IP address ContainerID="9282b1be0a3390b817103ba21e6911c3040d3eb0c0cbc2c471f8e7d110cfd3c6" Jul 2 00:35:58.504871 containerd[1444]: 2024-07-02 00:35:58.473 [INFO][4344] ipam_plugin.go 411: Releasing address using handleID ContainerID="9282b1be0a3390b817103ba21e6911c3040d3eb0c0cbc2c471f8e7d110cfd3c6" HandleID="k8s-pod-network.9282b1be0a3390b817103ba21e6911c3040d3eb0c0cbc2c471f8e7d110cfd3c6" Workload="ci--3975--1--1--4--69569a1933.novalocal-k8s-coredns--76f75df574--wmfxs-eth0" Jul 2 00:35:58.504871 containerd[1444]: 2024-07-02 00:35:58.473 [INFO][4344] ipam_plugin.go 352: About to acquire host-wide IPAM lock. 
Jul 2 00:35:58.504871 containerd[1444]: 2024-07-02 00:35:58.473 [INFO][4344] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:35:58.504871 containerd[1444]: 2024-07-02 00:35:58.488 [WARNING][4344] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="9282b1be0a3390b817103ba21e6911c3040d3eb0c0cbc2c471f8e7d110cfd3c6" HandleID="k8s-pod-network.9282b1be0a3390b817103ba21e6911c3040d3eb0c0cbc2c471f8e7d110cfd3c6" Workload="ci--3975--1--1--4--69569a1933.novalocal-k8s-coredns--76f75df574--wmfxs-eth0" Jul 2 00:35:58.504871 containerd[1444]: 2024-07-02 00:35:58.488 [INFO][4344] ipam_plugin.go 439: Releasing address using workloadID ContainerID="9282b1be0a3390b817103ba21e6911c3040d3eb0c0cbc2c471f8e7d110cfd3c6" HandleID="k8s-pod-network.9282b1be0a3390b817103ba21e6911c3040d3eb0c0cbc2c471f8e7d110cfd3c6" Workload="ci--3975--1--1--4--69569a1933.novalocal-k8s-coredns--76f75df574--wmfxs-eth0" Jul 2 00:35:58.504871 containerd[1444]: 2024-07-02 00:35:58.494 [INFO][4344] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:35:58.504871 containerd[1444]: 2024-07-02 00:35:58.501 [INFO][4337] k8s.go 621: Teardown processing complete. ContainerID="9282b1be0a3390b817103ba21e6911c3040d3eb0c0cbc2c471f8e7d110cfd3c6" Jul 2 00:35:58.508875 containerd[1444]: time="2024-07-02T00:35:58.507889875Z" level=info msg="TearDown network for sandbox \"9282b1be0a3390b817103ba21e6911c3040d3eb0c0cbc2c471f8e7d110cfd3c6\" successfully" Jul 2 00:35:58.508875 containerd[1444]: time="2024-07-02T00:35:58.507948110Z" level=info msg="StopPodSandbox for \"9282b1be0a3390b817103ba21e6911c3040d3eb0c0cbc2c471f8e7d110cfd3c6\" returns successfully" Jul 2 00:35:58.513244 systemd[1]: run-netns-cni\x2d202ba362\x2d1efc\x2d7af3\x2d5597\x2d336d812a883a.mount: Deactivated successfully. 
Jul 2 00:35:58.516230 containerd[1444]: time="2024-07-02T00:35:58.514780637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-wmfxs,Uid:4aa0f677-0725-46c4-8993-0c9903cb9cb0,Namespace:kube-system,Attempt:1,}" Jul 2 00:35:58.778489 systemd-networkd[1365]: cali932f553213e: Link UP Jul 2 00:35:58.778698 systemd-networkd[1365]: cali932f553213e: Gained carrier Jul 2 00:35:58.810308 containerd[1444]: 2024-07-02 00:35:58.630 [INFO][4353] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975--1--1--4--69569a1933.novalocal-k8s-coredns--76f75df574--wmfxs-eth0 coredns-76f75df574- kube-system 4aa0f677-0725-46c4-8993-0c9903cb9cb0 850 0 2024-07-02 00:34:56 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3975-1-1-4-69569a1933.novalocal coredns-76f75df574-wmfxs eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali932f553213e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="5ed7c2236068338d00891e37f6eff965c4dc862c17ad600cbea525f6b6627486" Namespace="kube-system" Pod="coredns-76f75df574-wmfxs" WorkloadEndpoint="ci--3975--1--1--4--69569a1933.novalocal-k8s-coredns--76f75df574--wmfxs-" Jul 2 00:35:58.810308 containerd[1444]: 2024-07-02 00:35:58.630 [INFO][4353] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5ed7c2236068338d00891e37f6eff965c4dc862c17ad600cbea525f6b6627486" Namespace="kube-system" Pod="coredns-76f75df574-wmfxs" WorkloadEndpoint="ci--3975--1--1--4--69569a1933.novalocal-k8s-coredns--76f75df574--wmfxs-eth0" Jul 2 00:35:58.810308 containerd[1444]: 2024-07-02 00:35:58.698 [INFO][4364] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5ed7c2236068338d00891e37f6eff965c4dc862c17ad600cbea525f6b6627486" 
HandleID="k8s-pod-network.5ed7c2236068338d00891e37f6eff965c4dc862c17ad600cbea525f6b6627486" Workload="ci--3975--1--1--4--69569a1933.novalocal-k8s-coredns--76f75df574--wmfxs-eth0" Jul 2 00:35:58.810308 containerd[1444]: 2024-07-02 00:35:58.714 [INFO][4364] ipam_plugin.go 264: Auto assigning IP ContainerID="5ed7c2236068338d00891e37f6eff965c4dc862c17ad600cbea525f6b6627486" HandleID="k8s-pod-network.5ed7c2236068338d00891e37f6eff965c4dc862c17ad600cbea525f6b6627486" Workload="ci--3975--1--1--4--69569a1933.novalocal-k8s-coredns--76f75df574--wmfxs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ca910), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3975-1-1-4-69569a1933.novalocal", "pod":"coredns-76f75df574-wmfxs", "timestamp":"2024-07-02 00:35:58.698090849 +0000 UTC"}, Hostname:"ci-3975-1-1-4-69569a1933.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 00:35:58.810308 containerd[1444]: 2024-07-02 00:35:58.714 [INFO][4364] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:35:58.810308 containerd[1444]: 2024-07-02 00:35:58.714 [INFO][4364] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 00:35:58.810308 containerd[1444]: 2024-07-02 00:35:58.714 [INFO][4364] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975-1-1-4-69569a1933.novalocal' Jul 2 00:35:58.810308 containerd[1444]: 2024-07-02 00:35:58.718 [INFO][4364] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5ed7c2236068338d00891e37f6eff965c4dc862c17ad600cbea525f6b6627486" host="ci-3975-1-1-4-69569a1933.novalocal" Jul 2 00:35:58.810308 containerd[1444]: 2024-07-02 00:35:58.724 [INFO][4364] ipam.go 372: Looking up existing affinities for host host="ci-3975-1-1-4-69569a1933.novalocal" Jul 2 00:35:58.810308 containerd[1444]: 2024-07-02 00:35:58.730 [INFO][4364] ipam.go 489: Trying affinity for 192.168.36.192/26 host="ci-3975-1-1-4-69569a1933.novalocal" Jul 2 00:35:58.810308 containerd[1444]: 2024-07-02 00:35:58.734 [INFO][4364] ipam.go 155: Attempting to load block cidr=192.168.36.192/26 host="ci-3975-1-1-4-69569a1933.novalocal" Jul 2 00:35:58.810308 containerd[1444]: 2024-07-02 00:35:58.739 [INFO][4364] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.36.192/26 host="ci-3975-1-1-4-69569a1933.novalocal" Jul 2 00:35:58.810308 containerd[1444]: 2024-07-02 00:35:58.739 [INFO][4364] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.36.192/26 handle="k8s-pod-network.5ed7c2236068338d00891e37f6eff965c4dc862c17ad600cbea525f6b6627486" host="ci-3975-1-1-4-69569a1933.novalocal" Jul 2 00:35:58.810308 containerd[1444]: 2024-07-02 00:35:58.742 [INFO][4364] ipam.go 1685: Creating new handle: k8s-pod-network.5ed7c2236068338d00891e37f6eff965c4dc862c17ad600cbea525f6b6627486 Jul 2 00:35:58.810308 containerd[1444]: 2024-07-02 00:35:58.754 [INFO][4364] ipam.go 1203: Writing block in order to claim IPs block=192.168.36.192/26 handle="k8s-pod-network.5ed7c2236068338d00891e37f6eff965c4dc862c17ad600cbea525f6b6627486" host="ci-3975-1-1-4-69569a1933.novalocal" Jul 2 00:35:58.810308 containerd[1444]: 2024-07-02 00:35:58.768 [INFO][4364] 
ipam.go 1216: Successfully claimed IPs: [192.168.36.195/26] block=192.168.36.192/26 handle="k8s-pod-network.5ed7c2236068338d00891e37f6eff965c4dc862c17ad600cbea525f6b6627486" host="ci-3975-1-1-4-69569a1933.novalocal" Jul 2 00:35:58.810308 containerd[1444]: 2024-07-02 00:35:58.768 [INFO][4364] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.36.195/26] handle="k8s-pod-network.5ed7c2236068338d00891e37f6eff965c4dc862c17ad600cbea525f6b6627486" host="ci-3975-1-1-4-69569a1933.novalocal" Jul 2 00:35:58.810308 containerd[1444]: 2024-07-02 00:35:58.768 [INFO][4364] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:35:58.810308 containerd[1444]: 2024-07-02 00:35:58.768 [INFO][4364] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.36.195/26] IPv6=[] ContainerID="5ed7c2236068338d00891e37f6eff965c4dc862c17ad600cbea525f6b6627486" HandleID="k8s-pod-network.5ed7c2236068338d00891e37f6eff965c4dc862c17ad600cbea525f6b6627486" Workload="ci--3975--1--1--4--69569a1933.novalocal-k8s-coredns--76f75df574--wmfxs-eth0" Jul 2 00:35:58.810940 containerd[1444]: 2024-07-02 00:35:58.773 [INFO][4353] k8s.go 386: Populated endpoint ContainerID="5ed7c2236068338d00891e37f6eff965c4dc862c17ad600cbea525f6b6627486" Namespace="kube-system" Pod="coredns-76f75df574-wmfxs" WorkloadEndpoint="ci--3975--1--1--4--69569a1933.novalocal-k8s-coredns--76f75df574--wmfxs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--1--1--4--69569a1933.novalocal-k8s-coredns--76f75df574--wmfxs-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"4aa0f677-0725-46c4-8993-0c9903cb9cb0", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 34, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", 
"projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-1-1-4-69569a1933.novalocal", ContainerID:"", Pod:"coredns-76f75df574-wmfxs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.36.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali932f553213e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:35:58.810940 containerd[1444]: 2024-07-02 00:35:58.773 [INFO][4353] k8s.go 387: Calico CNI using IPs: [192.168.36.195/32] ContainerID="5ed7c2236068338d00891e37f6eff965c4dc862c17ad600cbea525f6b6627486" Namespace="kube-system" Pod="coredns-76f75df574-wmfxs" WorkloadEndpoint="ci--3975--1--1--4--69569a1933.novalocal-k8s-coredns--76f75df574--wmfxs-eth0" Jul 2 00:35:58.810940 containerd[1444]: 2024-07-02 00:35:58.774 [INFO][4353] dataplane_linux.go 68: Setting the host side veth name to cali932f553213e ContainerID="5ed7c2236068338d00891e37f6eff965c4dc862c17ad600cbea525f6b6627486" Namespace="kube-system" Pod="coredns-76f75df574-wmfxs" WorkloadEndpoint="ci--3975--1--1--4--69569a1933.novalocal-k8s-coredns--76f75df574--wmfxs-eth0" Jul 2 00:35:58.810940 containerd[1444]: 2024-07-02 00:35:58.778 
[INFO][4353] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="5ed7c2236068338d00891e37f6eff965c4dc862c17ad600cbea525f6b6627486" Namespace="kube-system" Pod="coredns-76f75df574-wmfxs" WorkloadEndpoint="ci--3975--1--1--4--69569a1933.novalocal-k8s-coredns--76f75df574--wmfxs-eth0" Jul 2 00:35:58.810940 containerd[1444]: 2024-07-02 00:35:58.782 [INFO][4353] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="5ed7c2236068338d00891e37f6eff965c4dc862c17ad600cbea525f6b6627486" Namespace="kube-system" Pod="coredns-76f75df574-wmfxs" WorkloadEndpoint="ci--3975--1--1--4--69569a1933.novalocal-k8s-coredns--76f75df574--wmfxs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--1--1--4--69569a1933.novalocal-k8s-coredns--76f75df574--wmfxs-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"4aa0f677-0725-46c4-8993-0c9903cb9cb0", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 34, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-1-1-4-69569a1933.novalocal", ContainerID:"5ed7c2236068338d00891e37f6eff965c4dc862c17ad600cbea525f6b6627486", Pod:"coredns-76f75df574-wmfxs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.36.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"cali932f553213e", MAC:"ee:84:7e:12:66:5d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:35:58.810940 containerd[1444]: 2024-07-02 00:35:58.801 [INFO][4353] k8s.go 500: Wrote updated endpoint to datastore ContainerID="5ed7c2236068338d00891e37f6eff965c4dc862c17ad600cbea525f6b6627486" Namespace="kube-system" Pod="coredns-76f75df574-wmfxs" WorkloadEndpoint="ci--3975--1--1--4--69569a1933.novalocal-k8s-coredns--76f75df574--wmfxs-eth0" Jul 2 00:35:58.856105 containerd[1444]: time="2024-07-02T00:35:58.854879657Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:35:58.856105 containerd[1444]: time="2024-07-02T00:35:58.854955595Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:35:58.856105 containerd[1444]: time="2024-07-02T00:35:58.854983395Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:35:58.856105 containerd[1444]: time="2024-07-02T00:35:58.855001408Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:35:58.883569 systemd[1]: Started cri-containerd-5ed7c2236068338d00891e37f6eff965c4dc862c17ad600cbea525f6b6627486.scope - libcontainer container 5ed7c2236068338d00891e37f6eff965c4dc862c17ad600cbea525f6b6627486. 
Jul 2 00:35:58.963484 containerd[1444]: time="2024-07-02T00:35:58.963039193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-wmfxs,Uid:4aa0f677-0725-46c4-8993-0c9903cb9cb0,Namespace:kube-system,Attempt:1,} returns sandbox id \"5ed7c2236068338d00891e37f6eff965c4dc862c17ad600cbea525f6b6627486\"" Jul 2 00:35:58.969512 containerd[1444]: time="2024-07-02T00:35:58.969449859Z" level=info msg="CreateContainer within sandbox \"5ed7c2236068338d00891e37f6eff965c4dc862c17ad600cbea525f6b6627486\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 00:35:58.991638 containerd[1444]: time="2024-07-02T00:35:58.991544217Z" level=info msg="CreateContainer within sandbox \"5ed7c2236068338d00891e37f6eff965c4dc862c17ad600cbea525f6b6627486\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fbda5c370e5d411fc79f43d74d87c90826ce6c358a12c78a7ab46644054547d7\"" Jul 2 00:35:58.993046 containerd[1444]: time="2024-07-02T00:35:58.992502447Z" level=info msg="StartContainer for \"fbda5c370e5d411fc79f43d74d87c90826ce6c358a12c78a7ab46644054547d7\"" Jul 2 00:35:59.022260 systemd[1]: Started cri-containerd-fbda5c370e5d411fc79f43d74d87c90826ce6c358a12c78a7ab46644054547d7.scope - libcontainer container fbda5c370e5d411fc79f43d74d87c90826ce6c358a12c78a7ab46644054547d7. 
Jul 2 00:35:59.068258 containerd[1444]: time="2024-07-02T00:35:59.067597211Z" level=info msg="StartContainer for \"fbda5c370e5d411fc79f43d74d87c90826ce6c358a12c78a7ab46644054547d7\" returns successfully" Jul 2 00:35:59.356942 containerd[1444]: time="2024-07-02T00:35:59.356374945Z" level=info msg="StopPodSandbox for \"f91170ed8c902e281c58992e0605f57f6a6f088fb13f4b3dd2c20e2a999f8215\"" Jul 2 00:35:59.525915 containerd[1444]: 2024-07-02 00:35:59.478 [INFO][4477] k8s.go 608: Cleaning up netns ContainerID="f91170ed8c902e281c58992e0605f57f6a6f088fb13f4b3dd2c20e2a999f8215" Jul 2 00:35:59.525915 containerd[1444]: 2024-07-02 00:35:59.478 [INFO][4477] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="f91170ed8c902e281c58992e0605f57f6a6f088fb13f4b3dd2c20e2a999f8215" iface="eth0" netns="/var/run/netns/cni-7d3600e9-035e-5244-558f-abd242e4fce8" Jul 2 00:35:59.525915 containerd[1444]: 2024-07-02 00:35:59.479 [INFO][4477] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="f91170ed8c902e281c58992e0605f57f6a6f088fb13f4b3dd2c20e2a999f8215" iface="eth0" netns="/var/run/netns/cni-7d3600e9-035e-5244-558f-abd242e4fce8" Jul 2 00:35:59.525915 containerd[1444]: 2024-07-02 00:35:59.481 [INFO][4477] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="f91170ed8c902e281c58992e0605f57f6a6f088fb13f4b3dd2c20e2a999f8215" iface="eth0" netns="/var/run/netns/cni-7d3600e9-035e-5244-558f-abd242e4fce8" Jul 2 00:35:59.525915 containerd[1444]: 2024-07-02 00:35:59.481 [INFO][4477] k8s.go 615: Releasing IP address(es) ContainerID="f91170ed8c902e281c58992e0605f57f6a6f088fb13f4b3dd2c20e2a999f8215" Jul 2 00:35:59.525915 containerd[1444]: 2024-07-02 00:35:59.481 [INFO][4477] utils.go 188: Calico CNI releasing IP address ContainerID="f91170ed8c902e281c58992e0605f57f6a6f088fb13f4b3dd2c20e2a999f8215" Jul 2 00:35:59.525915 containerd[1444]: 2024-07-02 00:35:59.512 [INFO][4483] ipam_plugin.go 411: Releasing address using handleID ContainerID="f91170ed8c902e281c58992e0605f57f6a6f088fb13f4b3dd2c20e2a999f8215" HandleID="k8s-pod-network.f91170ed8c902e281c58992e0605f57f6a6f088fb13f4b3dd2c20e2a999f8215" Workload="ci--3975--1--1--4--69569a1933.novalocal-k8s-calico--kube--controllers--7548bf8497--6bsv7-eth0" Jul 2 00:35:59.525915 containerd[1444]: 2024-07-02 00:35:59.512 [INFO][4483] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:35:59.525915 containerd[1444]: 2024-07-02 00:35:59.512 [INFO][4483] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:35:59.525915 containerd[1444]: 2024-07-02 00:35:59.519 [WARNING][4483] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f91170ed8c902e281c58992e0605f57f6a6f088fb13f4b3dd2c20e2a999f8215" HandleID="k8s-pod-network.f91170ed8c902e281c58992e0605f57f6a6f088fb13f4b3dd2c20e2a999f8215" Workload="ci--3975--1--1--4--69569a1933.novalocal-k8s-calico--kube--controllers--7548bf8497--6bsv7-eth0" Jul 2 00:35:59.525915 containerd[1444]: 2024-07-02 00:35:59.520 [INFO][4483] ipam_plugin.go 439: Releasing address using workloadID ContainerID="f91170ed8c902e281c58992e0605f57f6a6f088fb13f4b3dd2c20e2a999f8215" HandleID="k8s-pod-network.f91170ed8c902e281c58992e0605f57f6a6f088fb13f4b3dd2c20e2a999f8215" Workload="ci--3975--1--1--4--69569a1933.novalocal-k8s-calico--kube--controllers--7548bf8497--6bsv7-eth0" Jul 2 00:35:59.525915 containerd[1444]: 2024-07-02 00:35:59.521 [INFO][4483] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:35:59.525915 containerd[1444]: 2024-07-02 00:35:59.523 [INFO][4477] k8s.go 621: Teardown processing complete. ContainerID="f91170ed8c902e281c58992e0605f57f6a6f088fb13f4b3dd2c20e2a999f8215" Jul 2 00:35:59.529685 containerd[1444]: time="2024-07-02T00:35:59.529031490Z" level=info msg="TearDown network for sandbox \"f91170ed8c902e281c58992e0605f57f6a6f088fb13f4b3dd2c20e2a999f8215\" successfully" Jul 2 00:35:59.529685 containerd[1444]: time="2024-07-02T00:35:59.529092279Z" level=info msg="StopPodSandbox for \"f91170ed8c902e281c58992e0605f57f6a6f088fb13f4b3dd2c20e2a999f8215\" returns successfully" Jul 2 00:35:59.530265 containerd[1444]: time="2024-07-02T00:35:59.530045875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7548bf8497-6bsv7,Uid:a34c27ef-25a3-4bac-90f8-f587a5b80a52,Namespace:calico-system,Attempt:1,}" Jul 2 00:35:59.531476 systemd[1]: run-netns-cni\x2d7d3600e9\x2d035e\x2d5244\x2d558f\x2dabd242e4fce8.mount: Deactivated successfully. 
Jul 2 00:35:59.709522 systemd-networkd[1365]: cali73806a61e03: Link UP Jul 2 00:35:59.709790 systemd-networkd[1365]: cali73806a61e03: Gained carrier Jul 2 00:35:59.727854 containerd[1444]: 2024-07-02 00:35:59.599 [INFO][4494] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975--1--1--4--69569a1933.novalocal-k8s-calico--kube--controllers--7548bf8497--6bsv7-eth0 calico-kube-controllers-7548bf8497- calico-system a34c27ef-25a3-4bac-90f8-f587a5b80a52 863 0 2024-07-02 00:35:04 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7548bf8497 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-3975-1-1-4-69569a1933.novalocal calico-kube-controllers-7548bf8497-6bsv7 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali73806a61e03 [] []}} ContainerID="ef396f47aaf0efca7e5224f227141d4c137af02a8cef6e1a580d3bf6e871e9e9" Namespace="calico-system" Pod="calico-kube-controllers-7548bf8497-6bsv7" WorkloadEndpoint="ci--3975--1--1--4--69569a1933.novalocal-k8s-calico--kube--controllers--7548bf8497--6bsv7-" Jul 2 00:35:59.727854 containerd[1444]: 2024-07-02 00:35:59.599 [INFO][4494] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ef396f47aaf0efca7e5224f227141d4c137af02a8cef6e1a580d3bf6e871e9e9" Namespace="calico-system" Pod="calico-kube-controllers-7548bf8497-6bsv7" WorkloadEndpoint="ci--3975--1--1--4--69569a1933.novalocal-k8s-calico--kube--controllers--7548bf8497--6bsv7-eth0" Jul 2 00:35:59.727854 containerd[1444]: 2024-07-02 00:35:59.633 [INFO][4501] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ef396f47aaf0efca7e5224f227141d4c137af02a8cef6e1a580d3bf6e871e9e9" HandleID="k8s-pod-network.ef396f47aaf0efca7e5224f227141d4c137af02a8cef6e1a580d3bf6e871e9e9" 
Workload="ci--3975--1--1--4--69569a1933.novalocal-k8s-calico--kube--controllers--7548bf8497--6bsv7-eth0" Jul 2 00:35:59.727854 containerd[1444]: 2024-07-02 00:35:59.650 [INFO][4501] ipam_plugin.go 264: Auto assigning IP ContainerID="ef396f47aaf0efca7e5224f227141d4c137af02a8cef6e1a580d3bf6e871e9e9" HandleID="k8s-pod-network.ef396f47aaf0efca7e5224f227141d4c137af02a8cef6e1a580d3bf6e871e9e9" Workload="ci--3975--1--1--4--69569a1933.novalocal-k8s-calico--kube--controllers--7548bf8497--6bsv7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002012d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3975-1-1-4-69569a1933.novalocal", "pod":"calico-kube-controllers-7548bf8497-6bsv7", "timestamp":"2024-07-02 00:35:59.633001595 +0000 UTC"}, Hostname:"ci-3975-1-1-4-69569a1933.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 00:35:59.727854 containerd[1444]: 2024-07-02 00:35:59.650 [INFO][4501] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:35:59.727854 containerd[1444]: 2024-07-02 00:35:59.651 [INFO][4501] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 00:35:59.727854 containerd[1444]: 2024-07-02 00:35:59.651 [INFO][4501] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975-1-1-4-69569a1933.novalocal' Jul 2 00:35:59.727854 containerd[1444]: 2024-07-02 00:35:59.655 [INFO][4501] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ef396f47aaf0efca7e5224f227141d4c137af02a8cef6e1a580d3bf6e871e9e9" host="ci-3975-1-1-4-69569a1933.novalocal" Jul 2 00:35:59.727854 containerd[1444]: 2024-07-02 00:35:59.662 [INFO][4501] ipam.go 372: Looking up existing affinities for host host="ci-3975-1-1-4-69569a1933.novalocal" Jul 2 00:35:59.727854 containerd[1444]: 2024-07-02 00:35:59.669 [INFO][4501] ipam.go 489: Trying affinity for 192.168.36.192/26 host="ci-3975-1-1-4-69569a1933.novalocal" Jul 2 00:35:59.727854 containerd[1444]: 2024-07-02 00:35:59.673 [INFO][4501] ipam.go 155: Attempting to load block cidr=192.168.36.192/26 host="ci-3975-1-1-4-69569a1933.novalocal" Jul 2 00:35:59.727854 containerd[1444]: 2024-07-02 00:35:59.680 [INFO][4501] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.36.192/26 host="ci-3975-1-1-4-69569a1933.novalocal" Jul 2 00:35:59.727854 containerd[1444]: 2024-07-02 00:35:59.680 [INFO][4501] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.36.192/26 handle="k8s-pod-network.ef396f47aaf0efca7e5224f227141d4c137af02a8cef6e1a580d3bf6e871e9e9" host="ci-3975-1-1-4-69569a1933.novalocal" Jul 2 00:35:59.727854 containerd[1444]: 2024-07-02 00:35:59.686 [INFO][4501] ipam.go 1685: Creating new handle: k8s-pod-network.ef396f47aaf0efca7e5224f227141d4c137af02a8cef6e1a580d3bf6e871e9e9 Jul 2 00:35:59.727854 containerd[1444]: 2024-07-02 00:35:59.692 [INFO][4501] ipam.go 1203: Writing block in order to claim IPs block=192.168.36.192/26 handle="k8s-pod-network.ef396f47aaf0efca7e5224f227141d4c137af02a8cef6e1a580d3bf6e871e9e9" host="ci-3975-1-1-4-69569a1933.novalocal" Jul 2 00:35:59.727854 containerd[1444]: 2024-07-02 00:35:59.699 [INFO][4501] 
ipam.go 1216: Successfully claimed IPs: [192.168.36.196/26] block=192.168.36.192/26 handle="k8s-pod-network.ef396f47aaf0efca7e5224f227141d4c137af02a8cef6e1a580d3bf6e871e9e9" host="ci-3975-1-1-4-69569a1933.novalocal" Jul 2 00:35:59.727854 containerd[1444]: 2024-07-02 00:35:59.699 [INFO][4501] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.36.196/26] handle="k8s-pod-network.ef396f47aaf0efca7e5224f227141d4c137af02a8cef6e1a580d3bf6e871e9e9" host="ci-3975-1-1-4-69569a1933.novalocal" Jul 2 00:35:59.727854 containerd[1444]: 2024-07-02 00:35:59.700 [INFO][4501] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:35:59.727854 containerd[1444]: 2024-07-02 00:35:59.700 [INFO][4501] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.36.196/26] IPv6=[] ContainerID="ef396f47aaf0efca7e5224f227141d4c137af02a8cef6e1a580d3bf6e871e9e9" HandleID="k8s-pod-network.ef396f47aaf0efca7e5224f227141d4c137af02a8cef6e1a580d3bf6e871e9e9" Workload="ci--3975--1--1--4--69569a1933.novalocal-k8s-calico--kube--controllers--7548bf8497--6bsv7-eth0" Jul 2 00:35:59.729952 containerd[1444]: 2024-07-02 00:35:59.703 [INFO][4494] k8s.go 386: Populated endpoint ContainerID="ef396f47aaf0efca7e5224f227141d4c137af02a8cef6e1a580d3bf6e871e9e9" Namespace="calico-system" Pod="calico-kube-controllers-7548bf8497-6bsv7" WorkloadEndpoint="ci--3975--1--1--4--69569a1933.novalocal-k8s-calico--kube--controllers--7548bf8497--6bsv7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--1--1--4--69569a1933.novalocal-k8s-calico--kube--controllers--7548bf8497--6bsv7-eth0", GenerateName:"calico-kube-controllers-7548bf8497-", Namespace:"calico-system", SelfLink:"", UID:"a34c27ef-25a3-4bac-90f8-f587a5b80a52", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 35, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7548bf8497", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-1-1-4-69569a1933.novalocal", ContainerID:"", Pod:"calico-kube-controllers-7548bf8497-6bsv7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.36.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali73806a61e03", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:35:59.729952 containerd[1444]: 2024-07-02 00:35:59.703 [INFO][4494] k8s.go 387: Calico CNI using IPs: [192.168.36.196/32] ContainerID="ef396f47aaf0efca7e5224f227141d4c137af02a8cef6e1a580d3bf6e871e9e9" Namespace="calico-system" Pod="calico-kube-controllers-7548bf8497-6bsv7" WorkloadEndpoint="ci--3975--1--1--4--69569a1933.novalocal-k8s-calico--kube--controllers--7548bf8497--6bsv7-eth0" Jul 2 00:35:59.729952 containerd[1444]: 2024-07-02 00:35:59.703 [INFO][4494] dataplane_linux.go 68: Setting the host side veth name to cali73806a61e03 ContainerID="ef396f47aaf0efca7e5224f227141d4c137af02a8cef6e1a580d3bf6e871e9e9" Namespace="calico-system" Pod="calico-kube-controllers-7548bf8497-6bsv7" WorkloadEndpoint="ci--3975--1--1--4--69569a1933.novalocal-k8s-calico--kube--controllers--7548bf8497--6bsv7-eth0" Jul 2 00:35:59.729952 containerd[1444]: 2024-07-02 00:35:59.708 [INFO][4494] dataplane_linux.go 479: Disabling IPv4 forwarding 
ContainerID="ef396f47aaf0efca7e5224f227141d4c137af02a8cef6e1a580d3bf6e871e9e9" Namespace="calico-system" Pod="calico-kube-controllers-7548bf8497-6bsv7" WorkloadEndpoint="ci--3975--1--1--4--69569a1933.novalocal-k8s-calico--kube--controllers--7548bf8497--6bsv7-eth0" Jul 2 00:35:59.729952 containerd[1444]: 2024-07-02 00:35:59.709 [INFO][4494] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ef396f47aaf0efca7e5224f227141d4c137af02a8cef6e1a580d3bf6e871e9e9" Namespace="calico-system" Pod="calico-kube-controllers-7548bf8497-6bsv7" WorkloadEndpoint="ci--3975--1--1--4--69569a1933.novalocal-k8s-calico--kube--controllers--7548bf8497--6bsv7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--1--1--4--69569a1933.novalocal-k8s-calico--kube--controllers--7548bf8497--6bsv7-eth0", GenerateName:"calico-kube-controllers-7548bf8497-", Namespace:"calico-system", SelfLink:"", UID:"a34c27ef-25a3-4bac-90f8-f587a5b80a52", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 35, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7548bf8497", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-1-1-4-69569a1933.novalocal", ContainerID:"ef396f47aaf0efca7e5224f227141d4c137af02a8cef6e1a580d3bf6e871e9e9", Pod:"calico-kube-controllers-7548bf8497-6bsv7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", 
IPNetworks:[]string{"192.168.36.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali73806a61e03", MAC:"c2:f4:23:f0:52:59", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:35:59.729952 containerd[1444]: 2024-07-02 00:35:59.724 [INFO][4494] k8s.go 500: Wrote updated endpoint to datastore ContainerID="ef396f47aaf0efca7e5224f227141d4c137af02a8cef6e1a580d3bf6e871e9e9" Namespace="calico-system" Pod="calico-kube-controllers-7548bf8497-6bsv7" WorkloadEndpoint="ci--3975--1--1--4--69569a1933.novalocal-k8s-calico--kube--controllers--7548bf8497--6bsv7-eth0" Jul 2 00:35:59.764419 containerd[1444]: time="2024-07-02T00:35:59.763861550Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:35:59.764419 containerd[1444]: time="2024-07-02T00:35:59.763954268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:35:59.764419 containerd[1444]: time="2024-07-02T00:35:59.763983471Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:35:59.764419 containerd[1444]: time="2024-07-02T00:35:59.764002856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:35:59.792364 systemd[1]: Started cri-containerd-ef396f47aaf0efca7e5224f227141d4c137af02a8cef6e1a580d3bf6e871e9e9.scope - libcontainer container ef396f47aaf0efca7e5224f227141d4c137af02a8cef6e1a580d3bf6e871e9e9. 
Jul 2 00:35:59.844705 containerd[1444]: time="2024-07-02T00:35:59.844649904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7548bf8497-6bsv7,Uid:a34c27ef-25a3-4bac-90f8-f587a5b80a52,Namespace:calico-system,Attempt:1,} returns sandbox id \"ef396f47aaf0efca7e5224f227141d4c137af02a8cef6e1a580d3bf6e871e9e9\"" Jul 2 00:35:59.870200 sshd[4351]: Accepted publickey for core from 172.24.4.1 port 55750 ssh2: RSA SHA256:PCQlpQPF3MQUBJB7FGXO+NVNbcy5KrkTt8QQEvm9Ado Jul 2 00:35:59.873207 systemd-networkd[1365]: cali932f553213e: Gained IPv6LL Jul 2 00:35:59.874306 sshd[4351]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:35:59.880040 systemd-logind[1432]: New session 12 of user core. Jul 2 00:35:59.888209 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 2 00:36:00.080097 kubelet[2632]: I0702 00:36:00.079473 2632 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-wmfxs" podStartSLOduration=64.079321648 podStartE2EDuration="1m4.079321648s" podCreationTimestamp="2024-07-02 00:34:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:36:00.042243472 +0000 UTC m=+76.925487469" watchObservedRunningTime="2024-07-02 00:36:00.079321648 +0000 UTC m=+76.962565595" Jul 2 00:36:00.643894 containerd[1444]: time="2024-07-02T00:36:00.643838649Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:36:00.645287 containerd[1444]: time="2024-07-02T00:36:00.645250679Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=10147655" Jul 2 00:36:00.646217 containerd[1444]: time="2024-07-02T00:36:00.646192456Z" level=info msg="ImageCreate event 
name:\"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:36:00.648999 containerd[1444]: time="2024-07-02T00:36:00.648935358Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:36:00.649781 containerd[1444]: time="2024-07-02T00:36:00.649742851Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"11595367\" in 2.275558643s" Jul 2 00:36:00.649971 containerd[1444]: time="2024-07-02T00:36:00.649951820Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\"" Jul 2 00:36:00.655945 containerd[1444]: time="2024-07-02T00:36:00.654470941Z" level=info msg="CreateContainer within sandbox \"d6a6c44db44eecc43ded1cbc7c36325205f99310c206c8caab1bfda17b68b11a\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 2 00:36:00.673911 containerd[1444]: time="2024-07-02T00:36:00.673863050Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Jul 2 00:36:00.685999 containerd[1444]: time="2024-07-02T00:36:00.685945981Z" level=info msg="CreateContainer within sandbox \"d6a6c44db44eecc43ded1cbc7c36325205f99310c206c8caab1bfda17b68b11a\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"4721a44a79129c3c3d95e54caa0236559982636d422ad422c1dff0d377cdedf5\"" Jul 2 00:36:00.688161 
containerd[1444]: time="2024-07-02T00:36:00.686873683Z" level=info msg="StartContainer for \"4721a44a79129c3c3d95e54caa0236559982636d422ad422c1dff0d377cdedf5\"" Jul 2 00:36:00.709823 sshd[4351]: pam_unix(sshd:session): session closed for user core Jul 2 00:36:00.717133 systemd[1]: sshd@9-172.24.4.39:22-172.24.4.1:55750.service: Deactivated successfully. Jul 2 00:36:00.723364 systemd[1]: session-12.scope: Deactivated successfully. Jul 2 00:36:00.726778 systemd-logind[1432]: Session 12 logged out. Waiting for processes to exit. Jul 2 00:36:00.738131 systemd-logind[1432]: Removed session 12. Jul 2 00:36:00.742480 systemd[1]: Started cri-containerd-4721a44a79129c3c3d95e54caa0236559982636d422ad422c1dff0d377cdedf5.scope - libcontainer container 4721a44a79129c3c3d95e54caa0236559982636d422ad422c1dff0d377cdedf5. Jul 2 00:36:00.782591 containerd[1444]: time="2024-07-02T00:36:00.782172145Z" level=info msg="StartContainer for \"4721a44a79129c3c3d95e54caa0236559982636d422ad422c1dff0d377cdedf5\" returns successfully" Jul 2 00:36:00.833343 systemd-networkd[1365]: cali73806a61e03: Gained IPv6LL Jul 2 00:36:01.006034 kubelet[2632]: I0702 00:36:01.004602 2632 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-9ksc4" podStartSLOduration=52.177875967 podStartE2EDuration="57.004548613s" podCreationTimestamp="2024-07-02 00:35:04 +0000 UTC" firstStartedPulling="2024-07-02 00:35:55.823778188 +0000 UTC m=+72.707022135" lastFinishedPulling="2024-07-02 00:36:00.650450834 +0000 UTC m=+77.533694781" observedRunningTime="2024-07-02 00:36:01.003271484 +0000 UTC m=+77.886515531" watchObservedRunningTime="2024-07-02 00:36:01.004548613 +0000 UTC m=+77.887792560" Jul 2 00:36:01.933305 kubelet[2632]: I0702 00:36:01.933183 2632 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 2 00:36:01.943581 kubelet[2632]: I0702 
00:36:01.943492 2632 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 2 00:36:03.796759 containerd[1444]: time="2024-07-02T00:36:03.796520005Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:36:03.798595 containerd[1444]: time="2024-07-02T00:36:03.798505322Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=33505793" Jul 2 00:36:03.799878 containerd[1444]: time="2024-07-02T00:36:03.799833472Z" level=info msg="ImageCreate event name:\"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:36:03.802357 containerd[1444]: time="2024-07-02T00:36:03.802317298Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:36:03.803942 containerd[1444]: time="2024-07-02T00:36:03.803898949Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"34953521\" in 3.129985928s" Jul 2 00:36:03.803942 containerd[1444]: time="2024-07-02T00:36:03.803936137Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\"" Jul 2 00:36:03.840618 containerd[1444]: time="2024-07-02T00:36:03.840579327Z" level=info msg="CreateContainer within sandbox 
\"ef396f47aaf0efca7e5224f227141d4c137af02a8cef6e1a580d3bf6e871e9e9\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 2 00:36:04.098016 containerd[1444]: time="2024-07-02T00:36:04.096711401Z" level=info msg="CreateContainer within sandbox \"ef396f47aaf0efca7e5224f227141d4c137af02a8cef6e1a580d3bf6e871e9e9\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"2b7032c9a949c3829110aea5b3c9c640bc0a30f06d2ae96aa24a36b192f5cfab\"" Jul 2 00:36:04.104394 containerd[1444]: time="2024-07-02T00:36:04.104332719Z" level=info msg="StartContainer for \"2b7032c9a949c3829110aea5b3c9c640bc0a30f06d2ae96aa24a36b192f5cfab\"" Jul 2 00:36:04.179218 systemd[1]: Started cri-containerd-2b7032c9a949c3829110aea5b3c9c640bc0a30f06d2ae96aa24a36b192f5cfab.scope - libcontainer container 2b7032c9a949c3829110aea5b3c9c640bc0a30f06d2ae96aa24a36b192f5cfab. Jul 2 00:36:04.227807 containerd[1444]: time="2024-07-02T00:36:04.227757966Z" level=info msg="StartContainer for \"2b7032c9a949c3829110aea5b3c9c640bc0a30f06d2ae96aa24a36b192f5cfab\" returns successfully" Jul 2 00:36:05.162468 kubelet[2632]: I0702 00:36:05.162399 2632 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7548bf8497-6bsv7" podStartSLOduration=57.204297978 podStartE2EDuration="1m1.162347467s" podCreationTimestamp="2024-07-02 00:35:04 +0000 UTC" firstStartedPulling="2024-07-02 00:35:59.846144778 +0000 UTC m=+76.729388725" lastFinishedPulling="2024-07-02 00:36:03.804194257 +0000 UTC m=+80.687438214" observedRunningTime="2024-07-02 00:36:05.161806288 +0000 UTC m=+82.045050235" watchObservedRunningTime="2024-07-02 00:36:05.162347467 +0000 UTC m=+82.045591414" Jul 2 00:36:05.731610 systemd[1]: Started sshd@10-172.24.4.39:22-172.24.4.1:45454.service - OpenSSH per-connection server daemon (172.24.4.1:45454). 
Jul 2 00:36:07.237854 sshd[4703]: Accepted publickey for core from 172.24.4.1 port 45454 ssh2: RSA SHA256:PCQlpQPF3MQUBJB7FGXO+NVNbcy5KrkTt8QQEvm9Ado Jul 2 00:36:07.242148 sshd[4703]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:36:07.251495 systemd-logind[1432]: New session 13 of user core. Jul 2 00:36:07.259341 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 2 00:36:08.908768 sshd[4703]: pam_unix(sshd:session): session closed for user core Jul 2 00:36:08.922268 systemd[1]: sshd@10-172.24.4.39:22-172.24.4.1:45454.service: Deactivated successfully. Jul 2 00:36:08.930762 systemd[1]: session-13.scope: Deactivated successfully. Jul 2 00:36:08.932141 systemd-logind[1432]: Session 13 logged out. Waiting for processes to exit. Jul 2 00:36:08.933492 systemd-logind[1432]: Removed session 13. Jul 2 00:36:13.932736 systemd[1]: Started sshd@11-172.24.4.39:22-172.24.4.1:45462.service - OpenSSH per-connection server daemon (172.24.4.1:45462). Jul 2 00:36:15.514749 sshd[4755]: Accepted publickey for core from 172.24.4.1 port 45462 ssh2: RSA SHA256:PCQlpQPF3MQUBJB7FGXO+NVNbcy5KrkTt8QQEvm9Ado Jul 2 00:36:15.517986 sshd[4755]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:36:15.530675 systemd-logind[1432]: New session 14 of user core. Jul 2 00:36:15.538473 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 2 00:36:16.483599 systemd[1]: Started sshd@12-172.24.4.39:22-172.24.4.1:33190.service - OpenSSH per-connection server daemon (172.24.4.1:33190). Jul 2 00:36:16.600917 sshd[4755]: pam_unix(sshd:session): session closed for user core Jul 2 00:36:16.607011 systemd[1]: sshd@11-172.24.4.39:22-172.24.4.1:45462.service: Deactivated successfully. Jul 2 00:36:16.612049 systemd[1]: session-14.scope: Deactivated successfully. Jul 2 00:36:16.616890 systemd-logind[1432]: Session 14 logged out. Waiting for processes to exit. 
Jul 2 00:36:16.619538 systemd-logind[1432]: Removed session 14. Jul 2 00:36:17.964462 sshd[4768]: Accepted publickey for core from 172.24.4.1 port 33190 ssh2: RSA SHA256:PCQlpQPF3MQUBJB7FGXO+NVNbcy5KrkTt8QQEvm9Ado Jul 2 00:36:17.968793 sshd[4768]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:36:17.982243 systemd-logind[1432]: New session 15 of user core. Jul 2 00:36:17.987413 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 2 00:36:19.141751 sshd[4768]: pam_unix(sshd:session): session closed for user core Jul 2 00:36:19.161674 systemd[1]: Started sshd@13-172.24.4.39:22-172.24.4.1:33206.service - OpenSSH per-connection server daemon (172.24.4.1:33206). Jul 2 00:36:19.202277 systemd[1]: sshd@12-172.24.4.39:22-172.24.4.1:33190.service: Deactivated successfully. Jul 2 00:36:19.206700 systemd[1]: session-15.scope: Deactivated successfully. Jul 2 00:36:19.214500 systemd-logind[1432]: Session 15 logged out. Waiting for processes to exit. Jul 2 00:36:19.218475 systemd-logind[1432]: Removed session 15. Jul 2 00:36:20.686908 sshd[4779]: Accepted publickey for core from 172.24.4.1 port 33206 ssh2: RSA SHA256:PCQlpQPF3MQUBJB7FGXO+NVNbcy5KrkTt8QQEvm9Ado Jul 2 00:36:20.746404 sshd[4779]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:36:20.762134 systemd-logind[1432]: New session 16 of user core. Jul 2 00:36:20.767414 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 2 00:36:21.414508 sshd[4779]: pam_unix(sshd:session): session closed for user core Jul 2 00:36:21.419261 systemd[1]: sshd@13-172.24.4.39:22-172.24.4.1:33206.service: Deactivated successfully. Jul 2 00:36:21.421345 systemd[1]: session-16.scope: Deactivated successfully. Jul 2 00:36:21.422688 systemd-logind[1432]: Session 16 logged out. Waiting for processes to exit. Jul 2 00:36:21.424198 systemd-logind[1432]: Removed session 16. 
Jul 2 00:36:26.436874 systemd[1]: Started sshd@14-172.24.4.39:22-172.24.4.1:44586.service - OpenSSH per-connection server daemon (172.24.4.1:44586). Jul 2 00:36:27.666746 sshd[4824]: Accepted publickey for core from 172.24.4.1 port 44586 ssh2: RSA SHA256:PCQlpQPF3MQUBJB7FGXO+NVNbcy5KrkTt8QQEvm9Ado Jul 2 00:36:27.671891 sshd[4824]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:36:27.686228 systemd-logind[1432]: New session 17 of user core. Jul 2 00:36:27.691489 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 2 00:36:28.539801 sshd[4824]: pam_unix(sshd:session): session closed for user core Jul 2 00:36:28.551756 systemd-logind[1432]: Session 17 logged out. Waiting for processes to exit. Jul 2 00:36:28.552615 systemd[1]: sshd@14-172.24.4.39:22-172.24.4.1:44586.service: Deactivated successfully. Jul 2 00:36:28.556623 systemd[1]: session-17.scope: Deactivated successfully. Jul 2 00:36:28.561882 systemd-logind[1432]: Removed session 17. Jul 2 00:36:33.560717 systemd[1]: Started sshd@15-172.24.4.39:22-172.24.4.1:44598.service - OpenSSH per-connection server daemon (172.24.4.1:44598). Jul 2 00:36:34.899855 sshd[4851]: Accepted publickey for core from 172.24.4.1 port 44598 ssh2: RSA SHA256:PCQlpQPF3MQUBJB7FGXO+NVNbcy5KrkTt8QQEvm9Ado Jul 2 00:36:34.902973 sshd[4851]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:36:34.911344 systemd-logind[1432]: New session 18 of user core. Jul 2 00:36:34.920803 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 2 00:36:35.704675 sshd[4851]: pam_unix(sshd:session): session closed for user core Jul 2 00:36:35.714283 systemd[1]: sshd@15-172.24.4.39:22-172.24.4.1:44598.service: Deactivated successfully. Jul 2 00:36:35.718816 systemd[1]: session-18.scope: Deactivated successfully. Jul 2 00:36:35.723962 systemd-logind[1432]: Session 18 logged out. Waiting for processes to exit. 
Jul 2 00:36:35.725876 systemd-logind[1432]: Removed session 18. Jul 2 00:36:40.732702 systemd[1]: Started sshd@16-172.24.4.39:22-172.24.4.1:38418.service - OpenSSH per-connection server daemon (172.24.4.1:38418). Jul 2 00:36:42.221138 sshd[4889]: Accepted publickey for core from 172.24.4.1 port 38418 ssh2: RSA SHA256:PCQlpQPF3MQUBJB7FGXO+NVNbcy5KrkTt8QQEvm9Ado Jul 2 00:36:42.223167 sshd[4889]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:36:42.231520 systemd-logind[1432]: New session 19 of user core. Jul 2 00:36:42.237233 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 2 00:36:43.026571 sshd[4889]: pam_unix(sshd:session): session closed for user core Jul 2 00:36:43.038271 systemd[1]: sshd@16-172.24.4.39:22-172.24.4.1:38418.service: Deactivated successfully. Jul 2 00:36:43.041503 systemd[1]: session-19.scope: Deactivated successfully. Jul 2 00:36:43.046414 systemd-logind[1432]: Session 19 logged out. Waiting for processes to exit. Jul 2 00:36:43.055722 systemd[1]: Started sshd@17-172.24.4.39:22-172.24.4.1:38420.service - OpenSSH per-connection server daemon (172.24.4.1:38420). Jul 2 00:36:43.059803 systemd-logind[1432]: Removed session 19. Jul 2 00:36:43.455425 containerd[1444]: time="2024-07-02T00:36:43.455344548Z" level=info msg="StopPodSandbox for \"9282b1be0a3390b817103ba21e6911c3040d3eb0c0cbc2c471f8e7d110cfd3c6\"" Jul 2 00:36:43.615720 containerd[1444]: 2024-07-02 00:36:43.546 [WARNING][4924] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9282b1be0a3390b817103ba21e6911c3040d3eb0c0cbc2c471f8e7d110cfd3c6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--1--1--4--69569a1933.novalocal-k8s-coredns--76f75df574--wmfxs-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"4aa0f677-0725-46c4-8993-0c9903cb9cb0", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 34, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-1-1-4-69569a1933.novalocal", ContainerID:"5ed7c2236068338d00891e37f6eff965c4dc862c17ad600cbea525f6b6627486", Pod:"coredns-76f75df574-wmfxs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.36.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali932f553213e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:36:43.615720 containerd[1444]: 2024-07-02 00:36:43.546 
[INFO][4924] k8s.go 608: Cleaning up netns ContainerID="9282b1be0a3390b817103ba21e6911c3040d3eb0c0cbc2c471f8e7d110cfd3c6" Jul 2 00:36:43.615720 containerd[1444]: 2024-07-02 00:36:43.546 [INFO][4924] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="9282b1be0a3390b817103ba21e6911c3040d3eb0c0cbc2c471f8e7d110cfd3c6" iface="eth0" netns="" Jul 2 00:36:43.615720 containerd[1444]: 2024-07-02 00:36:43.546 [INFO][4924] k8s.go 615: Releasing IP address(es) ContainerID="9282b1be0a3390b817103ba21e6911c3040d3eb0c0cbc2c471f8e7d110cfd3c6" Jul 2 00:36:43.615720 containerd[1444]: 2024-07-02 00:36:43.546 [INFO][4924] utils.go 188: Calico CNI releasing IP address ContainerID="9282b1be0a3390b817103ba21e6911c3040d3eb0c0cbc2c471f8e7d110cfd3c6" Jul 2 00:36:43.615720 containerd[1444]: 2024-07-02 00:36:43.595 [INFO][4931] ipam_plugin.go 411: Releasing address using handleID ContainerID="9282b1be0a3390b817103ba21e6911c3040d3eb0c0cbc2c471f8e7d110cfd3c6" HandleID="k8s-pod-network.9282b1be0a3390b817103ba21e6911c3040d3eb0c0cbc2c471f8e7d110cfd3c6" Workload="ci--3975--1--1--4--69569a1933.novalocal-k8s-coredns--76f75df574--wmfxs-eth0" Jul 2 00:36:43.615720 containerd[1444]: 2024-07-02 00:36:43.596 [INFO][4931] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:36:43.615720 containerd[1444]: 2024-07-02 00:36:43.596 [INFO][4931] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:36:43.615720 containerd[1444]: 2024-07-02 00:36:43.609 [WARNING][4931] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9282b1be0a3390b817103ba21e6911c3040d3eb0c0cbc2c471f8e7d110cfd3c6" HandleID="k8s-pod-network.9282b1be0a3390b817103ba21e6911c3040d3eb0c0cbc2c471f8e7d110cfd3c6" Workload="ci--3975--1--1--4--69569a1933.novalocal-k8s-coredns--76f75df574--wmfxs-eth0" Jul 2 00:36:43.615720 containerd[1444]: 2024-07-02 00:36:43.609 [INFO][4931] ipam_plugin.go 439: Releasing address using workloadID ContainerID="9282b1be0a3390b817103ba21e6911c3040d3eb0c0cbc2c471f8e7d110cfd3c6" HandleID="k8s-pod-network.9282b1be0a3390b817103ba21e6911c3040d3eb0c0cbc2c471f8e7d110cfd3c6" Workload="ci--3975--1--1--4--69569a1933.novalocal-k8s-coredns--76f75df574--wmfxs-eth0" Jul 2 00:36:43.615720 containerd[1444]: 2024-07-02 00:36:43.611 [INFO][4931] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:36:43.615720 containerd[1444]: 2024-07-02 00:36:43.613 [INFO][4924] k8s.go 621: Teardown processing complete. ContainerID="9282b1be0a3390b817103ba21e6911c3040d3eb0c0cbc2c471f8e7d110cfd3c6" Jul 2 00:36:43.616632 containerd[1444]: time="2024-07-02T00:36:43.616446175Z" level=info msg="TearDown network for sandbox \"9282b1be0a3390b817103ba21e6911c3040d3eb0c0cbc2c471f8e7d110cfd3c6\" successfully" Jul 2 00:36:43.616632 containerd[1444]: time="2024-07-02T00:36:43.616493776Z" level=info msg="StopPodSandbox for \"9282b1be0a3390b817103ba21e6911c3040d3eb0c0cbc2c471f8e7d110cfd3c6\" returns successfully" Jul 2 00:36:43.621250 containerd[1444]: time="2024-07-02T00:36:43.621177103Z" level=info msg="RemovePodSandbox for \"9282b1be0a3390b817103ba21e6911c3040d3eb0c0cbc2c471f8e7d110cfd3c6\"" Jul 2 00:36:43.621250 containerd[1444]: time="2024-07-02T00:36:43.621216337Z" level=info msg="Forcibly stopping sandbox \"9282b1be0a3390b817103ba21e6911c3040d3eb0c0cbc2c471f8e7d110cfd3c6\"" Jul 2 00:36:43.728713 containerd[1444]: 2024-07-02 00:36:43.678 [WARNING][4949] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9282b1be0a3390b817103ba21e6911c3040d3eb0c0cbc2c471f8e7d110cfd3c6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--1--1--4--69569a1933.novalocal-k8s-coredns--76f75df574--wmfxs-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"4aa0f677-0725-46c4-8993-0c9903cb9cb0", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 34, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-1-1-4-69569a1933.novalocal", ContainerID:"5ed7c2236068338d00891e37f6eff965c4dc862c17ad600cbea525f6b6627486", Pod:"coredns-76f75df574-wmfxs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.36.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali932f553213e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:36:43.728713 containerd[1444]: 2024-07-02 00:36:43.678 
[INFO][4949] k8s.go 608: Cleaning up netns ContainerID="9282b1be0a3390b817103ba21e6911c3040d3eb0c0cbc2c471f8e7d110cfd3c6" Jul 2 00:36:43.728713 containerd[1444]: 2024-07-02 00:36:43.678 [INFO][4949] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="9282b1be0a3390b817103ba21e6911c3040d3eb0c0cbc2c471f8e7d110cfd3c6" iface="eth0" netns="" Jul 2 00:36:43.728713 containerd[1444]: 2024-07-02 00:36:43.678 [INFO][4949] k8s.go 615: Releasing IP address(es) ContainerID="9282b1be0a3390b817103ba21e6911c3040d3eb0c0cbc2c471f8e7d110cfd3c6" Jul 2 00:36:43.728713 containerd[1444]: 2024-07-02 00:36:43.678 [INFO][4949] utils.go 188: Calico CNI releasing IP address ContainerID="9282b1be0a3390b817103ba21e6911c3040d3eb0c0cbc2c471f8e7d110cfd3c6" Jul 2 00:36:43.728713 containerd[1444]: 2024-07-02 00:36:43.705 [INFO][4955] ipam_plugin.go 411: Releasing address using handleID ContainerID="9282b1be0a3390b817103ba21e6911c3040d3eb0c0cbc2c471f8e7d110cfd3c6" HandleID="k8s-pod-network.9282b1be0a3390b817103ba21e6911c3040d3eb0c0cbc2c471f8e7d110cfd3c6" Workload="ci--3975--1--1--4--69569a1933.novalocal-k8s-coredns--76f75df574--wmfxs-eth0" Jul 2 00:36:43.728713 containerd[1444]: 2024-07-02 00:36:43.705 [INFO][4955] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:36:43.728713 containerd[1444]: 2024-07-02 00:36:43.706 [INFO][4955] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:36:43.728713 containerd[1444]: 2024-07-02 00:36:43.718 [WARNING][4955] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9282b1be0a3390b817103ba21e6911c3040d3eb0c0cbc2c471f8e7d110cfd3c6" HandleID="k8s-pod-network.9282b1be0a3390b817103ba21e6911c3040d3eb0c0cbc2c471f8e7d110cfd3c6" Workload="ci--3975--1--1--4--69569a1933.novalocal-k8s-coredns--76f75df574--wmfxs-eth0" Jul 2 00:36:43.728713 containerd[1444]: 2024-07-02 00:36:43.719 [INFO][4955] ipam_plugin.go 439: Releasing address using workloadID ContainerID="9282b1be0a3390b817103ba21e6911c3040d3eb0c0cbc2c471f8e7d110cfd3c6" HandleID="k8s-pod-network.9282b1be0a3390b817103ba21e6911c3040d3eb0c0cbc2c471f8e7d110cfd3c6" Workload="ci--3975--1--1--4--69569a1933.novalocal-k8s-coredns--76f75df574--wmfxs-eth0" Jul 2 00:36:43.728713 containerd[1444]: 2024-07-02 00:36:43.723 [INFO][4955] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:36:43.728713 containerd[1444]: 2024-07-02 00:36:43.725 [INFO][4949] k8s.go 621: Teardown processing complete. ContainerID="9282b1be0a3390b817103ba21e6911c3040d3eb0c0cbc2c471f8e7d110cfd3c6" Jul 2 00:36:43.728713 containerd[1444]: time="2024-07-02T00:36:43.727842521Z" level=info msg="TearDown network for sandbox \"9282b1be0a3390b817103ba21e6911c3040d3eb0c0cbc2c471f8e7d110cfd3c6\" successfully" Jul 2 00:36:43.739910 containerd[1444]: time="2024-07-02T00:36:43.739838994Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9282b1be0a3390b817103ba21e6911c3040d3eb0c0cbc2c471f8e7d110cfd3c6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 2 00:36:43.740006 containerd[1444]: time="2024-07-02T00:36:43.739974671Z" level=info msg="RemovePodSandbox \"9282b1be0a3390b817103ba21e6911c3040d3eb0c0cbc2c471f8e7d110cfd3c6\" returns successfully"
Jul 2 00:36:43.740832 containerd[1444]: time="2024-07-02T00:36:43.740791985Z" level=info msg="StopPodSandbox for \"1a0b01236b28779688961a6e5a2e4fd44ee4e181d93e344124e3c88260cd168d\""
Jul 2 00:36:43.836615 containerd[1444]: 2024-07-02 00:36:43.785 [WARNING][4974] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1a0b01236b28779688961a6e5a2e4fd44ee4e181d93e344124e3c88260cd168d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--1--1--4--69569a1933.novalocal-k8s-coredns--76f75df574--q8tqz-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"f76e7cb0-2dbd-4d82-8219-6278834c7267", ResourceVersion:"787", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 34, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-1-1-4-69569a1933.novalocal", ContainerID:"7f8ea84f3fb9099b7a2d79ae07206ae1857f9e84bb6162883874c109b94d0e63", Pod:"coredns-76f75df574-q8tqz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.36.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9f001cf372d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jul 2 00:36:43.836615 containerd[1444]: 2024-07-02 00:36:43.785 [INFO][4974] k8s.go 608: Cleaning up netns ContainerID="1a0b01236b28779688961a6e5a2e4fd44ee4e181d93e344124e3c88260cd168d"
Jul 2 00:36:43.836615 containerd[1444]: 2024-07-02 00:36:43.785 [INFO][4974] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="1a0b01236b28779688961a6e5a2e4fd44ee4e181d93e344124e3c88260cd168d" iface="eth0" netns=""
Jul 2 00:36:43.836615 containerd[1444]: 2024-07-02 00:36:43.785 [INFO][4974] k8s.go 615: Releasing IP address(es) ContainerID="1a0b01236b28779688961a6e5a2e4fd44ee4e181d93e344124e3c88260cd168d"
Jul 2 00:36:43.836615 containerd[1444]: 2024-07-02 00:36:43.785 [INFO][4974] utils.go 188: Calico CNI releasing IP address ContainerID="1a0b01236b28779688961a6e5a2e4fd44ee4e181d93e344124e3c88260cd168d"
Jul 2 00:36:43.836615 containerd[1444]: 2024-07-02 00:36:43.813 [INFO][4980] ipam_plugin.go 411: Releasing address using handleID ContainerID="1a0b01236b28779688961a6e5a2e4fd44ee4e181d93e344124e3c88260cd168d" HandleID="k8s-pod-network.1a0b01236b28779688961a6e5a2e4fd44ee4e181d93e344124e3c88260cd168d" Workload="ci--3975--1--1--4--69569a1933.novalocal-k8s-coredns--76f75df574--q8tqz-eth0"
Jul 2 00:36:43.836615 containerd[1444]: 2024-07-02 00:36:43.814 [INFO][4980] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Jul 2 00:36:43.836615 containerd[1444]: 2024-07-02 00:36:43.814 [INFO][4980] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Jul 2 00:36:43.836615 containerd[1444]: 2024-07-02 00:36:43.827 [WARNING][4980] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="1a0b01236b28779688961a6e5a2e4fd44ee4e181d93e344124e3c88260cd168d" HandleID="k8s-pod-network.1a0b01236b28779688961a6e5a2e4fd44ee4e181d93e344124e3c88260cd168d" Workload="ci--3975--1--1--4--69569a1933.novalocal-k8s-coredns--76f75df574--q8tqz-eth0"
Jul 2 00:36:43.836615 containerd[1444]: 2024-07-02 00:36:43.827 [INFO][4980] ipam_plugin.go 439: Releasing address using workloadID ContainerID="1a0b01236b28779688961a6e5a2e4fd44ee4e181d93e344124e3c88260cd168d" HandleID="k8s-pod-network.1a0b01236b28779688961a6e5a2e4fd44ee4e181d93e344124e3c88260cd168d" Workload="ci--3975--1--1--4--69569a1933.novalocal-k8s-coredns--76f75df574--q8tqz-eth0"
Jul 2 00:36:43.836615 containerd[1444]: 2024-07-02 00:36:43.832 [INFO][4980] ipam_plugin.go 373: Released host-wide IPAM lock.
Jul 2 00:36:43.836615 containerd[1444]: 2024-07-02 00:36:43.834 [INFO][4974] k8s.go 621: Teardown processing complete. ContainerID="1a0b01236b28779688961a6e5a2e4fd44ee4e181d93e344124e3c88260cd168d"
Jul 2 00:36:43.837311 containerd[1444]: time="2024-07-02T00:36:43.836664974Z" level=info msg="TearDown network for sandbox \"1a0b01236b28779688961a6e5a2e4fd44ee4e181d93e344124e3c88260cd168d\" successfully"
Jul 2 00:36:43.837311 containerd[1444]: time="2024-07-02T00:36:43.836709368Z" level=info msg="StopPodSandbox for \"1a0b01236b28779688961a6e5a2e4fd44ee4e181d93e344124e3c88260cd168d\" returns successfully"
Jul 2 00:36:43.838159 containerd[1444]: time="2024-07-02T00:36:43.838032377Z" level=info msg="RemovePodSandbox for \"1a0b01236b28779688961a6e5a2e4fd44ee4e181d93e344124e3c88260cd168d\""
Jul 2 00:36:43.838636 containerd[1444]: time="2024-07-02T00:36:43.838591604Z" level=info msg="Forcibly stopping sandbox \"1a0b01236b28779688961a6e5a2e4fd44ee4e181d93e344124e3c88260cd168d\""
Jul 2 00:36:43.935416 containerd[1444]: 2024-07-02 00:36:43.892 [WARNING][4998] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1a0b01236b28779688961a6e5a2e4fd44ee4e181d93e344124e3c88260cd168d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--1--1--4--69569a1933.novalocal-k8s-coredns--76f75df574--q8tqz-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"f76e7cb0-2dbd-4d82-8219-6278834c7267", ResourceVersion:"787", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 34, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-1-1-4-69569a1933.novalocal", ContainerID:"7f8ea84f3fb9099b7a2d79ae07206ae1857f9e84bb6162883874c109b94d0e63", Pod:"coredns-76f75df574-q8tqz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.36.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9f001cf372d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jul 2 00:36:43.935416 containerd[1444]: 2024-07-02 00:36:43.892 [INFO][4998] k8s.go 608: Cleaning up netns ContainerID="1a0b01236b28779688961a6e5a2e4fd44ee4e181d93e344124e3c88260cd168d"
Jul 2 00:36:43.935416 containerd[1444]: 2024-07-02 00:36:43.892 [INFO][4998] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="1a0b01236b28779688961a6e5a2e4fd44ee4e181d93e344124e3c88260cd168d" iface="eth0" netns=""
Jul 2 00:36:43.935416 containerd[1444]: 2024-07-02 00:36:43.892 [INFO][4998] k8s.go 615: Releasing IP address(es) ContainerID="1a0b01236b28779688961a6e5a2e4fd44ee4e181d93e344124e3c88260cd168d"
Jul 2 00:36:43.935416 containerd[1444]: 2024-07-02 00:36:43.892 [INFO][4998] utils.go 188: Calico CNI releasing IP address ContainerID="1a0b01236b28779688961a6e5a2e4fd44ee4e181d93e344124e3c88260cd168d"
Jul 2 00:36:43.935416 containerd[1444]: 2024-07-02 00:36:43.921 [INFO][5004] ipam_plugin.go 411: Releasing address using handleID ContainerID="1a0b01236b28779688961a6e5a2e4fd44ee4e181d93e344124e3c88260cd168d" HandleID="k8s-pod-network.1a0b01236b28779688961a6e5a2e4fd44ee4e181d93e344124e3c88260cd168d" Workload="ci--3975--1--1--4--69569a1933.novalocal-k8s-coredns--76f75df574--q8tqz-eth0"
Jul 2 00:36:43.935416 containerd[1444]: 2024-07-02 00:36:43.921 [INFO][5004] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Jul 2 00:36:43.935416 containerd[1444]: 2024-07-02 00:36:43.921 [INFO][5004] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Jul 2 00:36:43.935416 containerd[1444]: 2024-07-02 00:36:43.929 [WARNING][5004] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="1a0b01236b28779688961a6e5a2e4fd44ee4e181d93e344124e3c88260cd168d" HandleID="k8s-pod-network.1a0b01236b28779688961a6e5a2e4fd44ee4e181d93e344124e3c88260cd168d" Workload="ci--3975--1--1--4--69569a1933.novalocal-k8s-coredns--76f75df574--q8tqz-eth0"
Jul 2 00:36:43.935416 containerd[1444]: 2024-07-02 00:36:43.929 [INFO][5004] ipam_plugin.go 439: Releasing address using workloadID ContainerID="1a0b01236b28779688961a6e5a2e4fd44ee4e181d93e344124e3c88260cd168d" HandleID="k8s-pod-network.1a0b01236b28779688961a6e5a2e4fd44ee4e181d93e344124e3c88260cd168d" Workload="ci--3975--1--1--4--69569a1933.novalocal-k8s-coredns--76f75df574--q8tqz-eth0"
Jul 2 00:36:43.935416 containerd[1444]: 2024-07-02 00:36:43.931 [INFO][5004] ipam_plugin.go 373: Released host-wide IPAM lock.
Jul 2 00:36:43.935416 containerd[1444]: 2024-07-02 00:36:43.933 [INFO][4998] k8s.go 621: Teardown processing complete. ContainerID="1a0b01236b28779688961a6e5a2e4fd44ee4e181d93e344124e3c88260cd168d"
Jul 2 00:36:43.936110 containerd[1444]: time="2024-07-02T00:36:43.935499509Z" level=info msg="TearDown network for sandbox \"1a0b01236b28779688961a6e5a2e4fd44ee4e181d93e344124e3c88260cd168d\" successfully"
Jul 2 00:36:43.947864 containerd[1444]: time="2024-07-02T00:36:43.947801899Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1a0b01236b28779688961a6e5a2e4fd44ee4e181d93e344124e3c88260cd168d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jul 2 00:36:43.948021 containerd[1444]: time="2024-07-02T00:36:43.947969116Z" level=info msg="RemovePodSandbox \"1a0b01236b28779688961a6e5a2e4fd44ee4e181d93e344124e3c88260cd168d\" returns successfully"
Jul 2 00:36:43.948913 containerd[1444]: time="2024-07-02T00:36:43.948874726Z" level=info msg="StopPodSandbox for \"f91170ed8c902e281c58992e0605f57f6a6f088fb13f4b3dd2c20e2a999f8215\""
Jul 2 00:36:44.031336 containerd[1444]: 2024-07-02 00:36:43.992 [WARNING][5022] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f91170ed8c902e281c58992e0605f57f6a6f088fb13f4b3dd2c20e2a999f8215" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--1--1--4--69569a1933.novalocal-k8s-calico--kube--controllers--7548bf8497--6bsv7-eth0", GenerateName:"calico-kube-controllers-7548bf8497-", Namespace:"calico-system", SelfLink:"", UID:"a34c27ef-25a3-4bac-90f8-f587a5b80a52", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 35, 4, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7548bf8497", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-1-1-4-69569a1933.novalocal", ContainerID:"ef396f47aaf0efca7e5224f227141d4c137af02a8cef6e1a580d3bf6e871e9e9", Pod:"calico-kube-controllers-7548bf8497-6bsv7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.36.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali73806a61e03", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jul 2 00:36:44.031336 containerd[1444]: 2024-07-02 00:36:43.992 [INFO][5022] k8s.go 608: Cleaning up netns ContainerID="f91170ed8c902e281c58992e0605f57f6a6f088fb13f4b3dd2c20e2a999f8215"
Jul 2 00:36:44.031336 containerd[1444]: 2024-07-02 00:36:43.992 [INFO][5022] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="f91170ed8c902e281c58992e0605f57f6a6f088fb13f4b3dd2c20e2a999f8215" iface="eth0" netns=""
Jul 2 00:36:44.031336 containerd[1444]: 2024-07-02 00:36:43.992 [INFO][5022] k8s.go 615: Releasing IP address(es) ContainerID="f91170ed8c902e281c58992e0605f57f6a6f088fb13f4b3dd2c20e2a999f8215"
Jul 2 00:36:44.031336 containerd[1444]: 2024-07-02 00:36:43.992 [INFO][5022] utils.go 188: Calico CNI releasing IP address ContainerID="f91170ed8c902e281c58992e0605f57f6a6f088fb13f4b3dd2c20e2a999f8215"
Jul 2 00:36:44.031336 containerd[1444]: 2024-07-02 00:36:44.017 [INFO][5028] ipam_plugin.go 411: Releasing address using handleID ContainerID="f91170ed8c902e281c58992e0605f57f6a6f088fb13f4b3dd2c20e2a999f8215" HandleID="k8s-pod-network.f91170ed8c902e281c58992e0605f57f6a6f088fb13f4b3dd2c20e2a999f8215" Workload="ci--3975--1--1--4--69569a1933.novalocal-k8s-calico--kube--controllers--7548bf8497--6bsv7-eth0"
Jul 2 00:36:44.031336 containerd[1444]: 2024-07-02 00:36:44.017 [INFO][5028] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Jul 2 00:36:44.031336 containerd[1444]: 2024-07-02 00:36:44.017 [INFO][5028] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Jul 2 00:36:44.031336 containerd[1444]: 2024-07-02 00:36:44.025 [WARNING][5028] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="f91170ed8c902e281c58992e0605f57f6a6f088fb13f4b3dd2c20e2a999f8215" HandleID="k8s-pod-network.f91170ed8c902e281c58992e0605f57f6a6f088fb13f4b3dd2c20e2a999f8215" Workload="ci--3975--1--1--4--69569a1933.novalocal-k8s-calico--kube--controllers--7548bf8497--6bsv7-eth0"
Jul 2 00:36:44.031336 containerd[1444]: 2024-07-02 00:36:44.025 [INFO][5028] ipam_plugin.go 439: Releasing address using workloadID ContainerID="f91170ed8c902e281c58992e0605f57f6a6f088fb13f4b3dd2c20e2a999f8215" HandleID="k8s-pod-network.f91170ed8c902e281c58992e0605f57f6a6f088fb13f4b3dd2c20e2a999f8215" Workload="ci--3975--1--1--4--69569a1933.novalocal-k8s-calico--kube--controllers--7548bf8497--6bsv7-eth0"
Jul 2 00:36:44.031336 containerd[1444]: 2024-07-02 00:36:44.027 [INFO][5028] ipam_plugin.go 373: Released host-wide IPAM lock.
Jul 2 00:36:44.031336 containerd[1444]: 2024-07-02 00:36:44.029 [INFO][5022] k8s.go 621: Teardown processing complete. ContainerID="f91170ed8c902e281c58992e0605f57f6a6f088fb13f4b3dd2c20e2a999f8215"
Jul 2 00:36:44.031336 containerd[1444]: time="2024-07-02T00:36:44.030993388Z" level=info msg="TearDown network for sandbox \"f91170ed8c902e281c58992e0605f57f6a6f088fb13f4b3dd2c20e2a999f8215\" successfully"
Jul 2 00:36:44.031336 containerd[1444]: time="2024-07-02T00:36:44.031024436Z" level=info msg="StopPodSandbox for \"f91170ed8c902e281c58992e0605f57f6a6f088fb13f4b3dd2c20e2a999f8215\" returns successfully"
Jul 2 00:36:44.033882 containerd[1444]: time="2024-07-02T00:36:44.033536143Z" level=info msg="RemovePodSandbox for \"f91170ed8c902e281c58992e0605f57f6a6f088fb13f4b3dd2c20e2a999f8215\""
Jul 2 00:36:44.033882 containerd[1444]: time="2024-07-02T00:36:44.033569707Z" level=info msg="Forcibly stopping sandbox \"f91170ed8c902e281c58992e0605f57f6a6f088fb13f4b3dd2c20e2a999f8215\""
Jul 2 00:36:44.132160 containerd[1444]: 2024-07-02 00:36:44.082 [WARNING][5046] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f91170ed8c902e281c58992e0605f57f6a6f088fb13f4b3dd2c20e2a999f8215" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--1--1--4--69569a1933.novalocal-k8s-calico--kube--controllers--7548bf8497--6bsv7-eth0", GenerateName:"calico-kube-controllers-7548bf8497-", Namespace:"calico-system", SelfLink:"", UID:"a34c27ef-25a3-4bac-90f8-f587a5b80a52", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 35, 4, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7548bf8497", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-1-1-4-69569a1933.novalocal", ContainerID:"ef396f47aaf0efca7e5224f227141d4c137af02a8cef6e1a580d3bf6e871e9e9", Pod:"calico-kube-controllers-7548bf8497-6bsv7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.36.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali73806a61e03", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jul 2 00:36:44.132160 containerd[1444]: 2024-07-02 00:36:44.083 [INFO][5046] k8s.go 608: Cleaning up netns ContainerID="f91170ed8c902e281c58992e0605f57f6a6f088fb13f4b3dd2c20e2a999f8215"
Jul 2 00:36:44.132160 containerd[1444]: 2024-07-02 00:36:44.084 [INFO][5046] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="f91170ed8c902e281c58992e0605f57f6a6f088fb13f4b3dd2c20e2a999f8215" iface="eth0" netns=""
Jul 2 00:36:44.132160 containerd[1444]: 2024-07-02 00:36:44.085 [INFO][5046] k8s.go 615: Releasing IP address(es) ContainerID="f91170ed8c902e281c58992e0605f57f6a6f088fb13f4b3dd2c20e2a999f8215"
Jul 2 00:36:44.132160 containerd[1444]: 2024-07-02 00:36:44.085 [INFO][5046] utils.go 188: Calico CNI releasing IP address ContainerID="f91170ed8c902e281c58992e0605f57f6a6f088fb13f4b3dd2c20e2a999f8215"
Jul 2 00:36:44.132160 containerd[1444]: 2024-07-02 00:36:44.117 [INFO][5052] ipam_plugin.go 411: Releasing address using handleID ContainerID="f91170ed8c902e281c58992e0605f57f6a6f088fb13f4b3dd2c20e2a999f8215" HandleID="k8s-pod-network.f91170ed8c902e281c58992e0605f57f6a6f088fb13f4b3dd2c20e2a999f8215" Workload="ci--3975--1--1--4--69569a1933.novalocal-k8s-calico--kube--controllers--7548bf8497--6bsv7-eth0"
Jul 2 00:36:44.132160 containerd[1444]: 2024-07-02 00:36:44.117 [INFO][5052] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Jul 2 00:36:44.132160 containerd[1444]: 2024-07-02 00:36:44.117 [INFO][5052] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Jul 2 00:36:44.132160 containerd[1444]: 2024-07-02 00:36:44.126 [WARNING][5052] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="f91170ed8c902e281c58992e0605f57f6a6f088fb13f4b3dd2c20e2a999f8215" HandleID="k8s-pod-network.f91170ed8c902e281c58992e0605f57f6a6f088fb13f4b3dd2c20e2a999f8215" Workload="ci--3975--1--1--4--69569a1933.novalocal-k8s-calico--kube--controllers--7548bf8497--6bsv7-eth0"
Jul 2 00:36:44.132160 containerd[1444]: 2024-07-02 00:36:44.126 [INFO][5052] ipam_plugin.go 439: Releasing address using workloadID ContainerID="f91170ed8c902e281c58992e0605f57f6a6f088fb13f4b3dd2c20e2a999f8215" HandleID="k8s-pod-network.f91170ed8c902e281c58992e0605f57f6a6f088fb13f4b3dd2c20e2a999f8215" Workload="ci--3975--1--1--4--69569a1933.novalocal-k8s-calico--kube--controllers--7548bf8497--6bsv7-eth0"
Jul 2 00:36:44.132160 containerd[1444]: 2024-07-02 00:36:44.128 [INFO][5052] ipam_plugin.go 373: Released host-wide IPAM lock.
Jul 2 00:36:44.132160 containerd[1444]: 2024-07-02 00:36:44.130 [INFO][5046] k8s.go 621: Teardown processing complete. ContainerID="f91170ed8c902e281c58992e0605f57f6a6f088fb13f4b3dd2c20e2a999f8215"
Jul 2 00:36:44.132620 containerd[1444]: time="2024-07-02T00:36:44.132217529Z" level=info msg="TearDown network for sandbox \"f91170ed8c902e281c58992e0605f57f6a6f088fb13f4b3dd2c20e2a999f8215\" successfully"
Jul 2 00:36:44.135687 containerd[1444]: time="2024-07-02T00:36:44.135631742Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f91170ed8c902e281c58992e0605f57f6a6f088fb13f4b3dd2c20e2a999f8215\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jul 2 00:36:44.135770 containerd[1444]: time="2024-07-02T00:36:44.135700452Z" level=info msg="RemovePodSandbox \"f91170ed8c902e281c58992e0605f57f6a6f088fb13f4b3dd2c20e2a999f8215\" returns successfully"
Jul 2 00:36:44.136404 containerd[1444]: time="2024-07-02T00:36:44.136359097Z" level=info msg="StopPodSandbox for \"b798bd5b0c7f3a6d3482321246f594534f1e455e2ba95619bc9840495362877c\""
Jul 2 00:36:44.225238 containerd[1444]: 2024-07-02 00:36:44.179 [WARNING][5070] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b798bd5b0c7f3a6d3482321246f594534f1e455e2ba95619bc9840495362877c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--1--1--4--69569a1933.novalocal-k8s-csi--node--driver--9ksc4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f825e197-24d6-43c1-8001-acbd6a4ca977", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 35, 4, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-1-1-4-69569a1933.novalocal", ContainerID:"d6a6c44db44eecc43ded1cbc7c36325205f99310c206c8caab1bfda17b68b11a", Pod:"csi-node-driver-9ksc4", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.36.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali1931376c1a8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jul 2 00:36:44.225238 containerd[1444]: 2024-07-02 00:36:44.179 [INFO][5070] k8s.go 608: Cleaning up netns ContainerID="b798bd5b0c7f3a6d3482321246f594534f1e455e2ba95619bc9840495362877c"
Jul 2 00:36:44.225238 containerd[1444]: 2024-07-02 00:36:44.179 [INFO][5070] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="b798bd5b0c7f3a6d3482321246f594534f1e455e2ba95619bc9840495362877c" iface="eth0" netns=""
Jul 2 00:36:44.225238 containerd[1444]: 2024-07-02 00:36:44.179 [INFO][5070] k8s.go 615: Releasing IP address(es) ContainerID="b798bd5b0c7f3a6d3482321246f594534f1e455e2ba95619bc9840495362877c"
Jul 2 00:36:44.225238 containerd[1444]: 2024-07-02 00:36:44.179 [INFO][5070] utils.go 188: Calico CNI releasing IP address ContainerID="b798bd5b0c7f3a6d3482321246f594534f1e455e2ba95619bc9840495362877c"
Jul 2 00:36:44.225238 containerd[1444]: 2024-07-02 00:36:44.204 [INFO][5076] ipam_plugin.go 411: Releasing address using handleID ContainerID="b798bd5b0c7f3a6d3482321246f594534f1e455e2ba95619bc9840495362877c" HandleID="k8s-pod-network.b798bd5b0c7f3a6d3482321246f594534f1e455e2ba95619bc9840495362877c" Workload="ci--3975--1--1--4--69569a1933.novalocal-k8s-csi--node--driver--9ksc4-eth0"
Jul 2 00:36:44.225238 containerd[1444]: 2024-07-02 00:36:44.205 [INFO][5076] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Jul 2 00:36:44.225238 containerd[1444]: 2024-07-02 00:36:44.205 [INFO][5076] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Jul 2 00:36:44.225238 containerd[1444]: 2024-07-02 00:36:44.216 [WARNING][5076] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="b798bd5b0c7f3a6d3482321246f594534f1e455e2ba95619bc9840495362877c" HandleID="k8s-pod-network.b798bd5b0c7f3a6d3482321246f594534f1e455e2ba95619bc9840495362877c" Workload="ci--3975--1--1--4--69569a1933.novalocal-k8s-csi--node--driver--9ksc4-eth0"
Jul 2 00:36:44.225238 containerd[1444]: 2024-07-02 00:36:44.216 [INFO][5076] ipam_plugin.go 439: Releasing address using workloadID ContainerID="b798bd5b0c7f3a6d3482321246f594534f1e455e2ba95619bc9840495362877c" HandleID="k8s-pod-network.b798bd5b0c7f3a6d3482321246f594534f1e455e2ba95619bc9840495362877c" Workload="ci--3975--1--1--4--69569a1933.novalocal-k8s-csi--node--driver--9ksc4-eth0"
Jul 2 00:36:44.225238 containerd[1444]: 2024-07-02 00:36:44.221 [INFO][5076] ipam_plugin.go 373: Released host-wide IPAM lock.
Jul 2 00:36:44.225238 containerd[1444]: 2024-07-02 00:36:44.222 [INFO][5070] k8s.go 621: Teardown processing complete. ContainerID="b798bd5b0c7f3a6d3482321246f594534f1e455e2ba95619bc9840495362877c"
Jul 2 00:36:44.226928 containerd[1444]: time="2024-07-02T00:36:44.225310383Z" level=info msg="TearDown network for sandbox \"b798bd5b0c7f3a6d3482321246f594534f1e455e2ba95619bc9840495362877c\" successfully"
Jul 2 00:36:44.226928 containerd[1444]: time="2024-07-02T00:36:44.225365097Z" level=info msg="StopPodSandbox for \"b798bd5b0c7f3a6d3482321246f594534f1e455e2ba95619bc9840495362877c\" returns successfully"
Jul 2 00:36:44.226928 containerd[1444]: time="2024-07-02T00:36:44.226538464Z" level=info msg="RemovePodSandbox for \"b798bd5b0c7f3a6d3482321246f594534f1e455e2ba95619bc9840495362877c\""
Jul 2 00:36:44.226928 containerd[1444]: time="2024-07-02T00:36:44.226576596Z" level=info msg="Forcibly stopping sandbox \"b798bd5b0c7f3a6d3482321246f594534f1e455e2ba95619bc9840495362877c\""
Jul 2 00:36:44.355697 containerd[1444]: 2024-07-02 00:36:44.297 [WARNING][5094] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b798bd5b0c7f3a6d3482321246f594534f1e455e2ba95619bc9840495362877c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--1--1--4--69569a1933.novalocal-k8s-csi--node--driver--9ksc4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f825e197-24d6-43c1-8001-acbd6a4ca977", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 35, 4, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-1-1-4-69569a1933.novalocal", ContainerID:"d6a6c44db44eecc43ded1cbc7c36325205f99310c206c8caab1bfda17b68b11a", Pod:"csi-node-driver-9ksc4", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.36.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali1931376c1a8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jul 2 00:36:44.355697 containerd[1444]: 2024-07-02 00:36:44.297 [INFO][5094] k8s.go 608: Cleaning up netns ContainerID="b798bd5b0c7f3a6d3482321246f594534f1e455e2ba95619bc9840495362877c"
Jul 2 00:36:44.355697 containerd[1444]: 2024-07-02 00:36:44.297 [INFO][5094] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="b798bd5b0c7f3a6d3482321246f594534f1e455e2ba95619bc9840495362877c" iface="eth0" netns=""
Jul 2 00:36:44.355697 containerd[1444]: 2024-07-02 00:36:44.297 [INFO][5094] k8s.go 615: Releasing IP address(es) ContainerID="b798bd5b0c7f3a6d3482321246f594534f1e455e2ba95619bc9840495362877c"
Jul 2 00:36:44.355697 containerd[1444]: 2024-07-02 00:36:44.297 [INFO][5094] utils.go 188: Calico CNI releasing IP address ContainerID="b798bd5b0c7f3a6d3482321246f594534f1e455e2ba95619bc9840495362877c"
Jul 2 00:36:44.355697 containerd[1444]: 2024-07-02 00:36:44.332 [INFO][5100] ipam_plugin.go 411: Releasing address using handleID ContainerID="b798bd5b0c7f3a6d3482321246f594534f1e455e2ba95619bc9840495362877c" HandleID="k8s-pod-network.b798bd5b0c7f3a6d3482321246f594534f1e455e2ba95619bc9840495362877c" Workload="ci--3975--1--1--4--69569a1933.novalocal-k8s-csi--node--driver--9ksc4-eth0"
Jul 2 00:36:44.355697 containerd[1444]: 2024-07-02 00:36:44.332 [INFO][5100] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Jul 2 00:36:44.355697 containerd[1444]: 2024-07-02 00:36:44.332 [INFO][5100] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Jul 2 00:36:44.355697 containerd[1444]: 2024-07-02 00:36:44.347 [WARNING][5100] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="b798bd5b0c7f3a6d3482321246f594534f1e455e2ba95619bc9840495362877c" HandleID="k8s-pod-network.b798bd5b0c7f3a6d3482321246f594534f1e455e2ba95619bc9840495362877c" Workload="ci--3975--1--1--4--69569a1933.novalocal-k8s-csi--node--driver--9ksc4-eth0"
Jul 2 00:36:44.355697 containerd[1444]: 2024-07-02 00:36:44.347 [INFO][5100] ipam_plugin.go 439: Releasing address using workloadID ContainerID="b798bd5b0c7f3a6d3482321246f594534f1e455e2ba95619bc9840495362877c" HandleID="k8s-pod-network.b798bd5b0c7f3a6d3482321246f594534f1e455e2ba95619bc9840495362877c" Workload="ci--3975--1--1--4--69569a1933.novalocal-k8s-csi--node--driver--9ksc4-eth0"
Jul 2 00:36:44.355697 containerd[1444]: 2024-07-02 00:36:44.350 [INFO][5100] ipam_plugin.go 373: Released host-wide IPAM lock.
Jul 2 00:36:44.355697 containerd[1444]: 2024-07-02 00:36:44.353 [INFO][5094] k8s.go 621: Teardown processing complete. ContainerID="b798bd5b0c7f3a6d3482321246f594534f1e455e2ba95619bc9840495362877c"
Jul 2 00:36:44.356717 containerd[1444]: time="2024-07-02T00:36:44.355788083Z" level=info msg="TearDown network for sandbox \"b798bd5b0c7f3a6d3482321246f594534f1e455e2ba95619bc9840495362877c\" successfully"
Jul 2 00:36:44.361955 containerd[1444]: time="2024-07-02T00:36:44.361896458Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b798bd5b0c7f3a6d3482321246f594534f1e455e2ba95619bc9840495362877c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jul 2 00:36:44.362307 containerd[1444]: time="2024-07-02T00:36:44.361972001Z" level=info msg="RemovePodSandbox \"b798bd5b0c7f3a6d3482321246f594534f1e455e2ba95619bc9840495362877c\" returns successfully"
Jul 2 00:36:44.406578 sshd[4903]: Accepted publickey for core from 172.24.4.1 port 38420 ssh2: RSA SHA256:PCQlpQPF3MQUBJB7FGXO+NVNbcy5KrkTt8QQEvm9Ado
Jul 2 00:36:44.413271 sshd[4903]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:36:44.422971 systemd-logind[1432]: New session 20 of user core.
Jul 2 00:36:44.428239 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 2 00:36:46.419119 sshd[4903]: pam_unix(sshd:session): session closed for user core
Jul 2 00:36:46.429335 systemd[1]: sshd@17-172.24.4.39:22-172.24.4.1:38420.service: Deactivated successfully.
Jul 2 00:36:46.433856 systemd[1]: session-20.scope: Deactivated successfully.
Jul 2 00:36:46.436283 systemd-logind[1432]: Session 20 logged out. Waiting for processes to exit.
Jul 2 00:36:46.446771 systemd[1]: Started sshd@18-172.24.4.39:22-172.24.4.1:50918.service - OpenSSH per-connection server daemon (172.24.4.1:50918).
Jul 2 00:36:46.450643 systemd-logind[1432]: Removed session 20.
Jul 2 00:36:48.144171 sshd[5116]: Accepted publickey for core from 172.24.4.1 port 50918 ssh2: RSA SHA256:PCQlpQPF3MQUBJB7FGXO+NVNbcy5KrkTt8QQEvm9Ado
Jul 2 00:36:48.146741 sshd[5116]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:36:48.156892 systemd-logind[1432]: New session 21 of user core.
Jul 2 00:36:48.167562 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 2 00:36:51.543416 sshd[5116]: pam_unix(sshd:session): session closed for user core
Jul 2 00:36:51.559850 systemd[1]: Started sshd@19-172.24.4.39:22-172.24.4.1:50922.service - OpenSSH per-connection server daemon (172.24.4.1:50922).
Jul 2 00:36:51.564274 systemd[1]: sshd@18-172.24.4.39:22-172.24.4.1:50918.service: Deactivated successfully.
Jul 2 00:36:51.568744 systemd[1]: session-21.scope: Deactivated successfully.
Jul 2 00:36:51.575492 systemd-logind[1432]: Session 21 logged out. Waiting for processes to exit.
Jul 2 00:36:51.584584 systemd-logind[1432]: Removed session 21.
Jul 2 00:36:52.755271 sshd[5150]: Accepted publickey for core from 172.24.4.1 port 50922 ssh2: RSA SHA256:PCQlpQPF3MQUBJB7FGXO+NVNbcy5KrkTt8QQEvm9Ado
Jul 2 00:36:52.761967 sshd[5150]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:36:52.773003 systemd-logind[1432]: New session 22 of user core.
Jul 2 00:36:52.786390 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 2 00:36:54.915700 sshd[5150]: pam_unix(sshd:session): session closed for user core
Jul 2 00:36:54.925758 systemd[1]: sshd@19-172.24.4.39:22-172.24.4.1:50922.service: Deactivated successfully.
Jul 2 00:36:54.928951 systemd[1]: session-22.scope: Deactivated successfully.
Jul 2 00:36:54.933541 systemd-logind[1432]: Session 22 logged out. Waiting for processes to exit.
Jul 2 00:36:54.945347 systemd[1]: Started sshd@20-172.24.4.39:22-172.24.4.1:52764.service - OpenSSH per-connection server daemon (172.24.4.1:52764).
Jul 2 00:36:54.949635 systemd-logind[1432]: Removed session 22.
Jul 2 00:36:55.083338 kubelet[2632]: I0702 00:36:55.083021 2632 topology_manager.go:215] "Topology Admit Handler" podUID="f8124974-cd7d-47e6-8ffd-d782f638c86d" podNamespace="calico-apiserver" podName="calico-apiserver-765bd464f6-g2vrf"
Jul 2 00:36:55.103007 systemd[1]: Created slice kubepods-besteffort-podf8124974_cd7d_47e6_8ffd_d782f638c86d.slice - libcontainer container kubepods-besteffort-podf8124974_cd7d_47e6_8ffd_d782f638c86d.slice.
Jul 2 00:36:55.157139 kubelet[2632]: I0702 00:36:55.157086 2632 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f8124974-cd7d-47e6-8ffd-d782f638c86d-calico-apiserver-certs\") pod \"calico-apiserver-765bd464f6-g2vrf\" (UID: \"f8124974-cd7d-47e6-8ffd-d782f638c86d\") " pod="calico-apiserver/calico-apiserver-765bd464f6-g2vrf"
Jul 2 00:36:55.157139 kubelet[2632]: I0702 00:36:55.157144 2632 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mst7h\" (UniqueName: \"kubernetes.io/projected/f8124974-cd7d-47e6-8ffd-d782f638c86d-kube-api-access-mst7h\") pod \"calico-apiserver-765bd464f6-g2vrf\" (UID: \"f8124974-cd7d-47e6-8ffd-d782f638c86d\") " pod="calico-apiserver/calico-apiserver-765bd464f6-g2vrf"
Jul 2 00:36:55.261446 kubelet[2632]: E0702 00:36:55.261272 2632 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found
Jul 2 00:36:55.309085 kubelet[2632]: E0702 00:36:55.308665 2632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f8124974-cd7d-47e6-8ffd-d782f638c86d-calico-apiserver-certs podName:f8124974-cd7d-47e6-8ffd-d782f638c86d nodeName:}" failed. No retries permitted until 2024-07-02 00:36:55.768097564 +0000 UTC m=+132.651341521 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/f8124974-cd7d-47e6-8ffd-d782f638c86d-calico-apiserver-certs") pod "calico-apiserver-765bd464f6-g2vrf" (UID: "f8124974-cd7d-47e6-8ffd-d782f638c86d") : secret "calico-apiserver-certs" not found
Jul 2 00:36:56.011535 containerd[1444]: time="2024-07-02T00:36:56.011350740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-765bd464f6-g2vrf,Uid:f8124974-cd7d-47e6-8ffd-d782f638c86d,Namespace:calico-apiserver,Attempt:0,}"
Jul 2 00:36:56.089296 sshd[5168]: Accepted publickey for core from 172.24.4.1 port 52764 ssh2: RSA SHA256:PCQlpQPF3MQUBJB7FGXO+NVNbcy5KrkTt8QQEvm9Ado
Jul 2 00:36:56.095097 sshd[5168]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:36:56.104498 systemd-logind[1432]: New session 23 of user core.
Jul 2 00:36:56.112439 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 2 00:36:56.380221 systemd-networkd[1365]: calif9636d22a99: Link UP
Jul 2 00:36:56.382250 systemd-networkd[1365]: calif9636d22a99: Gained carrier
Jul 2 00:36:56.408921 containerd[1444]: 2024-07-02 00:36:56.171 [INFO][5176] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975--1--1--4--69569a1933.novalocal-k8s-calico--apiserver--765bd464f6--g2vrf-eth0 calico-apiserver-765bd464f6- calico-apiserver f8124974-cd7d-47e6-8ffd-d782f638c86d 1177 0 2024-07-02 00:36:55 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:765bd464f6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3975-1-1-4-69569a1933.novalocal calico-apiserver-765bd464f6-g2vrf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif9636d22a99 [] []}} ContainerID="a328b836488c599616eb056f87c2326077b350370002125f9bbbaf573bef4c72" Namespace="calico-apiserver" Pod="calico-apiserver-765bd464f6-g2vrf" WorkloadEndpoint="ci--3975--1--1--4--69569a1933.novalocal-k8s-calico--apiserver--765bd464f6--g2vrf-"
Jul 2 00:36:56.408921 containerd[1444]: 2024-07-02 00:36:56.171 [INFO][5176] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a328b836488c599616eb056f87c2326077b350370002125f9bbbaf573bef4c72" Namespace="calico-apiserver" Pod="calico-apiserver-765bd464f6-g2vrf" WorkloadEndpoint="ci--3975--1--1--4--69569a1933.novalocal-k8s-calico--apiserver--765bd464f6--g2vrf-eth0"
Jul 2 00:36:56.408921 containerd[1444]: 2024-07-02 00:36:56.221 [INFO][5188] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a328b836488c599616eb056f87c2326077b350370002125f9bbbaf573bef4c72" HandleID="k8s-pod-network.a328b836488c599616eb056f87c2326077b350370002125f9bbbaf573bef4c72" Workload="ci--3975--1--1--4--69569a1933.novalocal-k8s-calico--apiserver--765bd464f6--g2vrf-eth0"
Jul 2 00:36:56.408921 containerd[1444]: 2024-07-02 00:36:56.259 [INFO][5188] ipam_plugin.go 264: Auto assigning IP ContainerID="a328b836488c599616eb056f87c2326077b350370002125f9bbbaf573bef4c72" HandleID="k8s-pod-network.a328b836488c599616eb056f87c2326077b350370002125f9bbbaf573bef4c72" Workload="ci--3975--1--1--4--69569a1933.novalocal-k8s-calico--apiserver--765bd464f6--g2vrf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290a20), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3975-1-1-4-69569a1933.novalocal", "pod":"calico-apiserver-765bd464f6-g2vrf", "timestamp":"2024-07-02 00:36:56.221400573 +0000 UTC"}, Hostname:"ci-3975-1-1-4-69569a1933.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 2 00:36:56.408921 containerd[1444]: 2024-07-02 00:36:56.259 [INFO][5188] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Jul 2 00:36:56.408921 containerd[1444]: 2024-07-02 00:36:56.259 [INFO][5188] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Jul 2 00:36:56.408921 containerd[1444]: 2024-07-02 00:36:56.259 [INFO][5188] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975-1-1-4-69569a1933.novalocal'
Jul 2 00:36:56.408921 containerd[1444]: 2024-07-02 00:36:56.262 [INFO][5188] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a328b836488c599616eb056f87c2326077b350370002125f9bbbaf573bef4c72" host="ci-3975-1-1-4-69569a1933.novalocal"
Jul 2 00:36:56.408921 containerd[1444]: 2024-07-02 00:36:56.283 [INFO][5188] ipam.go 372: Looking up existing affinities for host host="ci-3975-1-1-4-69569a1933.novalocal"
Jul 2 00:36:56.408921 containerd[1444]: 2024-07-02 00:36:56.335 [INFO][5188] ipam.go 489: Trying affinity for 192.168.36.192/26 host="ci-3975-1-1-4-69569a1933.novalocal"
Jul 2 00:36:56.408921 containerd[1444]: 2024-07-02 00:36:56.340 [INFO][5188] ipam.go 155: Attempting to load block cidr=192.168.36.192/26 host="ci-3975-1-1-4-69569a1933.novalocal"
Jul 2 00:36:56.408921 containerd[1444]: 2024-07-02 00:36:56.344 [INFO][5188] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.36.192/26 host="ci-3975-1-1-4-69569a1933.novalocal"
Jul 2 00:36:56.408921 containerd[1444]: 2024-07-02 00:36:56.344 [INFO][5188] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.36.192/26 handle="k8s-pod-network.a328b836488c599616eb056f87c2326077b350370002125f9bbbaf573bef4c72" host="ci-3975-1-1-4-69569a1933.novalocal"
Jul 2 00:36:56.408921 containerd[1444]: 2024-07-02 00:36:56.348 [INFO][5188] ipam.go 1685: Creating new handle: k8s-pod-network.a328b836488c599616eb056f87c2326077b350370002125f9bbbaf573bef4c72
Jul 2 00:36:56.408921 containerd[1444]: 2024-07-02 00:36:56.354 [INFO][5188] ipam.go 1203: Writing block in order to claim IPs block=192.168.36.192/26 handle="k8s-pod-network.a328b836488c599616eb056f87c2326077b350370002125f9bbbaf573bef4c72" host="ci-3975-1-1-4-69569a1933.novalocal"
Jul 2 00:36:56.408921 containerd[1444]: 2024-07-02 00:36:56.365 [INFO][5188] ipam.go 1216: Successfully claimed IPs: [192.168.36.197/26] block=192.168.36.192/26 handle="k8s-pod-network.a328b836488c599616eb056f87c2326077b350370002125f9bbbaf573bef4c72" host="ci-3975-1-1-4-69569a1933.novalocal"
Jul 2 00:36:56.408921 containerd[1444]: 2024-07-02 00:36:56.365 [INFO][5188] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.36.197/26] handle="k8s-pod-network.a328b836488c599616eb056f87c2326077b350370002125f9bbbaf573bef4c72" host="ci-3975-1-1-4-69569a1933.novalocal"
Jul 2 00:36:56.408921 containerd[1444]: 2024-07-02 00:36:56.365 [INFO][5188] ipam_plugin.go 373: Released host-wide IPAM lock.
Jul 2 00:36:56.408921 containerd[1444]: 2024-07-02 00:36:56.365 [INFO][5188] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.36.197/26] IPv6=[] ContainerID="a328b836488c599616eb056f87c2326077b350370002125f9bbbaf573bef4c72" HandleID="k8s-pod-network.a328b836488c599616eb056f87c2326077b350370002125f9bbbaf573bef4c72" Workload="ci--3975--1--1--4--69569a1933.novalocal-k8s-calico--apiserver--765bd464f6--g2vrf-eth0"
Jul 2 00:36:56.413306 containerd[1444]: 2024-07-02 00:36:56.371 [INFO][5176] k8s.go 386: Populated endpoint ContainerID="a328b836488c599616eb056f87c2326077b350370002125f9bbbaf573bef4c72" Namespace="calico-apiserver" Pod="calico-apiserver-765bd464f6-g2vrf" WorkloadEndpoint="ci--3975--1--1--4--69569a1933.novalocal-k8s-calico--apiserver--765bd464f6--g2vrf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--1--1--4--69569a1933.novalocal-k8s-calico--apiserver--765bd464f6--g2vrf-eth0", GenerateName:"calico-apiserver-765bd464f6-", Namespace:"calico-apiserver", SelfLink:"", UID:"f8124974-cd7d-47e6-8ffd-d782f638c86d", ResourceVersion:"1177", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 36, 55, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"765bd464f6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-1-1-4-69569a1933.novalocal", ContainerID:"", Pod:"calico-apiserver-765bd464f6-g2vrf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.36.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif9636d22a99", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jul 2 00:36:56.413306 containerd[1444]: 2024-07-02 00:36:56.373 [INFO][5176] k8s.go 387: Calico CNI using IPs: [192.168.36.197/32] ContainerID="a328b836488c599616eb056f87c2326077b350370002125f9bbbaf573bef4c72" Namespace="calico-apiserver" Pod="calico-apiserver-765bd464f6-g2vrf" WorkloadEndpoint="ci--3975--1--1--4--69569a1933.novalocal-k8s-calico--apiserver--765bd464f6--g2vrf-eth0"
Jul 2 00:36:56.413306 containerd[1444]: 2024-07-02 00:36:56.373 [INFO][5176] dataplane_linux.go 68: Setting the host side veth name to calif9636d22a99 ContainerID="a328b836488c599616eb056f87c2326077b350370002125f9bbbaf573bef4c72" Namespace="calico-apiserver" Pod="calico-apiserver-765bd464f6-g2vrf" WorkloadEndpoint="ci--3975--1--1--4--69569a1933.novalocal-k8s-calico--apiserver--765bd464f6--g2vrf-eth0"
Jul 2 00:36:56.413306 containerd[1444]: 2024-07-02 00:36:56.381 [INFO][5176] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="a328b836488c599616eb056f87c2326077b350370002125f9bbbaf573bef4c72" Namespace="calico-apiserver" Pod="calico-apiserver-765bd464f6-g2vrf" WorkloadEndpoint="ci--3975--1--1--4--69569a1933.novalocal-k8s-calico--apiserver--765bd464f6--g2vrf-eth0"
Jul 2 00:36:56.413306 containerd[1444]: 2024-07-02 00:36:56.382 [INFO][5176] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a328b836488c599616eb056f87c2326077b350370002125f9bbbaf573bef4c72" Namespace="calico-apiserver" Pod="calico-apiserver-765bd464f6-g2vrf" WorkloadEndpoint="ci--3975--1--1--4--69569a1933.novalocal-k8s-calico--apiserver--765bd464f6--g2vrf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--1--1--4--69569a1933.novalocal-k8s-calico--apiserver--765bd464f6--g2vrf-eth0", GenerateName:"calico-apiserver-765bd464f6-", Namespace:"calico-apiserver", SelfLink:"", UID:"f8124974-cd7d-47e6-8ffd-d782f638c86d", ResourceVersion:"1177", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 36, 55, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"765bd464f6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-1-1-4-69569a1933.novalocal", ContainerID:"a328b836488c599616eb056f87c2326077b350370002125f9bbbaf573bef4c72", Pod:"calico-apiserver-765bd464f6-g2vrf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.36.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif9636d22a99", MAC:"7e:b8:66:a9:96:2a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jul 2 00:36:56.413306 containerd[1444]: 2024-07-02 00:36:56.399 [INFO][5176] k8s.go 500: Wrote updated endpoint to datastore ContainerID="a328b836488c599616eb056f87c2326077b350370002125f9bbbaf573bef4c72" Namespace="calico-apiserver" Pod="calico-apiserver-765bd464f6-g2vrf" WorkloadEndpoint="ci--3975--1--1--4--69569a1933.novalocal-k8s-calico--apiserver--765bd464f6--g2vrf-eth0"
Jul 2 00:36:56.490808 containerd[1444]: time="2024-07-02T00:36:56.490669999Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:36:56.490808 containerd[1444]: time="2024-07-02T00:36:56.490753267Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:36:56.492242 containerd[1444]: time="2024-07-02T00:36:56.490794836Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:36:56.492242 containerd[1444]: time="2024-07-02T00:36:56.492095133Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:36:56.536242 systemd[1]: run-containerd-runc-k8s.io-a328b836488c599616eb056f87c2326077b350370002125f9bbbaf573bef4c72-runc.sEKowW.mount: Deactivated successfully.
Jul 2 00:36:56.549398 systemd[1]: Started cri-containerd-a328b836488c599616eb056f87c2326077b350370002125f9bbbaf573bef4c72.scope - libcontainer container a328b836488c599616eb056f87c2326077b350370002125f9bbbaf573bef4c72.
Jul 2 00:36:56.634684 containerd[1444]: time="2024-07-02T00:36:56.634472011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-765bd464f6-g2vrf,Uid:f8124974-cd7d-47e6-8ffd-d782f638c86d,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a328b836488c599616eb056f87c2326077b350370002125f9bbbaf573bef4c72\""
Jul 2 00:36:56.638727 containerd[1444]: time="2024-07-02T00:36:56.638207752Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\""
Jul 2 00:36:57.217444 sshd[5168]: pam_unix(sshd:session): session closed for user core
Jul 2 00:36:57.225893 systemd[1]: sshd@20-172.24.4.39:22-172.24.4.1:52764.service: Deactivated successfully.
Jul 2 00:36:57.235868 systemd[1]: session-23.scope: Deactivated successfully.
Jul 2 00:36:57.238641 systemd-logind[1432]: Session 23 logged out. Waiting for processes to exit.
Jul 2 00:36:57.240747 systemd-logind[1432]: Removed session 23.
Jul 2 00:36:57.793337 systemd-networkd[1365]: calif9636d22a99: Gained IPv6LL
Jul 2 00:37:01.061378 containerd[1444]: time="2024-07-02T00:37:01.061239200Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:37:01.084599 containerd[1444]: time="2024-07-02T00:37:01.083723063Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=40421260"
Jul 2 00:37:01.087810 containerd[1444]: time="2024-07-02T00:37:01.087732363Z" level=info msg="ImageCreate event name:\"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:37:01.108531 containerd[1444]: time="2024-07-02T00:37:01.108464165Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:37:01.109428 containerd[1444]: time="2024-07-02T00:37:01.109303871Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"41869036\" in 4.471043479s"
Jul 2 00:37:01.109428 containerd[1444]: time="2024-07-02T00:37:01.109336032Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\""
Jul 2 00:37:01.135654 containerd[1444]: time="2024-07-02T00:37:01.135450005Z" level=info msg="CreateContainer within sandbox \"a328b836488c599616eb056f87c2326077b350370002125f9bbbaf573bef4c72\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Jul 2 00:37:01.204994 containerd[1444]: time="2024-07-02T00:37:01.204952471Z" level=info msg="CreateContainer within sandbox \"a328b836488c599616eb056f87c2326077b350370002125f9bbbaf573bef4c72\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a48a3210e6eb0ef7e738cd1df4126b9ea233dcd9a9d86501a748c353636106af\""
Jul 2 00:37:01.206991 containerd[1444]: time="2024-07-02T00:37:01.206002446Z" level=info msg="StartContainer for \"a48a3210e6eb0ef7e738cd1df4126b9ea233dcd9a9d86501a748c353636106af\""
Jul 2 00:37:01.271143 systemd[1]: run-containerd-runc-k8s.io-a48a3210e6eb0ef7e738cd1df4126b9ea233dcd9a9d86501a748c353636106af-runc.VNHRZX.mount: Deactivated successfully.
Jul 2 00:37:01.286405 systemd[1]: Started cri-containerd-a48a3210e6eb0ef7e738cd1df4126b9ea233dcd9a9d86501a748c353636106af.scope - libcontainer container a48a3210e6eb0ef7e738cd1df4126b9ea233dcd9a9d86501a748c353636106af.
Jul 2 00:37:01.348629 containerd[1444]: time="2024-07-02T00:37:01.348576872Z" level=info msg="StartContainer for \"a48a3210e6eb0ef7e738cd1df4126b9ea233dcd9a9d86501a748c353636106af\" returns successfully"
Jul 2 00:37:01.439671 kubelet[2632]: I0702 00:37:01.439580 2632 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-765bd464f6-g2vrf" podStartSLOduration=1.967577072 podStartE2EDuration="6.439518566s" podCreationTimestamp="2024-07-02 00:36:55 +0000 UTC" firstStartedPulling="2024-07-02 00:36:56.637688146 +0000 UTC m=+133.520932103" lastFinishedPulling="2024-07-02 00:37:01.10962964 +0000 UTC m=+137.992873597" observedRunningTime="2024-07-02 00:37:01.439257169 +0000 UTC m=+138.322501156" watchObservedRunningTime="2024-07-02 00:37:01.439518566 +0000 UTC m=+138.322762543"
Jul 2 00:37:02.232431 systemd[1]: Started sshd@21-172.24.4.39:22-172.24.4.1:52778.service - OpenSSH per-connection server daemon (172.24.4.1:52778).
Jul 2 00:37:03.929028 sshd[5318]: Accepted publickey for core from 172.24.4.1 port 52778 ssh2: RSA SHA256:PCQlpQPF3MQUBJB7FGXO+NVNbcy5KrkTt8QQEvm9Ado
Jul 2 00:37:03.933769 sshd[5318]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:37:03.948836 systemd-logind[1432]: New session 24 of user core.
Jul 2 00:37:03.953979 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 2 00:37:05.598342 sshd[5318]: pam_unix(sshd:session): session closed for user core
Jul 2 00:37:05.604933 systemd[1]: sshd@21-172.24.4.39:22-172.24.4.1:52778.service: Deactivated successfully.
Jul 2 00:37:05.609384 systemd[1]: session-24.scope: Deactivated successfully.
Jul 2 00:37:05.612871 systemd-logind[1432]: Session 24 logged out. Waiting for processes to exit.
Jul 2 00:37:05.615984 systemd-logind[1432]: Removed session 24.
Jul 2 00:37:06.075765 systemd[1]: run-containerd-runc-k8s.io-57c9b1c0250aa099e687d112de8a63ad8c3dfffc3aeab5b6144e19df943864d7-runc.zQM8UL.mount: Deactivated successfully.
Jul 2 00:37:10.617601 systemd[1]: Started sshd@22-172.24.4.39:22-172.24.4.1:40582.service - OpenSSH per-connection server daemon (172.24.4.1:40582).
Jul 2 00:37:11.990121 sshd[5393]: Accepted publickey for core from 172.24.4.1 port 40582 ssh2: RSA SHA256:PCQlpQPF3MQUBJB7FGXO+NVNbcy5KrkTt8QQEvm9Ado
Jul 2 00:37:11.992388 sshd[5393]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:37:12.002100 systemd-logind[1432]: New session 25 of user core.
Jul 2 00:37:12.010361 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 2 00:37:13.140742 sshd[5393]: pam_unix(sshd:session): session closed for user core
Jul 2 00:37:13.148273 systemd[1]: sshd@22-172.24.4.39:22-172.24.4.1:40582.service: Deactivated successfully.
Jul 2 00:37:13.152847 systemd[1]: session-25.scope: Deactivated successfully.
Jul 2 00:37:13.154920 systemd-logind[1432]: Session 25 logged out. Waiting for processes to exit.
Jul 2 00:37:13.158324 systemd-logind[1432]: Removed session 25.
Jul 2 00:37:18.164700 systemd[1]: Started sshd@23-172.24.4.39:22-172.24.4.1:59544.service - OpenSSH per-connection server daemon (172.24.4.1:59544).
Jul 2 00:37:19.598509 sshd[5413]: Accepted publickey for core from 172.24.4.1 port 59544 ssh2: RSA SHA256:PCQlpQPF3MQUBJB7FGXO+NVNbcy5KrkTt8QQEvm9Ado
Jul 2 00:37:19.602767 sshd[5413]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:37:19.615814 systemd-logind[1432]: New session 26 of user core.
Jul 2 00:37:19.622658 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 2 00:37:20.582092 sshd[5413]: pam_unix(sshd:session): session closed for user core
Jul 2 00:37:20.587494 systemd[1]: sshd@23-172.24.4.39:22-172.24.4.1:59544.service: Deactivated successfully.
Jul 2 00:37:20.590803 systemd[1]: session-26.scope: Deactivated successfully.
Jul 2 00:37:20.592924 systemd-logind[1432]: Session 26 logged out. Waiting for processes to exit.
Jul 2 00:37:20.594272 systemd-logind[1432]: Removed session 26.
Jul 2 00:37:25.611993 systemd[1]: Started sshd@24-172.24.4.39:22-172.24.4.1:36800.service - OpenSSH per-connection server daemon (172.24.4.1:36800).
Jul 2 00:37:26.730178 sshd[5451]: Accepted publickey for core from 172.24.4.1 port 36800 ssh2: RSA SHA256:PCQlpQPF3MQUBJB7FGXO+NVNbcy5KrkTt8QQEvm9Ado
Jul 2 00:37:26.731430 sshd[5451]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:37:26.742114 systemd-logind[1432]: New session 27 of user core.
Jul 2 00:37:26.752482 systemd[1]: Started session-27.scope - Session 27 of User core.
Jul 2 00:37:27.783452 sshd[5451]: pam_unix(sshd:session): session closed for user core
Jul 2 00:37:27.796422 systemd[1]: sshd@24-172.24.4.39:22-172.24.4.1:36800.service: Deactivated successfully.
Jul 2 00:37:27.801713 systemd[1]: session-27.scope: Deactivated successfully.
Jul 2 00:37:27.805241 systemd-logind[1432]: Session 27 logged out. Waiting for processes to exit.
Jul 2 00:37:27.807198 systemd-logind[1432]: Removed session 27.