Jun 25 19:06:06.926785 kernel: Linux version 6.6.35-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Tue Jun 25 17:21:28 -00 2024
Jun 25 19:06:06.926806 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=4483672da8ac4c95f5ee13a489103440a13110ce1f63977ab5a6a33d0c137bf8
Jun 25 19:06:06.926818 kernel: BIOS-provided physical RAM map:
Jun 25 19:06:06.926825 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jun 25 19:06:06.926832 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jun 25 19:06:06.926840 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jun 25 19:06:06.926848 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Jun 25 19:06:06.926856 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Jun 25 19:06:06.926863 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jun 25 19:06:06.926873 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jun 25 19:06:06.926880 kernel: NX (Execute Disable) protection: active
Jun 25 19:06:06.926887 kernel: APIC: Static calls initialized
Jun 25 19:06:06.926895 kernel: SMBIOS 2.8 present.
Jun 25 19:06:06.926903 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Jun 25 19:06:06.926912 kernel: Hypervisor detected: KVM
Jun 25 19:06:06.926921 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jun 25 19:06:06.926929 kernel: kvm-clock: using sched offset of 3895244047 cycles
Jun 25 19:06:06.926938 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jun 25 19:06:06.926946 kernel: tsc: Detected 1996.249 MHz processor
Jun 25 19:06:06.926954 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jun 25 19:06:06.926963 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jun 25 19:06:06.926971 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Jun 25 19:06:06.926979 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jun 25 19:06:06.926988 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jun 25 19:06:06.926997 kernel: ACPI: Early table checksum verification disabled
Jun 25 19:06:06.927005 kernel: ACPI: RSDP 0x00000000000F5930 000014 (v00 BOCHS )
Jun 25 19:06:06.927014 kernel: ACPI: RSDT 0x000000007FFE1848 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 25 19:06:06.927022 kernel: ACPI: FACP 0x000000007FFE172C 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 25 19:06:06.927030 kernel: ACPI: DSDT 0x000000007FFE0040 0016EC (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 25 19:06:06.927038 kernel: ACPI: FACS 0x000000007FFE0000 000040
Jun 25 19:06:06.927046 kernel: ACPI: APIC 0x000000007FFE17A0 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 25 19:06:06.927054 kernel: ACPI: WAET 0x000000007FFE1820 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 25 19:06:06.927063 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe172c-0x7ffe179f]
Jun 25 19:06:06.927073 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe172b]
Jun 25 19:06:06.927081 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Jun 25 19:06:06.927089 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17a0-0x7ffe181f]
Jun 25 19:06:06.927097 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe1820-0x7ffe1847]
Jun 25 19:06:06.927105 kernel: No NUMA configuration found
Jun 25 19:06:06.927113 kernel: Faking a node at [mem 0x0000000000000000-0x000000007ffdcfff]
Jun 25 19:06:06.927121 kernel: NODE_DATA(0) allocated [mem 0x7ffd7000-0x7ffdcfff]
Jun 25 19:06:06.927133 kernel: Zone ranges:
Jun 25 19:06:06.927143 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jun 25 19:06:06.927151 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdcfff]
Jun 25 19:06:06.927160 kernel: Normal empty
Jun 25 19:06:06.927168 kernel: Movable zone start for each node
Jun 25 19:06:06.927177 kernel: Early memory node ranges
Jun 25 19:06:06.927185 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jun 25 19:06:06.927195 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Jun 25 19:06:06.927204 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdcfff]
Jun 25 19:06:06.927212 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jun 25 19:06:06.927221 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jun 25 19:06:06.927229 kernel: On node 0, zone DMA32: 35 pages in unavailable ranges
Jun 25 19:06:06.927238 kernel: ACPI: PM-Timer IO Port: 0x608
Jun 25 19:06:06.927246 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jun 25 19:06:06.927271 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jun 25 19:06:06.927279 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jun 25 19:06:06.929289 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jun 25 19:06:06.929304 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jun 25 19:06:06.929322 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jun 25 19:06:06.929331 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jun 25 19:06:06.929341 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jun 25 19:06:06.929350 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jun 25 19:06:06.929359 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jun 25 19:06:06.929369 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Jun 25 19:06:06.929378 kernel: Booting paravirtualized kernel on KVM
Jun 25 19:06:06.929387 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jun 25 19:06:06.929400 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jun 25 19:06:06.929410 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576
Jun 25 19:06:06.929419 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152
Jun 25 19:06:06.929428 kernel: pcpu-alloc: [0] 0 1
Jun 25 19:06:06.929437 kernel: kvm-guest: PV spinlocks disabled, no host support
Jun 25 19:06:06.929448 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=4483672da8ac4c95f5ee13a489103440a13110ce1f63977ab5a6a33d0c137bf8
Jun 25 19:06:06.929458 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jun 25 19:06:06.929469 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jun 25 19:06:06.929479 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jun 25 19:06:06.929488 kernel: Fallback order for Node 0: 0
Jun 25 19:06:06.929497 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515805
Jun 25 19:06:06.929506 kernel: Policy zone: DMA32
Jun 25 19:06:06.929515 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jun 25 19:06:06.929525 kernel: Memory: 1965068K/2096620K available (12288K kernel code, 2302K rwdata, 22636K rodata, 49384K init, 1964K bss, 131292K reserved, 0K cma-reserved)
Jun 25 19:06:06.929534 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jun 25 19:06:06.929543 kernel: ftrace: allocating 37650 entries in 148 pages
Jun 25 19:06:06.929554 kernel: ftrace: allocated 148 pages with 3 groups
Jun 25 19:06:06.929563 kernel: Dynamic Preempt: voluntary
Jun 25 19:06:06.929572 kernel: rcu: Preemptible hierarchical RCU implementation.
Jun 25 19:06:06.929582 kernel: rcu: RCU event tracing is enabled.
Jun 25 19:06:06.929591 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jun 25 19:06:06.929601 kernel: Trampoline variant of Tasks RCU enabled.
Jun 25 19:06:06.929610 kernel: Rude variant of Tasks RCU enabled.
Jun 25 19:06:06.929619 kernel: Tracing variant of Tasks RCU enabled.
Jun 25 19:06:06.929629 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jun 25 19:06:06.929638 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jun 25 19:06:06.929649 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jun 25 19:06:06.929659 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jun 25 19:06:06.929668 kernel: Console: colour VGA+ 80x25
Jun 25 19:06:06.929677 kernel: printk: console [tty0] enabled
Jun 25 19:06:06.929686 kernel: printk: console [ttyS0] enabled
Jun 25 19:06:06.929695 kernel: ACPI: Core revision 20230628
Jun 25 19:06:06.929704 kernel: APIC: Switch to symmetric I/O mode setup
Jun 25 19:06:06.929714 kernel: x2apic enabled
Jun 25 19:06:06.929723 kernel: APIC: Switched APIC routing to: physical x2apic
Jun 25 19:06:06.929734 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jun 25 19:06:06.929744 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jun 25 19:06:06.929754 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249)
Jun 25 19:06:06.929763 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jun 25 19:06:06.929774 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jun 25 19:06:06.929784 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jun 25 19:06:06.929792 kernel: Spectre V2 : Mitigation: Retpolines
Jun 25 19:06:06.929801 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jun 25 19:06:06.929810 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jun 25 19:06:06.929821 kernel: Speculative Store Bypass: Vulnerable
Jun 25 19:06:06.929829 kernel: x86/fpu: x87 FPU will use FXSAVE
Jun 25 19:06:06.929838 kernel: Freeing SMP alternatives memory: 32K
Jun 25 19:06:06.929846 kernel: pid_max: default: 32768 minimum: 301
Jun 25 19:06:06.929855 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Jun 25 19:06:06.929864 kernel: SELinux: Initializing.
Jun 25 19:06:06.929872 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jun 25 19:06:06.929881 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jun 25 19:06:06.929898 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3)
Jun 25 19:06:06.929907 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Jun 25 19:06:06.929917 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Jun 25 19:06:06.929927 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Jun 25 19:06:06.929937 kernel: Performance Events: AMD PMU driver.
Jun 25 19:06:06.929945 kernel: ... version: 0
Jun 25 19:06:06.929955 kernel: ... bit width: 48
Jun 25 19:06:06.929964 kernel: ... generic registers: 4
Jun 25 19:06:06.929975 kernel: ... value mask: 0000ffffffffffff
Jun 25 19:06:06.929984 kernel: ... max period: 00007fffffffffff
Jun 25 19:06:06.929993 kernel: ... fixed-purpose events: 0
Jun 25 19:06:06.930002 kernel: ... event mask: 000000000000000f
Jun 25 19:06:06.930011 kernel: signal: max sigframe size: 1440
Jun 25 19:06:06.930021 kernel: rcu: Hierarchical SRCU implementation.
Jun 25 19:06:06.930030 kernel: rcu: Max phase no-delay instances is 400.
Jun 25 19:06:06.930039 kernel: smp: Bringing up secondary CPUs ...
Jun 25 19:06:06.930048 kernel: smpboot: x86: Booting SMP configuration:
Jun 25 19:06:06.930057 kernel: .... node #0, CPUs: #1
Jun 25 19:06:06.930068 kernel: smp: Brought up 1 node, 2 CPUs
Jun 25 19:06:06.930077 kernel: smpboot: Max logical packages: 2
Jun 25 19:06:06.930086 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS)
Jun 25 19:06:06.930095 kernel: devtmpfs: initialized
Jun 25 19:06:06.930104 kernel: x86/mm: Memory block size: 128MB
Jun 25 19:06:06.930113 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jun 25 19:06:06.930123 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jun 25 19:06:06.930132 kernel: pinctrl core: initialized pinctrl subsystem
Jun 25 19:06:06.930141 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jun 25 19:06:06.930152 kernel: audit: initializing netlink subsys (disabled)
Jun 25 19:06:06.930161 kernel: audit: type=2000 audit(1719342366.378:1): state=initialized audit_enabled=0 res=1
Jun 25 19:06:06.930170 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jun 25 19:06:06.930180 kernel: thermal_sys: Registered thermal governor 'user_space'
Jun 25 19:06:06.930189 kernel: cpuidle: using governor menu
Jun 25 19:06:06.930198 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jun 25 19:06:06.930207 kernel: dca service started, version 1.12.1
Jun 25 19:06:06.930216 kernel: PCI: Using configuration type 1 for base access
Jun 25 19:06:06.930225 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jun 25 19:06:06.930237 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jun 25 19:06:06.930246 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jun 25 19:06:06.930276 kernel: ACPI: Added _OSI(Module Device)
Jun 25 19:06:06.930285 kernel: ACPI: Added _OSI(Processor Device)
Jun 25 19:06:06.930295 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jun 25 19:06:06.930304 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jun 25 19:06:06.930313 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jun 25 19:06:06.930322 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jun 25 19:06:06.930331 kernel: ACPI: Interpreter enabled
Jun 25 19:06:06.930343 kernel: ACPI: PM: (supports S0 S3 S5)
Jun 25 19:06:06.930352 kernel: ACPI: Using IOAPIC for interrupt routing
Jun 25 19:06:06.930361 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jun 25 19:06:06.930371 kernel: PCI: Using E820 reservations for host bridge windows
Jun 25 19:06:06.930380 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jun 25 19:06:06.930389 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jun 25 19:06:06.930532 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jun 25 19:06:06.930628 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jun 25 19:06:06.930722 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jun 25 19:06:06.930736 kernel: acpiphp: Slot [3] registered
Jun 25 19:06:06.930745 kernel: acpiphp: Slot [4] registered
Jun 25 19:06:06.930755 kernel: acpiphp: Slot [5] registered
Jun 25 19:06:06.930764 kernel: acpiphp: Slot [6] registered
Jun 25 19:06:06.930773 kernel: acpiphp: Slot [7] registered
Jun 25 19:06:06.930782 kernel: acpiphp: Slot [8] registered
Jun 25 19:06:06.930791 kernel: acpiphp: Slot [9] registered
Jun 25 19:06:06.930803 kernel: acpiphp: Slot [10] registered
Jun 25 19:06:06.930812 kernel: acpiphp: Slot [11] registered
Jun 25 19:06:06.930821 kernel: acpiphp: Slot [12] registered
Jun 25 19:06:06.930830 kernel: acpiphp: Slot [13] registered
Jun 25 19:06:06.930839 kernel: acpiphp: Slot [14] registered
Jun 25 19:06:06.930848 kernel: acpiphp: Slot [15] registered
Jun 25 19:06:06.930857 kernel: acpiphp: Slot [16] registered
Jun 25 19:06:06.930866 kernel: acpiphp: Slot [17] registered
Jun 25 19:06:06.930875 kernel: acpiphp: Slot [18] registered
Jun 25 19:06:06.930884 kernel: acpiphp: Slot [19] registered
Jun 25 19:06:06.930895 kernel: acpiphp: Slot [20] registered
Jun 25 19:06:06.930904 kernel: acpiphp: Slot [21] registered
Jun 25 19:06:06.930913 kernel: acpiphp: Slot [22] registered
Jun 25 19:06:06.930922 kernel: acpiphp: Slot [23] registered
Jun 25 19:06:06.930931 kernel: acpiphp: Slot [24] registered
Jun 25 19:06:06.930940 kernel: acpiphp: Slot [25] registered
Jun 25 19:06:06.930949 kernel: acpiphp: Slot [26] registered
Jun 25 19:06:06.930958 kernel: acpiphp: Slot [27] registered
Jun 25 19:06:06.930967 kernel: acpiphp: Slot [28] registered
Jun 25 19:06:06.930978 kernel: acpiphp: Slot [29] registered
Jun 25 19:06:06.930987 kernel: acpiphp: Slot [30] registered
Jun 25 19:06:06.930996 kernel: acpiphp: Slot [31] registered
Jun 25 19:06:06.931005 kernel: PCI host bridge to bus 0000:00
Jun 25 19:06:06.931096 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jun 25 19:06:06.931179 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jun 25 19:06:06.931285 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jun 25 19:06:06.931368 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jun 25 19:06:06.931467 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Jun 25 19:06:06.931546 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jun 25 19:06:06.931650 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jun 25 19:06:06.931748 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jun 25 19:06:06.931845 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Jun 25 19:06:06.931935 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f]
Jun 25 19:06:06.932031 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Jun 25 19:06:06.932120 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Jun 25 19:06:06.932210 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Jun 25 19:06:06.932321 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Jun 25 19:06:06.932419 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jun 25 19:06:06.932509 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Jun 25 19:06:06.932596 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Jun 25 19:06:06.932700 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Jun 25 19:06:06.932791 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Jun 25 19:06:06.932880 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Jun 25 19:06:06.932971 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
Jun 25 19:06:06.933098 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
Jun 25 19:06:06.933194 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jun 25 19:06:06.933323 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jun 25 19:06:06.933417 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf]
Jun 25 19:06:06.933507 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
Jun 25 19:06:06.933596 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Jun 25 19:06:06.933686 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
Jun 25 19:06:06.933782 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Jun 25 19:06:06.933885 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Jun 25 19:06:06.933991 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
Jun 25 19:06:06.934088 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Jun 25 19:06:06.934193 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
Jun 25 19:06:06.935841 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff]
Jun 25 19:06:06.935942 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jun 25 19:06:06.936041 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
Jun 25 19:06:06.936135 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f]
Jun 25 19:06:06.936232 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Jun 25 19:06:06.936246 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jun 25 19:06:06.936272 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jun 25 19:06:06.936282 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jun 25 19:06:06.936291 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jun 25 19:06:06.936301 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jun 25 19:06:06.936310 kernel: iommu: Default domain type: Translated
Jun 25 19:06:06.936319 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jun 25 19:06:06.936328 kernel: PCI: Using ACPI for IRQ routing
Jun 25 19:06:06.936341 kernel: PCI: pci_cache_line_size set to 64 bytes
Jun 25 19:06:06.936350 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jun 25 19:06:06.936359 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Jun 25 19:06:06.936454 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jun 25 19:06:06.936545 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jun 25 19:06:06.936636 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jun 25 19:06:06.936649 kernel: vgaarb: loaded
Jun 25 19:06:06.936659 kernel: clocksource: Switched to clocksource kvm-clock
Jun 25 19:06:06.936669 kernel: VFS: Disk quotas dquot_6.6.0
Jun 25 19:06:06.936681 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jun 25 19:06:06.936691 kernel: pnp: PnP ACPI init
Jun 25 19:06:06.936781 kernel: pnp 00:03: [dma 2]
Jun 25 19:06:06.936796 kernel: pnp: PnP ACPI: found 5 devices
Jun 25 19:06:06.936805 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jun 25 19:06:06.936815 kernel: NET: Registered PF_INET protocol family
Jun 25 19:06:06.936824 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jun 25 19:06:06.936833 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jun 25 19:06:06.936846 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jun 25 19:06:06.936855 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jun 25 19:06:06.936865 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jun 25 19:06:06.936874 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jun 25 19:06:06.936883 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jun 25 19:06:06.936893 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jun 25 19:06:06.936902 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jun 25 19:06:06.936911 kernel: NET: Registered PF_XDP protocol family
Jun 25 19:06:06.936990 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jun 25 19:06:06.937077 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jun 25 19:06:06.937157 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jun 25 19:06:06.937236 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jun 25 19:06:06.937336 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Jun 25 19:06:06.937428 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jun 25 19:06:06.937520 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jun 25 19:06:06.937534 kernel: PCI: CLS 0 bytes, default 64
Jun 25 19:06:06.937547 kernel: Initialise system trusted keyrings
Jun 25 19:06:06.937557 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jun 25 19:06:06.937566 kernel: Key type asymmetric registered
Jun 25 19:06:06.937575 kernel: Asymmetric key parser 'x509' registered
Jun 25 19:06:06.937584 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jun 25 19:06:06.937594 kernel: io scheduler mq-deadline registered
Jun 25 19:06:06.937603 kernel: io scheduler kyber registered
Jun 25 19:06:06.937612 kernel: io scheduler bfq registered
Jun 25 19:06:06.937621 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jun 25 19:06:06.937633 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jun 25 19:06:06.937642 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jun 25 19:06:06.937652 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jun 25 19:06:06.937661 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jun 25 19:06:06.937670 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jun 25 19:06:06.937680 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jun 25 19:06:06.937689 kernel: random: crng init done
Jun 25 19:06:06.937698 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jun 25 19:06:06.937707 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jun 25 19:06:06.937717 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jun 25 19:06:06.937808 kernel: rtc_cmos 00:04: RTC can wake from S4
Jun 25 19:06:06.937822 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jun 25 19:06:06.937902 kernel: rtc_cmos 00:04: registered as rtc0
Jun 25 19:06:06.937984 kernel: rtc_cmos 00:04: setting system clock to 2024-06-25T19:06:06 UTC (1719342366)
Jun 25 19:06:06.938065 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jun 25 19:06:06.938078 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jun 25 19:06:06.938087 kernel: NET: Registered PF_INET6 protocol family
Jun 25 19:06:06.938099 kernel: Segment Routing with IPv6
Jun 25 19:06:06.938109 kernel: In-situ OAM (IOAM) with IPv6
Jun 25 19:06:06.938118 kernel: NET: Registered PF_PACKET protocol family
Jun 25 19:06:06.938127 kernel: Key type dns_resolver registered
Jun 25 19:06:06.938136 kernel: IPI shorthand broadcast: enabled
Jun 25 19:06:06.938145 kernel: sched_clock: Marking stable (967007359, 123669019)->(1094579714, -3903336)
Jun 25 19:06:06.938154 kernel: registered taskstats version 1
Jun 25 19:06:06.938164 kernel: Loading compiled-in X.509 certificates
Jun 25 19:06:06.938173 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.35-flatcar: 60204e9db5f484c670a1c92aec37e9a0c4d3ae90'
Jun 25 19:06:06.938184 kernel: Key type .fscrypt registered
Jun 25 19:06:06.938193 kernel: Key type fscrypt-provisioning registered
Jun 25 19:06:06.938202 kernel: ima: No TPM chip found, activating TPM-bypass!
Jun 25 19:06:06.938211 kernel: ima: Allocated hash algorithm: sha1
Jun 25 19:06:06.938221 kernel: ima: No architecture policies found
Jun 25 19:06:06.938230 kernel: clk: Disabling unused clocks
Jun 25 19:06:06.938239 kernel: Freeing unused kernel image (initmem) memory: 49384K
Jun 25 19:06:06.938586 kernel: Write protecting the kernel read-only data: 36864k
Jun 25 19:06:06.938602 kernel: Freeing unused kernel image (rodata/data gap) memory: 1940K
Jun 25 19:06:06.938615 kernel: Run /init as init process
Jun 25 19:06:06.938624 kernel: with arguments:
Jun 25 19:06:06.938633 kernel: /init
Jun 25 19:06:06.938642 kernel: with environment:
Jun 25 19:06:06.938650 kernel: HOME=/
Jun 25 19:06:06.938659 kernel: TERM=linux
Jun 25 19:06:06.938668 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jun 25 19:06:06.938680 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jun 25 19:06:06.938694 systemd[1]: Detected virtualization kvm.
Jun 25 19:06:06.938704 systemd[1]: Detected architecture x86-64.
Jun 25 19:06:06.938713 systemd[1]: Running in initrd.
Jun 25 19:06:06.938723 systemd[1]: No hostname configured, using default hostname.
Jun 25 19:06:06.938733 systemd[1]: Hostname set to .
Jun 25 19:06:06.938743 systemd[1]: Initializing machine ID from VM UUID.
Jun 25 19:06:06.938753 systemd[1]: Queued start job for default target initrd.target.
Jun 25 19:06:06.938763 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 25 19:06:06.938775 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 25 19:06:06.938785 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jun 25 19:06:06.938796 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jun 25 19:06:06.938805 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jun 25 19:06:06.938816 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jun 25 19:06:06.938827 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jun 25 19:06:06.938839 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jun 25 19:06:06.938849 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 25 19:06:06.938859 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jun 25 19:06:06.938869 systemd[1]: Reached target paths.target - Path Units.
Jun 25 19:06:06.938879 systemd[1]: Reached target slices.target - Slice Units.
Jun 25 19:06:06.938897 systemd[1]: Reached target swap.target - Swaps.
Jun 25 19:06:06.938909 systemd[1]: Reached target timers.target - Timer Units.
Jun 25 19:06:06.938921 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jun 25 19:06:06.938931 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jun 25 19:06:06.938942 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jun 25 19:06:06.938952 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jun 25 19:06:06.938962 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jun 25 19:06:06.938973 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jun 25 19:06:06.938983 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 25 19:06:06.938993 systemd[1]: Reached target sockets.target - Socket Units.
Jun 25 19:06:06.939007 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jun 25 19:06:06.939017 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jun 25 19:06:06.939027 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jun 25 19:06:06.939037 systemd[1]: Starting systemd-fsck-usr.service...
Jun 25 19:06:06.939047 systemd[1]: Starting systemd-journald.service - Journal Service...
Jun 25 19:06:06.939057 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jun 25 19:06:06.939085 systemd-journald[184]: Collecting audit messages is disabled.
Jun 25 19:06:06.939111 systemd-journald[184]: Journal started
Jun 25 19:06:06.939134 systemd-journald[184]: Runtime Journal (/run/log/journal/aac75d8f6b21461ca67c6ead6f7e502a) is 4.9M, max 39.3M, 34.4M free.
Jun 25 19:06:06.943206 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 25 19:06:06.947369 systemd[1]: Started systemd-journald.service - Journal Service.
Jun 25 19:06:06.947815 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jun 25 19:06:06.949377 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 25 19:06:06.950045 systemd[1]: Finished systemd-fsck-usr.service.
Jun 25 19:06:06.958637 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jun 25 19:06:06.963405 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Jun 25 19:06:06.981674 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jun 25 19:06:07.034695 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jun 25 19:06:07.034731 kernel: Bridge firewalling registered
Jun 25 19:06:06.984143 systemd-modules-load[185]: Inserted module 'overlay'
Jun 25 19:06:07.025347 systemd-modules-load[185]: Inserted module 'br_netfilter'
Jun 25 19:06:07.036728 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jun 25 19:06:07.037674 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jun 25 19:06:07.038797 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jun 25 19:06:07.048424 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jun 25 19:06:07.050433 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jun 25 19:06:07.053452 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jun 25 19:06:07.063015 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jun 25 19:06:07.073329 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jun 25 19:06:07.074706 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 25 19:06:07.077008 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 25 19:06:07.085416 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jun 25 19:06:07.097935 dracut-cmdline[223]: dracut-dracut-053
Jun 25 19:06:07.100308 dracut-cmdline[223]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=4483672da8ac4c95f5ee13a489103440a13110ce1f63977ab5a6a33d0c137bf8
Jun 25 19:06:07.107543 systemd-resolved[214]: Positive Trust Anchors:
Jun 25 19:06:07.111791 systemd-resolved[214]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jun 25 19:06:07.111834 systemd-resolved[214]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Jun 25 19:06:07.114785 systemd-resolved[214]: Defaulting to hostname 'linux'.
Jun 25 19:06:07.115793 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jun 25 19:06:07.117533 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jun 25 19:06:07.181354 kernel: SCSI subsystem initialized
Jun 25 19:06:07.194307 kernel: Loading iSCSI transport class v2.0-870.
Jun 25 19:06:07.209518 kernel: iscsi: registered transport (tcp)
Jun 25 19:06:07.237692 kernel: iscsi: registered transport (qla4xxx)
Jun 25 19:06:07.237755 kernel: QLogic iSCSI HBA Driver
Jun 25 19:06:07.293676 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jun 25 19:06:07.299626 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jun 25 19:06:07.361622 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jun 25 19:06:07.361684 kernel: device-mapper: uevent: version 1.0.3
Jun 25 19:06:07.363426 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jun 25 19:06:07.415471 kernel: raid6: sse2x4 gen() 12464 MB/s
Jun 25 19:06:07.432372 kernel: raid6: sse2x2 gen() 11923 MB/s
Jun 25 19:06:07.449547 kernel: raid6: sse2x1 gen() 9758 MB/s
Jun 25 19:06:07.449620 kernel: raid6: using algorithm sse2x4 gen() 12464 MB/s
Jun 25 19:06:07.467709 kernel: raid6: .... xor() 6941 MB/s, rmw enabled
Jun 25 19:06:07.467777 kernel: raid6: using ssse3x2 recovery algorithm
Jun 25 19:06:07.496341 kernel: xor: measuring software checksum speed
Jun 25 19:06:07.498378 kernel: prefetch64-sse : 17362 MB/sec
Jun 25 19:06:07.498439 kernel: generic_sse : 16871 MB/sec
Jun 25 19:06:07.499884 kernel: xor: using function: prefetch64-sse (17362 MB/sec)
Jun 25 19:06:07.707338 kernel: Btrfs loaded, zoned=no, fsverity=no
Jun 25 19:06:07.723873 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jun 25 19:06:07.730607 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jun 25 19:06:07.745956 systemd-udevd[405]: Using default interface naming scheme 'v255'.
Jun 25 19:06:07.750569 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jun 25 19:06:07.761550 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jun 25 19:06:07.780383 dracut-pre-trigger[416]: rd.md=0: removing MD RAID activation
Jun 25 19:06:07.824499 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jun 25 19:06:07.840571 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jun 25 19:06:07.884389 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jun 25 19:06:07.893531 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jun 25 19:06:07.933555 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jun 25 19:06:07.937734 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jun 25 19:06:07.939876 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jun 25 19:06:07.942317 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jun 25 19:06:07.950441 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jun 25 19:06:07.966583 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jun 25 19:06:07.982273 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues
Jun 25 19:06:08.009596 kernel: virtio_blk virtio2: [vda] 41943040 512-byte logical blocks (21.5 GB/20.0 GiB)
Jun 25 19:06:08.009721 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jun 25 19:06:08.009737 kernel: GPT:17805311 != 41943039
Jun 25 19:06:08.009749 kernel: GPT:Alternate GPT header not at the end of the disk.
Jun 25 19:06:08.009762 kernel: GPT:17805311 != 41943039
Jun 25 19:06:08.009780 kernel: GPT: Use GNU Parted to correct GPT errors.
Jun 25 19:06:08.009792 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jun 25 19:06:07.993978 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jun 25 19:06:07.994077 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 25 19:06:07.994963 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jun 25 19:06:07.995578 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jun 25 19:06:07.995627 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jun 25 19:06:07.996170 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jun 25 19:06:08.003564 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 25 19:06:08.044283 kernel: BTRFS: device fsid 329ce27e-ea89-47b5-8f8b-f762c8412eb0 devid 1 transid 31 /dev/vda3 scanned by (udev-worker) (451)
Jun 25 19:06:08.052315 kernel: libata version 3.00 loaded.
Jun 25 19:06:08.055280 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (452)
Jun 25 19:06:08.064278 kernel: ata_piix 0000:00:01.1: version 2.13
Jun 25 19:06:08.068659 kernel: scsi host0: ata_piix
Jun 25 19:06:08.068805 kernel: scsi host1: ata_piix
Jun 25 19:06:08.068926 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14
Jun 25 19:06:08.068940 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15
Jun 25 19:06:08.066390 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jun 25 19:06:08.107238 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jun 25 19:06:08.108199 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jun 25 19:06:08.113976 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jun 25 19:06:08.114637 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jun 25 19:06:08.121593 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jun 25 19:06:08.130372 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jun 25 19:06:08.132969 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jun 25 19:06:08.144020 disk-uuid[501]: Primary Header is updated.
Jun 25 19:06:08.144020 disk-uuid[501]: Secondary Entries is updated.
Jun 25 19:06:08.144020 disk-uuid[501]: Secondary Header is updated.
Jun 25 19:06:08.153287 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jun 25 19:06:08.155865 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 25 19:06:08.161282 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jun 25 19:06:09.175692 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jun 25 19:06:09.178248 disk-uuid[504]: The operation has completed successfully.
Jun 25 19:06:09.256495 systemd[1]: disk-uuid.service: Deactivated successfully.
Jun 25 19:06:09.256638 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jun 25 19:06:09.280388 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jun 25 19:06:09.289720 sh[524]: Success
Jun 25 19:06:09.325332 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3"
Jun 25 19:06:09.382890 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jun 25 19:06:09.393469 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jun 25 19:06:09.394272 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jun 25 19:06:09.413712 kernel: BTRFS info (device dm-0): first mount of filesystem 329ce27e-ea89-47b5-8f8b-f762c8412eb0
Jun 25 19:06:09.413791 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jun 25 19:06:09.418692 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jun 25 19:06:09.420964 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jun 25 19:06:09.421008 kernel: BTRFS info (device dm-0): using free space tree
Jun 25 19:06:09.434244 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jun 25 19:06:09.435204 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jun 25 19:06:09.442383 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jun 25 19:06:09.445600 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jun 25 19:06:09.470780 kernel: BTRFS info (device vda6): first mount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a
Jun 25 19:06:09.470869 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jun 25 19:06:09.470901 kernel: BTRFS info (device vda6): using free space tree
Jun 25 19:06:09.480311 kernel: BTRFS info (device vda6): auto enabling async discard
Jun 25 19:06:09.493794 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jun 25 19:06:09.495365 kernel: BTRFS info (device vda6): last unmount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a
Jun 25 19:06:09.510383 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jun 25 19:06:09.522213 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jun 25 19:06:09.559505 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jun 25 19:06:09.566391 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jun 25 19:06:09.599609 systemd-networkd[706]: lo: Link UP
Jun 25 19:06:09.600204 systemd-networkd[706]: lo: Gained carrier
Jun 25 19:06:09.601917 systemd-networkd[706]: Enumeration completed
Jun 25 19:06:09.602373 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jun 25 19:06:09.602957 systemd-networkd[706]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 25 19:06:09.602961 systemd-networkd[706]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jun 25 19:06:09.603925 systemd-networkd[706]: eth0: Link UP
Jun 25 19:06:09.603928 systemd-networkd[706]: eth0: Gained carrier
Jun 25 19:06:09.603935 systemd-networkd[706]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 25 19:06:09.604147 systemd[1]: Reached target network.target - Network.
Jun 25 19:06:09.619321 systemd-networkd[706]: eth0: DHCPv4 address 172.24.4.61/24, gateway 172.24.4.1 acquired from 172.24.4.1
Jun 25 19:06:09.659606 ignition[647]: Ignition 2.19.0
Jun 25 19:06:09.659622 ignition[647]: Stage: fetch-offline
Jun 25 19:06:09.661736 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jun 25 19:06:09.659670 ignition[647]: no configs at "/usr/lib/ignition/base.d"
Jun 25 19:06:09.659681 ignition[647]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jun 25 19:06:09.659786 ignition[647]: parsed url from cmdline: ""
Jun 25 19:06:09.659790 ignition[647]: no config URL provided
Jun 25 19:06:09.659796 ignition[647]: reading system config file "/usr/lib/ignition/user.ign"
Jun 25 19:06:09.659805 ignition[647]: no config at "/usr/lib/ignition/user.ign"
Jun 25 19:06:09.659810 ignition[647]: failed to fetch config: resource requires networking
Jun 25 19:06:09.660123 ignition[647]: Ignition finished successfully
Jun 25 19:06:09.668485 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jun 25 19:06:09.680884 ignition[716]: Ignition 2.19.0
Jun 25 19:06:09.680898 ignition[716]: Stage: fetch
Jun 25 19:06:09.681105 ignition[716]: no configs at "/usr/lib/ignition/base.d"
Jun 25 19:06:09.681118 ignition[716]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jun 25 19:06:09.681206 ignition[716]: parsed url from cmdline: ""
Jun 25 19:06:09.681210 ignition[716]: no config URL provided
Jun 25 19:06:09.681216 ignition[716]: reading system config file "/usr/lib/ignition/user.ign"
Jun 25 19:06:09.681226 ignition[716]: no config at "/usr/lib/ignition/user.ign"
Jun 25 19:06:09.681337 ignition[716]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Jun 25 19:06:09.681489 ignition[716]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Jun 25 19:06:09.681526 ignition[716]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Jun 25 19:06:10.030054 ignition[716]: GET result: OK
Jun 25 19:06:10.030227 ignition[716]: parsing config with SHA512: abccd020d9237b935e751b984bf06d82781f25fad9978ff075a47d9e23d3f3533d84994c48e1546dd8e6ec416eda9c5cbc9811ebe02b48998b6ef7aecedebf32
Jun 25 19:06:10.040050 unknown[716]: fetched base config from "system"
Jun 25 19:06:10.040078 unknown[716]: fetched base config from "system"
Jun 25 19:06:10.040996 ignition[716]: fetch: fetch complete
Jun 25 19:06:10.040093 unknown[716]: fetched user config from "openstack"
Jun 25 19:06:10.041008 ignition[716]: fetch: fetch passed
Jun 25 19:06:10.044708 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jun 25 19:06:10.041095 ignition[716]: Ignition finished successfully
Jun 25 19:06:10.054692 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jun 25 19:06:10.088155 ignition[723]: Ignition 2.19.0
Jun 25 19:06:10.088183 ignition[723]: Stage: kargs
Jun 25 19:06:10.088653 ignition[723]: no configs at "/usr/lib/ignition/base.d"
Jun 25 19:06:10.088682 ignition[723]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jun 25 19:06:10.093367 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jun 25 19:06:10.090971 ignition[723]: kargs: kargs passed
Jun 25 19:06:10.091070 ignition[723]: Ignition finished successfully
Jun 25 19:06:10.103693 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jun 25 19:06:10.136427 ignition[730]: Ignition 2.19.0
Jun 25 19:06:10.136455 ignition[730]: Stage: disks
Jun 25 19:06:10.136912 ignition[730]: no configs at "/usr/lib/ignition/base.d"
Jun 25 19:06:10.136955 ignition[730]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jun 25 19:06:10.139499 ignition[730]: disks: disks passed
Jun 25 19:06:10.141946 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jun 25 19:06:10.139605 ignition[730]: Ignition finished successfully
Jun 25 19:06:10.145976 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jun 25 19:06:10.148453 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jun 25 19:06:10.151345 systemd[1]: Reached target local-fs.target - Local File Systems.
Jun 25 19:06:10.154536 systemd[1]: Reached target sysinit.target - System Initialization.
Jun 25 19:06:10.157045 systemd[1]: Reached target basic.target - Basic System.
Jun 25 19:06:10.167719 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jun 25 19:06:10.212885 systemd-fsck[739]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jun 25 19:06:10.225019 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jun 25 19:06:10.233422 systemd[1]: Mounting sysroot.mount - /sysroot...
Jun 25 19:06:10.421809 kernel: EXT4-fs (vda9): mounted filesystem ed685e11-963b-427a-9b96-a4691c40e909 r/w with ordered data mode. Quota mode: none.
Jun 25 19:06:10.422340 systemd[1]: Mounted sysroot.mount - /sysroot.
Jun 25 19:06:10.423449 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jun 25 19:06:10.443490 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jun 25 19:06:10.446643 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jun 25 19:06:10.450187 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jun 25 19:06:10.455465 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (747)
Jun 25 19:06:10.457261 kernel: BTRFS info (device vda6): first mount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a
Jun 25 19:06:10.459743 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jun 25 19:06:10.459768 kernel: BTRFS info (device vda6): using free space tree
Jun 25 19:06:10.460711 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Jun 25 19:06:10.464409 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jun 25 19:06:10.465668 kernel: BTRFS info (device vda6): auto enabling async discard
Jun 25 19:06:10.464492 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jun 25 19:06:10.480050 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jun 25 19:06:10.482598 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jun 25 19:06:10.497634 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jun 25 19:06:10.667103 initrd-setup-root[775]: cut: /sysroot/etc/passwd: No such file or directory
Jun 25 19:06:10.672922 initrd-setup-root[783]: cut: /sysroot/etc/group: No such file or directory
Jun 25 19:06:10.678571 initrd-setup-root[790]: cut: /sysroot/etc/shadow: No such file or directory
Jun 25 19:06:10.682070 initrd-setup-root[797]: cut: /sysroot/etc/gshadow: No such file or directory
Jun 25 19:06:10.857561 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jun 25 19:06:10.875422 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jun 25 19:06:10.881603 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jun 25 19:06:10.896540 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jun 25 19:06:10.900136 kernel: BTRFS info (device vda6): last unmount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a
Jun 25 19:06:10.944530 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jun 25 19:06:10.951793 ignition[864]: INFO : Ignition 2.19.0
Jun 25 19:06:10.951793 ignition[864]: INFO : Stage: mount
Jun 25 19:06:10.953099 ignition[864]: INFO : no configs at "/usr/lib/ignition/base.d"
Jun 25 19:06:10.953099 ignition[864]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jun 25 19:06:10.953099 ignition[864]: INFO : mount: mount passed
Jun 25 19:06:10.953099 ignition[864]: INFO : Ignition finished successfully
Jun 25 19:06:10.953987 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jun 25 19:06:11.195871 systemd-networkd[706]: eth0: Gained IPv6LL
Jun 25 19:06:17.797697 coreos-metadata[749]: Jun 25 19:06:17.797 WARN failed to locate config-drive, using the metadata service API instead
Jun 25 19:06:17.836916 coreos-metadata[749]: Jun 25 19:06:17.836 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jun 25 19:06:17.852705 coreos-metadata[749]: Jun 25 19:06:17.852 INFO Fetch successful
Jun 25 19:06:17.854165 coreos-metadata[749]: Jun 25 19:06:17.853 INFO wrote hostname ci-4012-0-0-8-d63f105dc7.novalocal to /sysroot/etc/hostname
Jun 25 19:06:17.857445 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Jun 25 19:06:17.857753 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Jun 25 19:06:17.870523 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jun 25 19:06:17.911669 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jun 25 19:06:17.928351 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (882)
Jun 25 19:06:17.935981 kernel: BTRFS info (device vda6): first mount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a
Jun 25 19:06:17.936070 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jun 25 19:06:17.939379 kernel: BTRFS info (device vda6): using free space tree
Jun 25 19:06:17.949393 kernel: BTRFS info (device vda6): auto enabling async discard
Jun 25 19:06:17.955517 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jun 25 19:06:18.004187 ignition[900]: INFO : Ignition 2.19.0
Jun 25 19:06:18.004187 ignition[900]: INFO : Stage: files
Jun 25 19:06:18.008080 ignition[900]: INFO : no configs at "/usr/lib/ignition/base.d"
Jun 25 19:06:18.008080 ignition[900]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jun 25 19:06:18.012838 ignition[900]: DEBUG : files: compiled without relabeling support, skipping
Jun 25 19:06:18.012838 ignition[900]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jun 25 19:06:18.012838 ignition[900]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jun 25 19:06:18.020055 ignition[900]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jun 25 19:06:18.020055 ignition[900]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jun 25 19:06:18.020055 ignition[900]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jun 25 19:06:18.019942 unknown[900]: wrote ssh authorized keys file for user: core
Jun 25 19:06:18.029103 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jun 25 19:06:18.029103 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jun 25 19:06:18.138174 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jun 25 19:06:18.450276 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jun 25 19:06:18.450276 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jun 25 19:06:18.450276 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jun 25 19:06:18.450276 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jun 25 19:06:18.450276 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jun 25 19:06:18.450276 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jun 25 19:06:18.450276 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jun 25 19:06:18.450276 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jun 25 19:06:18.450276 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jun 25 19:06:18.450276 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jun 25 19:06:18.450276 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jun 25 19:06:18.459904 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Jun 25 19:06:18.459904 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Jun 25 19:06:18.459904 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Jun 25 19:06:18.459904 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-x86-64.raw: attempt #1
Jun 25 19:06:18.979522 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jun 25 19:06:20.565977 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Jun 25 19:06:20.565977 ignition[900]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jun 25 19:06:20.569625 ignition[900]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jun 25 19:06:20.569625 ignition[900]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jun 25 19:06:20.569625 ignition[900]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jun 25 19:06:20.569625 ignition[900]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jun 25 19:06:20.579713 ignition[900]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jun 25 19:06:20.579713 ignition[900]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jun 25 19:06:20.579713 ignition[900]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jun 25 19:06:20.579713 ignition[900]: INFO : files: files passed
Jun 25 19:06:20.579713 ignition[900]: INFO : Ignition finished successfully
Jun 25 19:06:20.572467 systemd[1]: Finished ignition-files.service - Ignition (files).
Jun 25 19:06:20.589664 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jun 25 19:06:20.594419 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jun 25 19:06:20.604783 systemd[1]: ignition-quench.service: Deactivated successfully.
Jun 25 19:06:20.605542 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jun 25 19:06:20.614468 initrd-setup-root-after-ignition[929]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jun 25 19:06:20.615611 initrd-setup-root-after-ignition[929]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jun 25 19:06:20.617497 initrd-setup-root-after-ignition[933]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jun 25 19:06:20.620468 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jun 25 19:06:20.622821 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jun 25 19:06:20.629575 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jun 25 19:06:20.656452 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jun 25 19:06:20.656665 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jun 25 19:06:20.658835 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jun 25 19:06:20.660935 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jun 25 19:06:20.662824 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jun 25 19:06:20.669516 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jun 25 19:06:20.688360 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jun 25 19:06:20.697520 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jun 25 19:06:20.712416 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jun 25 19:06:20.713696 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jun 25 19:06:20.714324 systemd[1]: Stopped target timers.target - Timer Units.
Jun 25 19:06:20.714930 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jun 25 19:06:20.715051 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jun 25 19:06:20.717830 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jun 25 19:06:20.718853 systemd[1]: Stopped target basic.target - Basic System.
Jun 25 19:06:20.720633 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jun 25 19:06:20.722711 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jun 25 19:06:20.724867 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jun 25 19:06:20.726548 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jun 25 19:06:20.728576 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jun 25 19:06:20.730639 systemd[1]: Stopped target sysinit.target - System Initialization.
Jun 25 19:06:20.732696 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jun 25 19:06:20.734674 systemd[1]: Stopped target swap.target - Swaps.
Jun 25 19:06:20.736699 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jun 25 19:06:20.736811 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jun 25 19:06:20.739467 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jun 25 19:06:20.740518 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 25 19:06:20.742109 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jun 25 19:06:20.743324 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 25 19:06:20.744016 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jun 25 19:06:20.744181 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jun 25 19:06:20.747079 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jun 25 19:06:20.747203 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jun 25 19:06:20.748192 systemd[1]: ignition-files.service: Deactivated successfully.
Jun 25 19:06:20.748325 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jun 25 19:06:20.756787 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jun 25 19:06:20.760506 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jun 25 19:06:20.761052 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jun 25 19:06:20.761227 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jun 25 19:06:20.762944 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jun 25 19:06:20.763099 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jun 25 19:06:20.774322 ignition[953]: INFO : Ignition 2.19.0
Jun 25 19:06:20.774322 ignition[953]: INFO : Stage: umount
Jun 25 19:06:20.776670 ignition[953]: INFO : no configs at "/usr/lib/ignition/base.d"
Jun 25 19:06:20.776670 ignition[953]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jun 25 19:06:20.778815 ignition[953]: INFO : umount: umount passed
Jun 25 19:06:20.778815 ignition[953]: INFO : Ignition finished successfully
Jun 25 19:06:20.777227 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jun 25 19:06:20.777349 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jun 25 19:06:20.780914 systemd[1]: ignition-mount.service: Deactivated successfully.
Jun 25 19:06:20.783290 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jun 25 19:06:20.784343 systemd[1]: ignition-disks.service: Deactivated successfully.
Jun 25 19:06:20.784428 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jun 25 19:06:20.786196 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jun 25 19:06:20.786241 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jun 25 19:06:20.787337 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jun 25 19:06:20.787395 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jun 25 19:06:20.788633 systemd[1]: Stopped target network.target - Network.
Jun 25 19:06:20.789748 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jun 25 19:06:20.789805 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jun 25 19:06:20.791404 systemd[1]: Stopped target paths.target - Path Units.
Jun 25 19:06:20.792097 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jun 25 19:06:20.797293 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 25 19:06:20.797802 systemd[1]: Stopped target slices.target - Slice Units.
Jun 25 19:06:20.798226 systemd[1]: Stopped target sockets.target - Socket Units.
Jun 25 19:06:20.798735 systemd[1]: iscsid.socket: Deactivated successfully.
Jun 25 19:06:20.798779 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jun 25 19:06:20.800542 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jun 25 19:06:20.800593 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jun 25 19:06:20.801370 systemd[1]: ignition-setup.service: Deactivated successfully.
Jun 25 19:06:20.801419 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jun 25 19:06:20.802324 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jun 25 19:06:20.802363 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jun 25 19:06:20.803412 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jun 25 19:06:20.805003 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jun 25 19:06:20.807083 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jun 25 19:06:20.808300 systemd-networkd[706]: eth0: DHCPv6 lease lost
Jun 25 19:06:20.809878 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jun 25 19:06:20.809980 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jun 25 19:06:20.811879 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jun 25 19:06:20.811911 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jun 25 19:06:20.820380 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jun 25 19:06:20.821180 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jun 25 19:06:20.821233 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jun 25 19:06:20.821901 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jun 25 19:06:20.822702 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jun 25 19:06:20.823445 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jun 25 19:06:20.834339 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jun 25 19:06:20.834419 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jun 25 19:06:20.835436 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jun 25 19:06:20.835480 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jun 25 19:06:20.836442 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jun 25 19:06:20.836483 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jun 25 19:06:20.837827 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jun 25 19:06:20.837965 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jun 25 19:06:20.839042 systemd[1]: network-cleanup.service: Deactivated successfully.
Jun 25 19:06:20.839140 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jun 25 19:06:20.840739 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jun 25 19:06:20.840798 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jun 25 19:06:20.845867 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jun 25 19:06:20.845899 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 25 19:06:20.846832 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jun 25 19:06:20.846873 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jun 25 19:06:20.848409 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jun 25 19:06:20.848451 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jun 25 19:06:20.849431 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jun 25 19:06:20.849471 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 25 19:06:20.858405 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jun 25 19:06:20.859656 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jun 25 19:06:20.859718 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 25 19:06:20.860318 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jun 25 19:06:20.860358 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jun 25 19:06:20.860890 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jun 25 19:06:20.860930 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 25 19:06:20.861525 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jun 25 19:06:20.861566 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jun 25 19:06:20.865102 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jun 25 19:06:20.865191 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jun 25 19:06:21.036509 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jun 25 19:06:21.036745 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jun 25 19:06:21.040654 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jun 25 19:06:21.041987 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jun 25 19:06:21.042107 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jun 25 19:06:21.052601 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jun 25 19:06:21.078430 systemd[1]: Switching root.
Jun 25 19:06:21.123720 systemd-journald[184]: Journal stopped
Jun 25 19:06:22.524822 systemd-journald[184]: Received SIGTERM from PID 1 (systemd).
Jun 25 19:06:22.524890 kernel: SELinux: policy capability network_peer_controls=1
Jun 25 19:06:22.524909 kernel: SELinux: policy capability open_perms=1
Jun 25 19:06:22.524921 kernel: SELinux: policy capability extended_socket_class=1
Jun 25 19:06:22.524932 kernel: SELinux: policy capability always_check_network=0
Jun 25 19:06:22.524943 kernel: SELinux: policy capability cgroup_seclabel=1
Jun 25 19:06:22.524954 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jun 25 19:06:22.524966 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jun 25 19:06:22.524979 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jun 25 19:06:22.524996 kernel: audit: type=1403 audit(1719342381.495:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jun 25 19:06:22.525009 systemd[1]: Successfully loaded SELinux policy in 70.310ms.
Jun 25 19:06:22.525029 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.472ms.
Jun 25 19:06:22.525044 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jun 25 19:06:22.525056 systemd[1]: Detected virtualization kvm.
Jun 25 19:06:22.525068 systemd[1]: Detected architecture x86-64.
Jun 25 19:06:22.525082 systemd[1]: Detected first boot.
Jun 25 19:06:22.525094 systemd[1]: Hostname set to .
Jun 25 19:06:22.525106 systemd[1]: Initializing machine ID from VM UUID.
Jun 25 19:06:22.525118 zram_generator::config[995]: No configuration found.
Jun 25 19:06:22.525130 systemd[1]: Populated /etc with preset unit settings.
Jun 25 19:06:22.525143 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jun 25 19:06:22.525156 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jun 25 19:06:22.525168 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jun 25 19:06:22.525190 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jun 25 19:06:22.525209 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jun 25 19:06:22.525221 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jun 25 19:06:22.525233 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jun 25 19:06:22.525245 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jun 25 19:06:22.526306 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jun 25 19:06:22.526323 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jun 25 19:06:22.526335 systemd[1]: Created slice user.slice - User and Session Slice.
Jun 25 19:06:22.526351 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 25 19:06:22.526363 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 25 19:06:22.526375 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jun 25 19:06:22.526388 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jun 25 19:06:22.526400 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jun 25 19:06:22.526412 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jun 25 19:06:22.526424 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jun 25 19:06:22.526435 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 25 19:06:22.526447 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jun 25 19:06:22.526461 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jun 25 19:06:22.526473 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jun 25 19:06:22.526485 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jun 25 19:06:22.526497 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jun 25 19:06:22.526508 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jun 25 19:06:22.526520 systemd[1]: Reached target slices.target - Slice Units.
Jun 25 19:06:22.526532 systemd[1]: Reached target swap.target - Swaps.
Jun 25 19:06:22.526546 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jun 25 19:06:22.526558 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jun 25 19:06:22.526570 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jun 25 19:06:22.526582 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jun 25 19:06:22.526594 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 25 19:06:22.526605 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jun 25 19:06:22.526617 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jun 25 19:06:22.526629 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jun 25 19:06:22.526641 systemd[1]: Mounting media.mount - External Media Directory...
Jun 25 19:06:22.526655 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 25 19:06:22.526667 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jun 25 19:06:22.526678 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jun 25 19:06:22.526690 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jun 25 19:06:22.526702 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jun 25 19:06:22.526716 systemd[1]: Reached target machines.target - Containers.
Jun 25 19:06:22.526728 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jun 25 19:06:22.526740 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 25 19:06:22.526754 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jun 25 19:06:22.526766 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jun 25 19:06:22.526777 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jun 25 19:06:22.526789 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jun 25 19:06:22.526801 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jun 25 19:06:22.526812 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jun 25 19:06:22.526824 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jun 25 19:06:22.526836 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jun 25 19:06:22.526848 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jun 25 19:06:22.526861 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jun 25 19:06:22.526873 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jun 25 19:06:22.526885 systemd[1]: Stopped systemd-fsck-usr.service.
Jun 25 19:06:22.526896 kernel: loop: module loaded
Jun 25 19:06:22.526907 systemd[1]: Starting systemd-journald.service - Journal Service...
Jun 25 19:06:22.526919 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jun 25 19:06:22.526931 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jun 25 19:06:22.526943 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jun 25 19:06:22.526955 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jun 25 19:06:22.526970 systemd[1]: verity-setup.service: Deactivated successfully.
Jun 25 19:06:22.526982 systemd[1]: Stopped verity-setup.service.
Jun 25 19:06:22.526994 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 25 19:06:22.527007 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jun 25 19:06:22.527018 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jun 25 19:06:22.527030 systemd[1]: Mounted media.mount - External Media Directory.
Jun 25 19:06:22.527041 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jun 25 19:06:22.527052 kernel: fuse: init (API version 7.39)
Jun 25 19:06:22.527066 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jun 25 19:06:22.527097 systemd-journald[1076]: Collecting audit messages is disabled.
Jun 25 19:06:22.527125 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jun 25 19:06:22.527138 systemd-journald[1076]: Journal started
Jun 25 19:06:22.527162 systemd-journald[1076]: Runtime Journal (/run/log/journal/aac75d8f6b21461ca67c6ead6f7e502a) is 4.9M, max 39.3M, 34.4M free.
Jun 25 19:06:22.234170 systemd[1]: Queued start job for default target multi-user.target.
Jun 25 19:06:22.251482 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jun 25 19:06:22.251890 systemd[1]: systemd-journald.service: Deactivated successfully.
Jun 25 19:06:22.533287 systemd[1]: Started systemd-journald.service - Journal Service.
Jun 25 19:06:22.530285 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 25 19:06:22.531184 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jun 25 19:06:22.531500 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jun 25 19:06:22.532209 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jun 25 19:06:22.532432 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jun 25 19:06:22.533114 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jun 25 19:06:22.533226 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jun 25 19:06:22.534088 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jun 25 19:06:22.534200 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jun 25 19:06:22.538501 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jun 25 19:06:22.538722 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jun 25 19:06:22.539587 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jun 25 19:06:22.540341 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jun 25 19:06:22.541082 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jun 25 19:06:22.558423 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jun 25 19:06:22.571547 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jun 25 19:06:22.574354 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jun 25 19:06:22.574975 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jun 25 19:06:22.575006 systemd[1]: Reached target local-fs.target - Local File Systems.
Jun 25 19:06:22.578516 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jun 25 19:06:22.586515 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jun 25 19:06:22.590182 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jun 25 19:06:22.590790 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 25 19:06:22.593133 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jun 25 19:06:22.596628 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jun 25 19:06:22.598327 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jun 25 19:06:22.620966 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jun 25 19:06:22.622388 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jun 25 19:06:22.626280 kernel: ACPI: bus type drm_connector registered
Jun 25 19:06:22.627828 systemd-journald[1076]: Time spent on flushing to /var/log/journal/aac75d8f6b21461ca67c6ead6f7e502a is 59.881ms for 928 entries.
Jun 25 19:06:22.627828 systemd-journald[1076]: System Journal (/var/log/journal/aac75d8f6b21461ca67c6ead6f7e502a) is 8.0M, max 584.8M, 576.8M free.
Jun 25 19:06:22.702853 systemd-journald[1076]: Received client request to flush runtime journal.
Jun 25 19:06:22.702894 kernel: loop0: detected capacity change from 0 to 139760
Jun 25 19:06:22.702909 kernel: block loop0: the capability attribute has been deprecated.
Jun 25 19:06:22.626375 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jun 25 19:06:22.628492 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jun 25 19:06:22.631134 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jun 25 19:06:22.635038 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jun 25 19:06:22.635828 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jun 25 19:06:22.636067 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jun 25 19:06:22.637754 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jun 25 19:06:22.638442 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jun 25 19:06:22.643023 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jun 25 19:06:22.665455 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jun 25 19:06:22.666434 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jun 25 19:06:22.674474 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jun 25 19:06:22.689747 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jun 25 19:06:22.710878 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jun 25 19:06:22.719537 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jun 25 19:06:22.726553 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jun 25 19:06:22.746775 udevadm[1141]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jun 25 19:06:22.753187 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jun 25 19:06:22.754559 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jun 25 19:06:22.754950 systemd-tmpfiles[1126]: ACLs are not supported, ignoring.
Jun 25 19:06:22.754964 systemd-tmpfiles[1126]: ACLs are not supported, ignoring.
Jun 25 19:06:22.766433 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jun 25 19:06:22.782838 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jun 25 19:06:22.800213 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jun 25 19:06:22.818288 kernel: loop1: detected capacity change from 0 to 8
Jun 25 19:06:22.833198 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jun 25 19:06:22.840397 kernel: loop2: detected capacity change from 0 to 209816
Jun 25 19:06:22.843629 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jun 25 19:06:22.863696 systemd-tmpfiles[1150]: ACLs are not supported, ignoring.
Jun 25 19:06:22.863722 systemd-tmpfiles[1150]: ACLs are not supported, ignoring.
Jun 25 19:06:22.868722 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 25 19:06:22.910492 kernel: loop3: detected capacity change from 0 to 80568
Jun 25 19:06:22.965146 kernel: loop4: detected capacity change from 0 to 139760
Jun 25 19:06:23.013757 kernel: loop5: detected capacity change from 0 to 8
Jun 25 19:06:23.017280 kernel: loop6: detected capacity change from 0 to 209816
Jun 25 19:06:23.065286 kernel: loop7: detected capacity change from 0 to 80568
Jun 25 19:06:23.089416 (sd-merge)[1155]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Jun 25 19:06:23.090164 (sd-merge)[1155]: Merged extensions into '/usr'.
Jun 25 19:06:23.110347 systemd[1]: Reloading requested from client PID 1125 ('systemd-sysext') (unit systemd-sysext.service)...
Jun 25 19:06:23.110367 systemd[1]: Reloading...
Jun 25 19:06:23.222654 zram_generator::config[1179]: No configuration found.
Jun 25 19:06:23.435692 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jun 25 19:06:23.499465 systemd[1]: Reloading finished in 388 ms.
Jun 25 19:06:23.531480 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jun 25 19:06:23.537433 systemd[1]: Starting ensure-sysext.service...
Jun 25 19:06:23.539422 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Jun 25 19:06:23.570934 ldconfig[1120]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jun 25 19:06:23.576396 systemd[1]: Reloading requested from client PID 1234 ('systemctl') (unit ensure-sysext.service)...
Jun 25 19:06:23.576412 systemd[1]: Reloading...
Jun 25 19:06:23.591603 systemd-tmpfiles[1235]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jun 25 19:06:23.591938 systemd-tmpfiles[1235]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jun 25 19:06:23.592842 systemd-tmpfiles[1235]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jun 25 19:06:23.593147 systemd-tmpfiles[1235]: ACLs are not supported, ignoring.
Jun 25 19:06:23.593205 systemd-tmpfiles[1235]: ACLs are not supported, ignoring.
Jun 25 19:06:23.596964 systemd-tmpfiles[1235]: Detected autofs mount point /boot during canonicalization of boot.
Jun 25 19:06:23.596978 systemd-tmpfiles[1235]: Skipping /boot
Jun 25 19:06:23.607428 systemd-tmpfiles[1235]: Detected autofs mount point /boot during canonicalization of boot.
Jun 25 19:06:23.607441 systemd-tmpfiles[1235]: Skipping /boot
Jun 25 19:06:23.656306 zram_generator::config[1264]: No configuration found.
Jun 25 19:06:23.797653 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jun 25 19:06:23.855989 systemd[1]: Reloading finished in 279 ms.
Jun 25 19:06:23.869903 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jun 25 19:06:23.871042 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jun 25 19:06:23.877678 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jun 25 19:06:23.890430 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jun 25 19:06:23.895224 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jun 25 19:06:23.902513 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jun 25 19:06:23.908405 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jun 25 19:06:23.910349 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jun 25 19:06:23.912388 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jun 25 19:06:23.920916 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 25 19:06:23.921093 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 25 19:06:23.930570 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jun 25 19:06:23.933239 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jun 25 19:06:23.936511 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jun 25 19:06:23.937935 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 25 19:06:23.938067 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 25 19:06:23.946038 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 25 19:06:23.946205 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 25 19:06:23.946389 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 25 19:06:23.946488 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 25 19:06:23.949193 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 25 19:06:23.950655 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 25 19:06:23.958553 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jun 25 19:06:23.959612 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 25 19:06:23.959854 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 25 19:06:23.963468 systemd[1]: Finished ensure-sysext.service.
Jun 25 19:06:23.976493 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jun 25 19:06:23.991691 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jun 25 19:06:23.992636 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jun 25 19:06:23.999812 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jun 25 19:06:24.000172 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jun 25 19:06:24.011447 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jun 25 19:06:24.012996 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jun 25 19:06:24.015779 systemd-udevd[1325]: Using default interface naming scheme 'v255'.
Jun 25 19:06:24.016740 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jun 25 19:06:24.022004 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jun 25 19:06:24.023321 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jun 25 19:06:24.023679 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jun 25 19:06:24.028176 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jun 25 19:06:24.033809 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jun 25 19:06:24.035132 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jun 25 19:06:24.036570 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jun 25 19:06:24.044380 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jun 25 19:06:24.056941 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jun 25 19:06:24.068361 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jun 25 19:06:24.077869 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jun 25 19:06:24.080182 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jun 25 19:06:24.080623 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jun 25 19:06:24.091264 augenrules[1378]: No rules
Jun 25 19:06:24.094614 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jun 25 19:06:24.127289 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1359)
Jun 25 19:06:24.179403 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jun 25 19:06:24.180493 systemd[1]: Reached target time-set.target - System Time Set.
Jun 25 19:06:24.184887 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jun 25 19:06:24.226947 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1371)
Jun 25 19:06:24.234101 systemd-networkd[1361]: lo: Link UP
Jun 25 19:06:24.234434 systemd-networkd[1361]: lo: Gained carrier
Jun 25 19:06:24.235019 systemd-networkd[1361]: Enumeration completed
Jun 25 19:06:24.236146 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jun 25 19:06:24.244485 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jun 25 19:06:24.247911 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jun 25 19:06:24.254474 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jun 25 19:06:24.258811 systemd-resolved[1324]: Positive Trust Anchors:
Jun 25 19:06:24.258821 systemd-resolved[1324]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jun 25 19:06:24.258861 systemd-resolved[1324]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Jun 25 19:06:24.263854 systemd-resolved[1324]: Using system hostname 'ci-4012-0-0-8-d63f105dc7.novalocal'.
Jun 25 19:06:24.265194 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jun 25 19:06:24.265866 systemd[1]: Reached target network.target - Network.
Jun 25 19:06:24.266798 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jun 25 19:06:24.281246 systemd-networkd[1361]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 25 19:06:24.281493 systemd-networkd[1361]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jun 25 19:06:24.282281 systemd-networkd[1361]: eth0: Link UP
Jun 25 19:06:24.282355 systemd-networkd[1361]: eth0: Gained carrier
Jun 25 19:06:24.282425 systemd-networkd[1361]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 25 19:06:24.285550 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jun 25 19:06:24.294403 systemd-networkd[1361]: eth0: DHCPv4 address 172.24.4.61/24, gateway 172.24.4.1 acquired from 172.24.4.1
Jun 25 19:06:24.295454 systemd-timesyncd[1338]: Network configuration changed, trying to establish connection.
Jun 25 19:06:24.308288 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jun 25 19:06:24.313300 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jun 25 19:06:24.347276 kernel: ACPI: button: Power Button [PWRF]
Jun 25 19:06:24.361316 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jun 25 19:06:24.378287 kernel: mousedev: PS/2 mouse device common for all mice
Jun 25 19:06:24.381550 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 25 19:06:24.402725 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jun 25 19:06:24.402830 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jun 25 19:06:24.406307 kernel: Console: switching to colour dummy device 80x25
Jun 25 19:06:24.407400 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jun 25 19:06:24.407442 kernel: [drm] features: -context_init
Jun 25 19:06:24.410084 kernel: [drm] number of scanouts: 1
Jun 25 19:06:24.410154 kernel: [drm] number of cap sets: 0
Jun 25 19:06:24.412784 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Jun 25 19:06:24.413539 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jun 25 19:06:24.413772 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jun 25 19:06:24.426269 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jun 25 19:06:24.426362 kernel: Console: switching to colour frame buffer device 128x48
Jun 25 19:06:24.427584 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 25 19:06:24.432276 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jun 25 19:06:24.441240 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jun 25 19:06:24.442363 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jun 25 19:06:24.449441 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 25 19:06:24.451580 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jun 25 19:06:24.454233 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jun 25 19:06:24.475322 lvm[1413]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jun 25 19:06:24.503675 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jun 25 19:06:24.504039 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jun 25 19:06:24.510465 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jun 25 19:06:24.515086 lvm[1417]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jun 25 19:06:24.522993 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jun 25 19:06:24.523313 systemd[1]: Reached target sysinit.target - System Initialization.
Jun 25 19:06:24.523573 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jun 25 19:06:24.523737 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jun 25 19:06:24.524058 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jun 25 19:06:24.525182 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jun 25 19:06:24.527500 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jun 25 19:06:24.527711 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jun 25 19:06:24.527754 systemd[1]: Reached target paths.target - Path Units.
Jun 25 19:06:24.527861 systemd[1]: Reached target timers.target - Timer Units.
Jun 25 19:06:24.531288 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jun 25 19:06:24.532975 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jun 25 19:06:24.538182 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jun 25 19:06:24.538873 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jun 25 19:06:24.543095 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jun 25 19:06:24.545997 systemd[1]: Reached target sockets.target - Socket Units.
Jun 25 19:06:24.547986 systemd[1]: Reached target basic.target - Basic System.
Jun 25 19:06:24.550025 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jun 25 19:06:24.550052 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jun 25 19:06:24.559421 systemd[1]: Starting containerd.service - containerd container runtime...
Jun 25 19:06:24.564521 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jun 25 19:06:24.571450 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jun 25 19:06:24.584381 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jun 25 19:06:24.588714 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jun 25 19:06:24.589481 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jun 25 19:06:24.594439 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jun 25 19:06:24.598055 jq[1428]: false
Jun 25 19:06:24.600436 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jun 25 19:06:24.604454 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jun 25 19:06:24.611232 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jun 25 19:06:24.624974 systemd[1]: Starting systemd-logind.service - User Login Management...
Jun 25 19:06:24.627245 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jun 25 19:06:24.631413 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jun 25 19:06:24.633498 dbus-daemon[1425]: [system] SELinux support is enabled
Jun 25 19:06:24.638437 systemd[1]: Starting update-engine.service - Update Engine...
Jun 25 19:06:24.641017 extend-filesystems[1429]: Found loop4
Jun 25 19:06:24.654688 extend-filesystems[1429]: Found loop5
Jun 25 19:06:24.654688 extend-filesystems[1429]: Found loop6
Jun 25 19:06:24.654688 extend-filesystems[1429]: Found loop7
Jun 25 19:06:24.654688 extend-filesystems[1429]: Found vda
Jun 25 19:06:24.654688 extend-filesystems[1429]: Found vda1
Jun 25 19:06:24.654688 extend-filesystems[1429]: Found vda2
Jun 25 19:06:24.654688 extend-filesystems[1429]: Found vda3
Jun 25 19:06:24.654688 extend-filesystems[1429]: Found usr
Jun 25 19:06:24.654688 extend-filesystems[1429]: Found vda4
Jun 25 19:06:24.654688 extend-filesystems[1429]: Found vda6
Jun 25 19:06:24.654688 extend-filesystems[1429]: Found vda7
Jun 25 19:06:24.654688 extend-filesystems[1429]: Found vda9
Jun 25 19:06:24.654688 extend-filesystems[1429]: Checking size of /dev/vda9
Jun 25 19:06:24.748329 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 4635643 blocks
Jun 25 19:06:24.650401 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jun 25 19:06:24.748433 extend-filesystems[1429]: Resized partition /dev/vda9
Jun 25 19:06:24.659785 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jun 25 19:06:24.768747 extend-filesystems[1457]: resize2fs 1.47.0 (5-Feb-2023)
Jun 25 19:06:24.686628 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jun 25 19:06:24.787545 update_engine[1439]: I0625 19:06:24.705325 1439 main.cc:92] Flatcar Update Engine starting
Jun 25 19:06:24.787545 update_engine[1439]: I0625 19:06:24.721697 1439 update_check_scheduler.cc:74] Next update check in 10m7s
Jun 25 19:06:24.686786 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jun 25 19:06:24.694663 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jun 25 19:06:24.797774 jq[1440]: true
Jun 25 19:06:24.694810 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jun 25 19:06:24.711301 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jun 25 19:06:24.801887 jq[1451]: true
Jun 25 19:06:24.711351 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jun 25 19:06:24.739790 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jun 25 19:06:24.739814 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jun 25 19:06:24.743762 systemd[1]: motdgen.service: Deactivated successfully.
Jun 25 19:06:24.743994 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jun 25 19:06:24.760568 (ntainerd)[1458]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jun 25 19:06:24.762596 systemd[1]: Started update-engine.service - Update Engine.
Jun 25 19:06:24.789868 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jun 25 19:06:24.809500 tar[1450]: linux-amd64/helm
Jun 25 19:06:24.815624 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1358)
Jun 25 19:06:24.836361 systemd-logind[1434]: New seat seat0.
Jun 25 19:06:24.845942 systemd-logind[1434]: Watching system buttons on /dev/input/event1 (Power Button)
Jun 25 19:06:24.845961 systemd-logind[1434]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jun 25 19:06:24.855427 systemd[1]: Started systemd-logind.service - User Login Management.
Jun 25 19:06:24.930285 kernel: EXT4-fs (vda9): resized filesystem to 4635643
Jun 25 19:06:24.954264 locksmithd[1462]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jun 25 19:06:24.992245 extend-filesystems[1457]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jun 25 19:06:24.992245 extend-filesystems[1457]: old_desc_blocks = 1, new_desc_blocks = 3
Jun 25 19:06:24.992245 extend-filesystems[1457]: The filesystem on /dev/vda9 is now 4635643 (4k) blocks long.
Jun 25 19:06:25.008933 extend-filesystems[1429]: Resized filesystem in /dev/vda9
Jun 25 19:06:25.016524 bash[1481]: Updated "/home/core/.ssh/authorized_keys"
Jun 25 19:06:24.995777 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jun 25 19:06:24.996108 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jun 25 19:06:25.006440 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jun 25 19:06:25.025969 systemd[1]: Starting sshkeys.service...
Jun 25 19:06:25.035155 sshd_keygen[1449]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jun 25 19:06:25.053157 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jun 25 19:06:25.063977 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jun 25 19:06:25.107316 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jun 25 19:06:25.123180 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jun 25 19:06:25.146587 systemd[1]: issuegen.service: Deactivated successfully.
Jun 25 19:06:25.146764 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jun 25 19:06:25.157870 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jun 25 19:06:25.177719 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jun 25 19:06:25.187866 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jun 25 19:06:25.198796 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jun 25 19:06:25.200595 systemd[1]: Reached target getty.target - Login Prompts.
Jun 25 19:06:25.212539 containerd[1458]: time="2024-06-25T19:06:25.210797977Z" level=info msg="starting containerd" revision=cd7148ac666309abf41fd4a49a8a5895b905e7f3 version=v1.7.18
Jun 25 19:06:25.239983 containerd[1458]: time="2024-06-25T19:06:25.239923372Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jun 25 19:06:25.240151 containerd[1458]: time="2024-06-25T19:06:25.240128286Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jun 25 19:06:25.241858 containerd[1458]: time="2024-06-25T19:06:25.241820730Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.35-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jun 25 19:06:25.241930 containerd[1458]: time="2024-06-25T19:06:25.241910769Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jun 25 19:06:25.242222 containerd[1458]: time="2024-06-25T19:06:25.242198559Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jun 25 19:06:25.242318 containerd[1458]: time="2024-06-25T19:06:25.242301261Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jun 25 19:06:25.242671 containerd[1458]: time="2024-06-25T19:06:25.242650557Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jun 25 19:06:25.242808 containerd[1458]: time="2024-06-25T19:06:25.242785530Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jun 25 19:06:25.242872 containerd[1458]: time="2024-06-25T19:06:25.242856813Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jun 25 19:06:25.243016 containerd[1458]: time="2024-06-25T19:06:25.242998219Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jun 25 19:06:25.243363 containerd[1458]: time="2024-06-25T19:06:25.243344147Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jun 25 19:06:25.243434 containerd[1458]: time="2024-06-25T19:06:25.243418647Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Jun 25 19:06:25.243490 containerd[1458]: time="2024-06-25T19:06:25.243477648Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jun 25 19:06:25.243684 containerd[1458]: time="2024-06-25T19:06:25.243662555Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jun 25 19:06:25.243746 containerd[1458]: time="2024-06-25T19:06:25.243732426Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jun 25 19:06:25.243858 containerd[1458]: time="2024-06-25T19:06:25.243839306Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Jun 25 19:06:25.243926 containerd[1458]: time="2024-06-25T19:06:25.243912133Z" level=info msg="metadata content store policy set" policy=shared
Jun 25 19:06:25.252236 containerd[1458]: time="2024-06-25T19:06:25.252216057Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jun 25 19:06:25.252335 containerd[1458]: time="2024-06-25T19:06:25.252318589Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jun 25 19:06:25.252402 containerd[1458]: time="2024-06-25T19:06:25.252387087Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jun 25 19:06:25.252484 containerd[1458]: time="2024-06-25T19:06:25.252469903Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jun 25 19:06:25.252578 containerd[1458]: time="2024-06-25T19:06:25.252562536Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jun 25 19:06:25.252638 containerd[1458]: time="2024-06-25T19:06:25.252625875Z" level=info msg="NRI interface is disabled by configuration."
Jun 25 19:06:25.252716 containerd[1458]: time="2024-06-25T19:06:25.252699513Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jun 25 19:06:25.252884 containerd[1458]: time="2024-06-25T19:06:25.252864974Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jun 25 19:06:25.252978 containerd[1458]: time="2024-06-25T19:06:25.252962747Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jun 25 19:06:25.253383 containerd[1458]: time="2024-06-25T19:06:25.253026697Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jun 25 19:06:25.253383 containerd[1458]: time="2024-06-25T19:06:25.253050682Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jun 25 19:06:25.253383 containerd[1458]: time="2024-06-25T19:06:25.253070439Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jun 25 19:06:25.253383 containerd[1458]: time="2024-06-25T19:06:25.253091669Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jun 25 19:06:25.253383 containerd[1458]: time="2024-06-25T19:06:25.253109853Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jun 25 19:06:25.253383 containerd[1458]: time="2024-06-25T19:06:25.253125562Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jun 25 19:06:25.253383 containerd[1458]: time="2024-06-25T19:06:25.253143706Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jun 25 19:06:25.253383 containerd[1458]: time="2024-06-25T19:06:25.253161099Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jun 25 19:06:25.253383 containerd[1458]: time="2024-06-25T19:06:25.253177079Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jun 25 19:06:25.253383 containerd[1458]: time="2024-06-25T19:06:25.253192117Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jun 25 19:06:25.253383 containerd[1458]: time="2024-06-25T19:06:25.253334073Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jun 25 19:06:25.253901 containerd[1458]: time="2024-06-25T19:06:25.253881380Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jun 25 19:06:25.253995 containerd[1458]: time="2024-06-25T19:06:25.253978983Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jun 25 19:06:25.254142 containerd[1458]: time="2024-06-25T19:06:25.254051759Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jun 25 19:06:25.254142 containerd[1458]: time="2024-06-25T19:06:25.254086254Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jun 25 19:06:25.255277 containerd[1458]: time="2024-06-25T19:06:25.254328529Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jun 25 19:06:25.255277 containerd[1458]: time="2024-06-25T19:06:25.254353415Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jun 25 19:06:25.255277 containerd[1458]: time="2024-06-25T19:06:25.254368293Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jun 25 19:06:25.255277 containerd[1458]: time="2024-06-25T19:06:25.254381899Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jun 25 19:06:25.255277 containerd[1458]: time="2024-06-25T19:06:25.254396416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jun 25 19:06:25.255277 containerd[1458]: time="2024-06-25T19:06:25.254412085Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jun 25 19:06:25.255277 containerd[1458]: time="2024-06-25T19:06:25.254427835Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jun 25 19:06:25.255277 containerd[1458]: time="2024-06-25T19:06:25.254442422Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jun 25 19:06:25.255277 containerd[1458]: time="2024-06-25T19:06:25.254457611Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jun 25 19:06:25.255277 containerd[1458]: time="2024-06-25T19:06:25.254599226Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jun 25 19:06:25.255277 containerd[1458]: time="2024-06-25T19:06:25.254619725Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jun 25 19:06:25.255277 containerd[1458]: time="2024-06-25T19:06:25.254634122Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jun 25 19:06:25.255277 containerd[1458]: time="2024-06-25T19:06:25.254649100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jun 25 19:06:25.255277 containerd[1458]: time="2024-06-25T19:06:25.254663437Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jun 25 19:06:25.255277 containerd[1458]: time="2024-06-25T19:06:25.254681340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jun 25 19:06:25.255618 containerd[1458]: time="2024-06-25T19:06:25.254702720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jun 25 19:06:25.255618 containerd[1458]: time="2024-06-25T19:06:25.254718560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jun 25 19:06:25.255663 containerd[1458]: time="2024-06-25T19:06:25.255013724Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jun 25 19:06:25.255663 containerd[1458]: time="2024-06-25T19:06:25.255089285Z" level=info msg="Connect containerd service"
Jun 25 19:06:25.255663 containerd[1458]: time="2024-06-25T19:06:25.255116025Z" level=info msg="using legacy CRI server"
Jun 25 19:06:25.255663 containerd[1458]: time="2024-06-25T19:06:25.255123570Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jun 25 19:06:25.255663 containerd[1458]: time="2024-06-25T19:06:25.255213999Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jun 25 19:06:25.256579 containerd[1458]: time="2024-06-25T19:06:25.256552690Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jun 25 19:06:25.256696 containerd[1458]: time="2024-06-25T19:06:25.256678005Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jun 25 19:06:25.256847 containerd[1458]: time="2024-06-25T19:06:25.256826013Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jun 25 19:06:25.256944 containerd[1458]: time="2024-06-25T19:06:25.256928645Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jun 25 19:06:25.257036 containerd[1458]: time="2024-06-25T19:06:25.257018033Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jun 25 19:06:25.257192 containerd[1458]: time="2024-06-25T19:06:25.256785687Z" level=info msg="Start subscribing containerd event"
Jun 25 19:06:25.260869 containerd[1458]: time="2024-06-25T19:06:25.257638456Z" level=info msg="Start recovering state"
Jun 25 19:06:25.260869 containerd[1458]: time="2024-06-25T19:06:25.257704721Z" level=info msg="Start event monitor"
Jun 25 19:06:25.260869 containerd[1458]: time="2024-06-25T19:06:25.257723646Z" level=info msg="Start snapshots syncer"
Jun 25 19:06:25.260869 containerd[1458]: time="2024-06-25T19:06:25.257733765Z" level=info msg="Start cni network conf syncer for default"
Jun 25 19:06:25.260869 containerd[1458]: time="2024-06-25T19:06:25.257742862Z" level=info msg="Start streaming server"
Jun 25 19:06:25.260869 containerd[1458]: time="2024-06-25T19:06:25.257608139Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jun 25 19:06:25.260869 containerd[1458]: time="2024-06-25T19:06:25.257876773Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jun 25 19:06:25.258013 systemd[1]: Started containerd.service - containerd container runtime.
Jun 25 19:06:25.262291 containerd[1458]: time="2024-06-25T19:06:25.261272402Z" level=info msg="containerd successfully booted in 0.053224s"
Jun 25 19:06:25.530364 tar[1450]: linux-amd64/LICENSE
Jun 25 19:06:25.530534 tar[1450]: linux-amd64/README.md
Jun 25 19:06:25.541978 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jun 25 19:06:26.107622 systemd-networkd[1361]: eth0: Gained IPv6LL
Jun 25 19:06:26.109373 systemd-timesyncd[1338]: Network configuration changed, trying to establish connection.
Jun 25 19:06:26.113875 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jun 25 19:06:26.119659 systemd[1]: Reached target network-online.target - Network is Online.
Jun 25 19:06:26.132826 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 25 19:06:26.147657 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jun 25 19:06:26.211247 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jun 25 19:06:27.721570 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 25 19:06:27.738157 (kubelet)[1539]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 25 19:06:28.685423 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jun 25 19:06:28.705246 systemd[1]: Started sshd@0-172.24.4.61:22-172.24.4.1:60648.service - OpenSSH per-connection server daemon (172.24.4.1:60648).
Jun 25 19:06:29.059600 kubelet[1539]: E0625 19:06:29.059445 1539 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 25 19:06:29.065364 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 25 19:06:29.065730 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 25 19:06:29.066727 systemd[1]: kubelet.service: Consumed 1.991s CPU time.
Jun 25 19:06:29.758528 sshd[1546]: Accepted publickey for core from 172.24.4.1 port 60648 ssh2: RSA SHA256:otxWgi1QNrVHlA+DL2lID1btX/FnfujF3xA/xUdUjyI
Jun 25 19:06:29.761456 sshd[1546]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 19:06:29.790040 systemd-logind[1434]: New session 1 of user core.
Jun 25 19:06:29.794489 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jun 25 19:06:29.810054 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jun 25 19:06:29.843912 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jun 25 19:06:29.856969 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jun 25 19:06:29.878803 (systemd)[1552]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jun 25 19:06:30.044915 systemd[1552]: Queued start job for default target default.target.
Jun 25 19:06:30.056283 systemd[1552]: Created slice app.slice - User Application Slice.
Jun 25 19:06:30.056311 systemd[1552]: Reached target paths.target - Paths.
Jun 25 19:06:30.056326 systemd[1552]: Reached target timers.target - Timers.
Jun 25 19:06:30.058617 systemd[1552]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jun 25 19:06:30.093153 systemd[1552]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jun 25 19:06:30.093288 systemd[1552]: Reached target sockets.target - Sockets.
Jun 25 19:06:30.093304 systemd[1552]: Reached target basic.target - Basic System.
Jun 25 19:06:30.093350 systemd[1552]: Reached target default.target - Main User Target.
Jun 25 19:06:30.093378 systemd[1552]: Startup finished in 200ms.
Jun 25 19:06:30.094366 systemd[1]: Started user@500.service - User Manager for UID 500.
Jun 25 19:06:30.102573 systemd[1]: Started session-1.scope - Session 1 of User core.
Jun 25 19:06:30.342677 login[1513]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jun 25 19:06:30.352909 login[1514]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jun 25 19:06:30.356816 systemd-logind[1434]: New session 2 of user core.
Jun 25 19:06:30.366757 systemd[1]: Started session-2.scope - Session 2 of User core.
Jun 25 19:06:30.373926 systemd-logind[1434]: New session 3 of user core.
Jun 25 19:06:30.383662 systemd[1]: Started session-3.scope - Session 3 of User core.
Jun 25 19:06:30.636956 systemd[1]: Started sshd@1-172.24.4.61:22-172.24.4.1:60658.service - OpenSSH per-connection server daemon (172.24.4.1:60658).
Jun 25 19:06:31.643376 coreos-metadata[1424]: Jun 25 19:06:31.643 WARN failed to locate config-drive, using the metadata service API instead
Jun 25 19:06:31.696642 coreos-metadata[1424]: Jun 25 19:06:31.696 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1
Jun 25 19:06:31.907426 coreos-metadata[1424]: Jun 25 19:06:31.907 INFO Fetch successful
Jun 25 19:06:31.907891 coreos-metadata[1424]: Jun 25 19:06:31.907 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jun 25 19:06:31.922228 coreos-metadata[1424]: Jun 25 19:06:31.922 INFO Fetch successful
Jun 25 19:06:31.922228 coreos-metadata[1424]: Jun 25 19:06:31.922 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
Jun 25 19:06:31.937591 coreos-metadata[1424]: Jun 25 19:06:31.937 INFO Fetch successful
Jun 25 19:06:31.937591 coreos-metadata[1424]: Jun 25 19:06:31.937 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
Jun 25 19:06:31.952340 coreos-metadata[1424]: Jun 25 19:06:31.952 INFO Fetch successful
Jun 25 19:06:31.952340 coreos-metadata[1424]: Jun 25 19:06:31.952 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
Jun 25 19:06:31.967985 coreos-metadata[1424]: Jun 25 19:06:31.967 INFO Fetch successful
Jun 25 19:06:31.968117 coreos-metadata[1424]: Jun 25 19:06:31.968 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
Jun 25 19:06:31.982671 coreos-metadata[1424]: Jun 25 19:06:31.982 INFO Fetch successful
Jun 25 19:06:32.035136 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jun 25 19:06:32.036617 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jun 25 19:06:32.204536 coreos-metadata[1499]: Jun 25 19:06:32.204 WARN failed to locate config-drive, using the metadata service API instead
Jun 25 19:06:32.221515 coreos-metadata[1499]: Jun 25 19:06:32.221 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Jun 25 19:06:32.239039 coreos-metadata[1499]: Jun 25 19:06:32.238 INFO Fetch successful
Jun 25 19:06:32.239039 coreos-metadata[1499]: Jun 25 19:06:32.239 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Jun 25 19:06:32.253714 coreos-metadata[1499]: Jun 25 19:06:32.253 INFO Fetch successful
Jun 25 19:06:32.259333 unknown[1499]: wrote ssh authorized keys file for user: core
Jun 25 19:06:32.303435 update-ssh-keys[1594]: Updated "/home/core/.ssh/authorized_keys"
Jun 25 19:06:32.304805 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jun 25 19:06:32.309211 systemd[1]: Finished sshkeys.service.
Jun 25 19:06:32.315779 systemd[1]: Reached target multi-user.target - Multi-User System.
Jun 25 19:06:32.316176 systemd[1]: Startup finished in 1.104s (kernel) + 14.762s (initrd) + 10.890s (userspace) = 26.757s.
Jun 25 19:06:32.612231 sshd[1582]: Accepted publickey for core from 172.24.4.1 port 60658 ssh2: RSA SHA256:otxWgi1QNrVHlA+DL2lID1btX/FnfujF3xA/xUdUjyI
Jun 25 19:06:32.615895 sshd[1582]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 19:06:32.627428 systemd-logind[1434]: New session 4 of user core.
Jun 25 19:06:32.637545 systemd[1]: Started session-4.scope - Session 4 of User core.
Jun 25 19:06:33.296841 sshd[1582]: pam_unix(sshd:session): session closed for user core
Jun 25 19:06:33.307693 systemd[1]: sshd@1-172.24.4.61:22-172.24.4.1:60658.service: Deactivated successfully.
Jun 25 19:06:33.310809 systemd[1]: session-4.scope: Deactivated successfully.
Jun 25 19:06:33.312659 systemd-logind[1434]: Session 4 logged out. Waiting for processes to exit.
Jun 25 19:06:33.326891 systemd[1]: Started sshd@2-172.24.4.61:22-172.24.4.1:60670.service - OpenSSH per-connection server daemon (172.24.4.1:60670).
Jun 25 19:06:33.329953 systemd-logind[1434]: Removed session 4.
Jun 25 19:06:34.599452 sshd[1602]: Accepted publickey for core from 172.24.4.1 port 60670 ssh2: RSA SHA256:otxWgi1QNrVHlA+DL2lID1btX/FnfujF3xA/xUdUjyI
Jun 25 19:06:34.602177 sshd[1602]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 19:06:34.611861 systemd-logind[1434]: New session 5 of user core.
Jun 25 19:06:34.625768 systemd[1]: Started session-5.scope - Session 5 of User core.
Jun 25 19:06:35.283168 sshd[1602]: pam_unix(sshd:session): session closed for user core
Jun 25 19:06:35.295006 systemd[1]: sshd@2-172.24.4.61:22-172.24.4.1:60670.service: Deactivated successfully.
Jun 25 19:06:35.298219 systemd[1]: session-5.scope: Deactivated successfully.
Jun 25 19:06:35.314753 systemd-logind[1434]: Session 5 logged out. Waiting for processes to exit.
Jun 25 19:06:35.333900 systemd[1]: Started sshd@3-172.24.4.61:22-172.24.4.1:57972.service - OpenSSH per-connection server daemon (172.24.4.1:57972).
Jun 25 19:06:35.337829 systemd-logind[1434]: Removed session 5.
Jun 25 19:06:36.585865 sshd[1609]: Accepted publickey for core from 172.24.4.1 port 57972 ssh2: RSA SHA256:otxWgi1QNrVHlA+DL2lID1btX/FnfujF3xA/xUdUjyI
Jun 25 19:06:36.588576 sshd[1609]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 19:06:36.599181 systemd-logind[1434]: New session 6 of user core.
Jun 25 19:06:36.608539 systemd[1]: Started session-6.scope - Session 6 of User core.
Jun 25 19:06:37.421016 sshd[1609]: pam_unix(sshd:session): session closed for user core
Jun 25 19:06:37.432817 systemd[1]: sshd@3-172.24.4.61:22-172.24.4.1:57972.service: Deactivated successfully.
Jun 25 19:06:37.435930 systemd[1]: session-6.scope: Deactivated successfully.
Jun 25 19:06:37.437534 systemd-logind[1434]: Session 6 logged out. Waiting for processes to exit.
Jun 25 19:06:37.445813 systemd[1]: Started sshd@4-172.24.4.61:22-172.24.4.1:57980.service - OpenSSH per-connection server daemon (172.24.4.1:57980).
Jun 25 19:06:37.449176 systemd-logind[1434]: Removed session 6.
Jun 25 19:06:38.737811 sshd[1616]: Accepted publickey for core from 172.24.4.1 port 57980 ssh2: RSA SHA256:otxWgi1QNrVHlA+DL2lID1btX/FnfujF3xA/xUdUjyI
Jun 25 19:06:38.740617 sshd[1616]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 19:06:38.752853 systemd-logind[1434]: New session 7 of user core.
Jun 25 19:06:38.759713 systemd[1]: Started session-7.scope - Session 7 of User core.
Jun 25 19:06:39.189582 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jun 25 19:06:39.200646 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 25 19:06:39.399178 sudo[1622]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jun 25 19:06:39.399872 sudo[1622]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jun 25 19:06:39.433130 sudo[1622]: pam_unix(sudo:session): session closed for user root
Jun 25 19:06:39.589871 sshd[1616]: pam_unix(sshd:session): session closed for user core
Jun 25 19:06:39.604702 systemd[1]: sshd@4-172.24.4.61:22-172.24.4.1:57980.service: Deactivated successfully.
Jun 25 19:06:39.610813 systemd[1]: session-7.scope: Deactivated successfully.
Jun 25 19:06:39.619005 systemd-logind[1434]: Session 7 logged out. Waiting for processes to exit.
Jun 25 19:06:39.620810 systemd[1]: Started sshd@5-172.24.4.61:22-172.24.4.1:57990.service - OpenSSH per-connection server daemon (172.24.4.1:57990).
Jun 25 19:06:39.624423 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 25 19:06:39.628483 systemd-logind[1434]: Removed session 7.
Jun 25 19:06:39.634947 (kubelet)[1633]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 25 19:06:39.705051 kubelet[1633]: E0625 19:06:39.704916 1633 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 25 19:06:39.712799 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 25 19:06:39.712964 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 25 19:06:40.852519 sshd[1632]: Accepted publickey for core from 172.24.4.1 port 57990 ssh2: RSA SHA256:otxWgi1QNrVHlA+DL2lID1btX/FnfujF3xA/xUdUjyI
Jun 25 19:06:40.855421 sshd[1632]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 19:06:40.864911 systemd-logind[1434]: New session 8 of user core.
Jun 25 19:06:40.875538 systemd[1]: Started session-8.scope - Session 8 of User core.
Jun 25 19:06:41.356114 sudo[1645]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jun 25 19:06:41.356817 sudo[1645]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jun 25 19:06:41.364570 sudo[1645]: pam_unix(sudo:session): session closed for user root
Jun 25 19:06:41.376418 sudo[1644]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jun 25 19:06:41.377026 sudo[1644]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jun 25 19:06:41.406993 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jun 25 19:06:41.412386 auditctl[1648]: No rules
Jun 25 19:06:41.413090 systemd[1]: audit-rules.service: Deactivated successfully.
Jun 25 19:06:41.413535 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jun 25 19:06:41.422965 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jun 25 19:06:41.485669 augenrules[1666]: No rules
Jun 25 19:06:41.487009 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jun 25 19:06:41.490440 sudo[1644]: pam_unix(sudo:session): session closed for user root
Jun 25 19:06:41.685664 sshd[1632]: pam_unix(sshd:session): session closed for user core
Jun 25 19:06:41.700227 systemd[1]: sshd@5-172.24.4.61:22-172.24.4.1:57990.service: Deactivated successfully.
Jun 25 19:06:41.704124 systemd[1]: session-8.scope: Deactivated successfully.
Jun 25 19:06:41.708598 systemd-logind[1434]: Session 8 logged out. Waiting for processes to exit.
Jun 25 19:06:41.715835 systemd[1]: Started sshd@6-172.24.4.61:22-172.24.4.1:57996.service - OpenSSH per-connection server daemon (172.24.4.1:57996).
Jun 25 19:06:41.718947 systemd-logind[1434]: Removed session 8.
Jun 25 19:06:42.799610 sshd[1674]: Accepted publickey for core from 172.24.4.1 port 57996 ssh2: RSA SHA256:otxWgi1QNrVHlA+DL2lID1btX/FnfujF3xA/xUdUjyI
Jun 25 19:06:42.802359 sshd[1674]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 19:06:42.813795 systemd-logind[1434]: New session 9 of user core.
Jun 25 19:06:42.821561 systemd[1]: Started session-9.scope - Session 9 of User core.
Jun 25 19:06:43.305914 sudo[1677]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jun 25 19:06:43.307381 sudo[1677]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jun 25 19:06:43.583682 (dockerd)[1687]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jun 25 19:06:43.584546 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jun 25 19:06:44.049580 dockerd[1687]: time="2024-06-25T19:06:44.049492774Z" level=info msg="Starting up"
Jun 25 19:06:44.081057 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport4169182293-merged.mount: Deactivated successfully.
Jun 25 19:06:44.150459 dockerd[1687]: time="2024-06-25T19:06:44.150191735Z" level=info msg="Loading containers: start."
Jun 25 19:06:44.326556 kernel: Initializing XFRM netlink socket
Jun 25 19:06:44.392101 systemd-timesyncd[1338]: Network configuration changed, trying to establish connection.
Jun 25 19:06:45.170878 systemd-resolved[1324]: Clock change detected. Flushing caches.
Jun 25 19:06:45.171531 systemd-timesyncd[1338]: Contacted time server 162.159.200.123:123 (2.flatcar.pool.ntp.org).
Jun 25 19:06:45.171580 systemd-timesyncd[1338]: Initial clock synchronization to Tue 2024-06-25 19:06:45.170755 UTC.
Jun 25 19:06:45.200612 systemd-networkd[1361]: docker0: Link UP
Jun 25 19:06:45.227260 dockerd[1687]: time="2024-06-25T19:06:45.227167495Z" level=info msg="Loading containers: done."
Jun 25 19:06:45.352548 dockerd[1687]: time="2024-06-25T19:06:45.352475811Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jun 25 19:06:45.352912 dockerd[1687]: time="2024-06-25T19:06:45.352664024Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9
Jun 25 19:06:45.352912 dockerd[1687]: time="2024-06-25T19:06:45.352793246Z" level=info msg="Daemon has completed initialization"
Jun 25 19:06:45.419449 dockerd[1687]: time="2024-06-25T19:06:45.419280763Z" level=info msg="API listen on /run/docker.sock"
Jun 25 19:06:45.419745 systemd[1]: Started docker.service - Docker Application Container Engine.
Jun 25 19:06:47.062411 containerd[1458]: time="2024-06-25T19:06:47.062299865Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\""
Jun 25 19:06:47.845275 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount764460438.mount: Deactivated successfully.
Jun 25 19:06:50.067430 containerd[1458]: time="2024-06-25T19:06:50.067317565Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 19:06:50.069176 containerd[1458]: time="2024-06-25T19:06:50.069147127Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.28.11: active requests=0, bytes read=34605186"
Jun 25 19:06:50.070070 containerd[1458]: time="2024-06-25T19:06:50.070046654Z" level=info msg="ImageCreate event name:\"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 19:06:50.073606 containerd[1458]: time="2024-06-25T19:06:50.073578588Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 19:06:50.077111 containerd[1458]: time="2024-06-25T19:06:50.077061851Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.28.11\" with image id \"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\", repo tag \"registry.k8s.io/kube-apiserver:v1.28.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\", size \"34601978\" in 3.014698876s"
Jun 25 19:06:50.077172 containerd[1458]: time="2024-06-25T19:06:50.077117866Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\" returns image reference \"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\""
Jun 25 19:06:50.102773 containerd[1458]: time="2024-06-25T19:06:50.102720523Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\""
Jun 25 19:06:50.669325 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jun 25 19:06:50.681167 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 25 19:06:50.886713 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 25 19:06:50.899142 (kubelet)[1889]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 25 19:06:51.097863 kubelet[1889]: E0625 19:06:51.097131 1889 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 25 19:06:51.102126 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 25 19:06:51.102912 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 25 19:06:52.483905 containerd[1458]: time="2024-06-25T19:06:52.483844641Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 19:06:52.485346 containerd[1458]: time="2024-06-25T19:06:52.485112459Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.28.11: active requests=0, bytes read=31719499"
Jun 25 19:06:52.486428 containerd[1458]: time="2024-06-25T19:06:52.486363686Z" level=info msg="ImageCreate event name:\"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 19:06:52.489513 containerd[1458]: time="2024-06-25T19:06:52.489452549Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 19:06:52.490797 containerd[1458]: time="2024-06-25T19:06:52.490653110Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.28.11\" with image id \"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\", repo tag \"registry.k8s.io/kube-controller-manager:v1.28.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\", size \"33315989\" in 2.387667029s"
Jun 25 19:06:52.490797 containerd[1458]: time="2024-06-25T19:06:52.490689970Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\" returns image reference \"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\""
Jun 25 19:06:52.513488 containerd[1458]: time="2024-06-25T19:06:52.513432243Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\""
Jun 25 19:06:54.336801 containerd[1458]: time="2024-06-25T19:06:54.334895747Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 19:06:54.336801 containerd[1458]: time="2024-06-25T19:06:54.336602809Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.28.11: active requests=0, bytes read=16925437"
Jun 25 19:06:54.338511 containerd[1458]: time="2024-06-25T19:06:54.338458028Z" level=info msg="ImageCreate event name:\"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 19:06:54.348019 containerd[1458]: time="2024-06-25T19:06:54.347943368Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 19:06:54.351157 containerd[1458]: time="2024-06-25T19:06:54.351095319Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.28.11\" with image id \"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\", repo tag \"registry.k8s.io/kube-scheduler:v1.28.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\", size \"18522021\" in 1.837609146s"
Jun 25 19:06:54.351365 containerd[1458]: time="2024-06-25T19:06:54.351323487Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\" returns image reference \"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\""
Jun 25 19:06:54.397786 containerd[1458]: time="2024-06-25T19:06:54.397675931Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\""
Jun 25 19:06:55.898135 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount446150992.mount: Deactivated successfully.
Jun 25 19:06:56.787351 containerd[1458]: time="2024-06-25T19:06:56.787187449Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 19:06:56.790204 containerd[1458]: time="2024-06-25T19:06:56.790115892Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.28.11: active requests=0, bytes read=28118427"
Jun 25 19:06:56.792421 containerd[1458]: time="2024-06-25T19:06:56.792362786Z" level=info msg="ImageCreate event name:\"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 19:06:56.799236 containerd[1458]: time="2024-06-25T19:06:56.799163591Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.28.11\" with image id \"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\", repo tag \"registry.k8s.io/kube-proxy:v1.28.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\", size \"28117438\" in 2.401415765s"
Jun 25 19:06:56.799375 containerd[1458]: time="2024-06-25T19:06:56.799249191Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\" returns image reference \"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\""
Jun 25 19:06:56.799612 containerd[1458]: time="2024-06-25T19:06:56.799552039Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 19:06:56.852511 containerd[1458]: time="2024-06-25T19:06:56.852387222Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Jun 25 19:06:57.468280 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount649987511.mount: Deactivated successfully.
Jun 25 19:06:57.479846 containerd[1458]: time="2024-06-25T19:06:57.479692763Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 19:06:57.482710 containerd[1458]: time="2024-06-25T19:06:57.482226655Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298"
Jun 25 19:06:57.484156 containerd[1458]: time="2024-06-25T19:06:57.484090761Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 19:06:57.490636 containerd[1458]: time="2024-06-25T19:06:57.490586524Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 19:06:57.492896 containerd[1458]: time="2024-06-25T19:06:57.492824792Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 640.355396ms"
Jun 25 19:06:57.493002 containerd[1458]: time="2024-06-25T19:06:57.492897739Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Jun 25 19:06:57.534548 containerd[1458]: time="2024-06-25T19:06:57.534443318Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Jun 25 19:06:58.027357 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2203777370.mount: Deactivated successfully.
Jun 25 19:07:00.548624 containerd[1458]: time="2024-06-25T19:07:00.548564942Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 19:07:00.550751 containerd[1458]: time="2024-06-25T19:07:00.550709654Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651291"
Jun 25 19:07:00.552087 containerd[1458]: time="2024-06-25T19:07:00.552063624Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 19:07:00.555434 containerd[1458]: time="2024-06-25T19:07:00.555411493Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 19:07:00.556848 containerd[1458]: time="2024-06-25T19:07:00.556822459Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 3.022310312s"
Jun 25 19:07:00.556946 containerd[1458]: time="2024-06-25T19:07:00.556927817Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\""
Jun 25 19:07:00.581193 containerd[1458]: time="2024-06-25T19:07:00.581149735Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\""
Jun 25 19:07:01.006575 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount637567169.mount: Deactivated successfully.
Jun 25 19:07:01.169324 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jun 25 19:07:01.179142 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 19:07:01.751103 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 19:07:01.765399 (kubelet)[2001]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 19:07:01.865223 kubelet[2001]: E0625 19:07:01.865176 2001 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 19:07:01.868279 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 19:07:01.868413 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 19:07:02.672267 containerd[1458]: time="2024-06-25T19:07:02.672175904Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 19:07:02.674916 containerd[1458]: time="2024-06-25T19:07:02.674786219Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.10.1: active requests=0, bytes read=16191643" Jun 25 19:07:02.676450 containerd[1458]: time="2024-06-25T19:07:02.676343340Z" level=info msg="ImageCreate event name:\"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 19:07:02.681831 containerd[1458]: time="2024-06-25T19:07:02.681645564Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 19:07:02.684583 containerd[1458]: time="2024-06-25T19:07:02.684188533Z" level=info msg="Pulled 
image \"registry.k8s.io/coredns/coredns:v1.10.1\" with image id \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\", repo tag \"registry.k8s.io/coredns/coredns:v1.10.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\", size \"16190758\" in 2.102793028s" Jun 25 19:07:02.684583 containerd[1458]: time="2024-06-25T19:07:02.684292088Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\"" Jun 25 19:07:07.122983 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 19:07:07.152410 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 19:07:07.174307 systemd[1]: Reloading requested from client PID 2072 ('systemctl') (unit session-9.scope)... Jun 25 19:07:07.174322 systemd[1]: Reloading... Jun 25 19:07:07.274820 zram_generator::config[2106]: No configuration found. Jun 25 19:07:07.422716 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 19:07:07.505433 systemd[1]: Reloading finished in 330 ms. Jun 25 19:07:07.560587 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 19:07:07.562448 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 19:07:07.573241 systemd[1]: kubelet.service: Deactivated successfully. Jun 25 19:07:07.573467 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 19:07:07.580343 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 19:07:07.670123 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jun 25 19:07:07.673627 (kubelet)[2178]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 25 19:07:08.104018 kubelet[2178]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 19:07:08.104018 kubelet[2178]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 25 19:07:08.104018 kubelet[2178]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 19:07:08.104018 kubelet[2178]: I0625 19:07:08.103496 2178 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 19:07:09.226286 kubelet[2178]: I0625 19:07:09.226237 2178 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jun 25 19:07:09.226286 kubelet[2178]: I0625 19:07:09.226298 2178 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 19:07:09.226843 kubelet[2178]: I0625 19:07:09.226816 2178 server.go:895] "Client rotation is on, will bootstrap in background" Jun 25 19:07:09.257759 kubelet[2178]: I0625 19:07:09.257716 2178 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 19:07:09.258759 kubelet[2178]: E0625 19:07:09.258698 2178 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
"https://172.24.4.61:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.61:6443: connect: connection refused Jun 25 19:07:09.282038 kubelet[2178]: I0625 19:07:09.282005 2178 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jun 25 19:07:09.282572 kubelet[2178]: I0625 19:07:09.282534 2178 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 19:07:09.283020 kubelet[2178]: I0625 19:07:09.282973 2178 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 19:07:09.284278 
kubelet[2178]: I0625 19:07:09.284233 2178 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 19:07:09.284318 kubelet[2178]: I0625 19:07:09.284285 2178 container_manager_linux.go:301] "Creating device plugin manager" Jun 25 19:07:09.286205 kubelet[2178]: I0625 19:07:09.286159 2178 state_mem.go:36] "Initialized new in-memory state store" Jun 25 19:07:09.290193 kubelet[2178]: I0625 19:07:09.289729 2178 kubelet.go:393] "Attempting to sync node with API server" Jun 25 19:07:09.290193 kubelet[2178]: I0625 19:07:09.289807 2178 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 25 19:07:09.290193 kubelet[2178]: I0625 19:07:09.289862 2178 kubelet.go:309] "Adding apiserver pod source" Jun 25 19:07:09.290193 kubelet[2178]: I0625 19:07:09.289886 2178 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 19:07:09.292894 kubelet[2178]: W0625 19:07:09.292814 2178 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.24.4.61:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4012-0-0-8-d63f105dc7.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.61:6443: connect: connection refused Jun 25 19:07:09.293471 kubelet[2178]: E0625 19:07:09.293084 2178 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.61:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4012-0-0-8-d63f105dc7.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.61:6443: connect: connection refused Jun 25 19:07:09.293471 kubelet[2178]: W0625 19:07:09.293253 2178 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.24.4.61:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.61:6443: connect: connection refused Jun 25 19:07:09.293471 kubelet[2178]: E0625 19:07:09.293330 2178 reflector.go:147] 
vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.61:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.61:6443: connect: connection refused Jun 25 19:07:09.294143 kubelet[2178]: I0625 19:07:09.294112 2178 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1" Jun 25 19:07:09.301838 kubelet[2178]: W0625 19:07:09.301598 2178 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jun 25 19:07:09.302869 kubelet[2178]: I0625 19:07:09.302780 2178 server.go:1232] "Started kubelet" Jun 25 19:07:09.307831 kubelet[2178]: I0625 19:07:09.307570 2178 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jun 25 19:07:09.310514 kubelet[2178]: I0625 19:07:09.309001 2178 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 25 19:07:09.310514 kubelet[2178]: I0625 19:07:09.309111 2178 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 19:07:09.311125 kubelet[2178]: I0625 19:07:09.311092 2178 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 19:07:09.311923 kubelet[2178]: I0625 19:07:09.311891 2178 server.go:462] "Adding debug handlers to kubelet server" Jun 25 19:07:09.316245 kubelet[2178]: E0625 19:07:09.316051 2178 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-4012-0-0-8-d63f105dc7.novalocal.17dc54d479d7c426", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-4012-0-0-8-d63f105dc7.novalocal", UID:"ci-4012-0-0-8-d63f105dc7.novalocal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-4012-0-0-8-d63f105dc7.novalocal"}, FirstTimestamp:time.Date(2024, time.June, 25, 19, 7, 9, 302670374, time.Local), LastTimestamp:time.Date(2024, time.June, 25, 19, 7, 9, 302670374, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ci-4012-0-0-8-d63f105dc7.novalocal"}': 'Post "https://172.24.4.61:6443/api/v1/namespaces/default/events": dial tcp 172.24.4.61:6443: connect: connection refused'(may retry after sleeping) Jun 25 19:07:09.318159 kubelet[2178]: I0625 19:07:09.318128 2178 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 25 19:07:09.319231 kubelet[2178]: I0625 19:07:09.319194 2178 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jun 25 19:07:09.319497 kubelet[2178]: I0625 19:07:09.319470 2178 reconciler_new.go:29] "Reconciler: start to sync state" Jun 25 19:07:09.320328 kubelet[2178]: W0625 19:07:09.320258 2178 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.24.4.61:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.61:6443: connect: connection refused Jun 25 19:07:09.322133 kubelet[2178]: E0625 19:07:09.322105 2178 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.61:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial 
tcp 172.24.4.61:6443: connect: connection refused Jun 25 19:07:09.322653 kubelet[2178]: E0625 19:07:09.322621 2178 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012-0-0-8-d63f105dc7.novalocal?timeout=10s\": dial tcp 172.24.4.61:6443: connect: connection refused" interval="200ms" Jun 25 19:07:09.324827 kubelet[2178]: E0625 19:07:09.324792 2178 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jun 25 19:07:09.325014 kubelet[2178]: E0625 19:07:09.324994 2178 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 25 19:07:09.353866 kubelet[2178]: I0625 19:07:09.353825 2178 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 25 19:07:09.356561 kubelet[2178]: I0625 19:07:09.355840 2178 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jun 25 19:07:09.356561 kubelet[2178]: I0625 19:07:09.356493 2178 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 25 19:07:09.356561 kubelet[2178]: I0625 19:07:09.356518 2178 kubelet.go:2303] "Starting kubelet main sync loop" Jun 25 19:07:09.356827 kubelet[2178]: E0625 19:07:09.356704 2178 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 25 19:07:09.364818 kubelet[2178]: W0625 19:07:09.364069 2178 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.24.4.61:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.61:6443: connect: connection refused Jun 25 19:07:09.364818 kubelet[2178]: E0625 19:07:09.364116 2178 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.61:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.61:6443: connect: connection refused Jun 25 19:07:09.383703 kubelet[2178]: I0625 19:07:09.383459 2178 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 25 19:07:09.383703 kubelet[2178]: I0625 19:07:09.383479 2178 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 25 19:07:09.383703 kubelet[2178]: I0625 19:07:09.383496 2178 state_mem.go:36] "Initialized new in-memory state store" Jun 25 19:07:09.389388 kubelet[2178]: I0625 19:07:09.389300 2178 policy_none.go:49] "None policy: Start" Jun 25 19:07:09.389847 kubelet[2178]: I0625 19:07:09.389832 2178 memory_manager.go:169] "Starting memorymanager" policy="None" Jun 25 19:07:09.389908 kubelet[2178]: I0625 19:07:09.389868 2178 state_mem.go:35] "Initializing new in-memory state store" Jun 25 19:07:09.396998 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Jun 25 19:07:09.416796 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jun 25 19:07:09.420172 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jun 25 19:07:09.422165 kubelet[2178]: I0625 19:07:09.421889 2178 kubelet_node_status.go:70] "Attempting to register node" node="ci-4012-0-0-8-d63f105dc7.novalocal" Jun 25 19:07:09.422253 kubelet[2178]: E0625 19:07:09.422230 2178 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.24.4.61:6443/api/v1/nodes\": dial tcp 172.24.4.61:6443: connect: connection refused" node="ci-4012-0-0-8-d63f105dc7.novalocal" Jun 25 19:07:09.427793 kubelet[2178]: I0625 19:07:09.427685 2178 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 25 19:07:09.428060 kubelet[2178]: I0625 19:07:09.427920 2178 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 25 19:07:09.429020 kubelet[2178]: E0625 19:07:09.428478 2178 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4012-0-0-8-d63f105dc7.novalocal\" not found" Jun 25 19:07:09.457821 kubelet[2178]: I0625 19:07:09.457726 2178 topology_manager.go:215] "Topology Admit Handler" podUID="cd0b8a0658d44ac974a2879a6ba1fcfd" podNamespace="kube-system" podName="kube-apiserver-ci-4012-0-0-8-d63f105dc7.novalocal" Jun 25 19:07:09.459936 kubelet[2178]: I0625 19:07:09.459896 2178 topology_manager.go:215] "Topology Admit Handler" podUID="75a7fa09ff0449ef8304b67d944119df" podNamespace="kube-system" podName="kube-controller-manager-ci-4012-0-0-8-d63f105dc7.novalocal" Jun 25 19:07:09.461725 kubelet[2178]: I0625 19:07:09.461635 2178 topology_manager.go:215] "Topology Admit Handler" podUID="bde3d11a3a8594628e8ef5cc4cd388fd" podNamespace="kube-system" podName="kube-scheduler-ci-4012-0-0-8-d63f105dc7.novalocal" Jun 25 19:07:09.472599 
systemd[1]: Created slice kubepods-burstable-podcd0b8a0658d44ac974a2879a6ba1fcfd.slice - libcontainer container kubepods-burstable-podcd0b8a0658d44ac974a2879a6ba1fcfd.slice. Jun 25 19:07:09.498289 systemd[1]: Created slice kubepods-burstable-pod75a7fa09ff0449ef8304b67d944119df.slice - libcontainer container kubepods-burstable-pod75a7fa09ff0449ef8304b67d944119df.slice. Jun 25 19:07:09.511030 systemd[1]: Created slice kubepods-burstable-podbde3d11a3a8594628e8ef5cc4cd388fd.slice - libcontainer container kubepods-burstable-podbde3d11a3a8594628e8ef5cc4cd388fd.slice. Jun 25 19:07:09.520202 kubelet[2178]: I0625 19:07:09.520172 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/75a7fa09ff0449ef8304b67d944119df-kubeconfig\") pod \"kube-controller-manager-ci-4012-0-0-8-d63f105dc7.novalocal\" (UID: \"75a7fa09ff0449ef8304b67d944119df\") " pod="kube-system/kube-controller-manager-ci-4012-0-0-8-d63f105dc7.novalocal" Jun 25 19:07:09.520529 kubelet[2178]: I0625 19:07:09.520484 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/75a7fa09ff0449ef8304b67d944119df-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4012-0-0-8-d63f105dc7.novalocal\" (UID: \"75a7fa09ff0449ef8304b67d944119df\") " pod="kube-system/kube-controller-manager-ci-4012-0-0-8-d63f105dc7.novalocal" Jun 25 19:07:09.520930 kubelet[2178]: I0625 19:07:09.520907 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cd0b8a0658d44ac974a2879a6ba1fcfd-k8s-certs\") pod \"kube-apiserver-ci-4012-0-0-8-d63f105dc7.novalocal\" (UID: \"cd0b8a0658d44ac974a2879a6ba1fcfd\") " pod="kube-system/kube-apiserver-ci-4012-0-0-8-d63f105dc7.novalocal" Jun 25 19:07:09.521229 kubelet[2178]: I0625 19:07:09.521209 2178 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cd0b8a0658d44ac974a2879a6ba1fcfd-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4012-0-0-8-d63f105dc7.novalocal\" (UID: \"cd0b8a0658d44ac974a2879a6ba1fcfd\") " pod="kube-system/kube-apiserver-ci-4012-0-0-8-d63f105dc7.novalocal" Jun 25 19:07:09.521567 kubelet[2178]: I0625 19:07:09.521486 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/75a7fa09ff0449ef8304b67d944119df-ca-certs\") pod \"kube-controller-manager-ci-4012-0-0-8-d63f105dc7.novalocal\" (UID: \"75a7fa09ff0449ef8304b67d944119df\") " pod="kube-system/kube-controller-manager-ci-4012-0-0-8-d63f105dc7.novalocal" Jun 25 19:07:09.521859 kubelet[2178]: I0625 19:07:09.521714 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/75a7fa09ff0449ef8304b67d944119df-flexvolume-dir\") pod \"kube-controller-manager-ci-4012-0-0-8-d63f105dc7.novalocal\" (UID: \"75a7fa09ff0449ef8304b67d944119df\") " pod="kube-system/kube-controller-manager-ci-4012-0-0-8-d63f105dc7.novalocal" Jun 25 19:07:09.521859 kubelet[2178]: I0625 19:07:09.521826 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cd0b8a0658d44ac974a2879a6ba1fcfd-ca-certs\") pod \"kube-apiserver-ci-4012-0-0-8-d63f105dc7.novalocal\" (UID: \"cd0b8a0658d44ac974a2879a6ba1fcfd\") " pod="kube-system/kube-apiserver-ci-4012-0-0-8-d63f105dc7.novalocal" Jun 25 19:07:09.522311 kubelet[2178]: I0625 19:07:09.522126 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/75a7fa09ff0449ef8304b67d944119df-k8s-certs\") pod 
\"kube-controller-manager-ci-4012-0-0-8-d63f105dc7.novalocal\" (UID: \"75a7fa09ff0449ef8304b67d944119df\") " pod="kube-system/kube-controller-manager-ci-4012-0-0-8-d63f105dc7.novalocal" Jun 25 19:07:09.522311 kubelet[2178]: I0625 19:07:09.522252 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bde3d11a3a8594628e8ef5cc4cd388fd-kubeconfig\") pod \"kube-scheduler-ci-4012-0-0-8-d63f105dc7.novalocal\" (UID: \"bde3d11a3a8594628e8ef5cc4cd388fd\") " pod="kube-system/kube-scheduler-ci-4012-0-0-8-d63f105dc7.novalocal" Jun 25 19:07:09.523550 kubelet[2178]: E0625 19:07:09.523498 2178 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012-0-0-8-d63f105dc7.novalocal?timeout=10s\": dial tcp 172.24.4.61:6443: connect: connection refused" interval="400ms" Jun 25 19:07:09.626065 kubelet[2178]: I0625 19:07:09.626009 2178 kubelet_node_status.go:70] "Attempting to register node" node="ci-4012-0-0-8-d63f105dc7.novalocal" Jun 25 19:07:09.626685 kubelet[2178]: E0625 19:07:09.626635 2178 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.24.4.61:6443/api/v1/nodes\": dial tcp 172.24.4.61:6443: connect: connection refused" node="ci-4012-0-0-8-d63f105dc7.novalocal" Jun 25 19:07:09.794337 containerd[1458]: time="2024-06-25T19:07:09.794129606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4012-0-0-8-d63f105dc7.novalocal,Uid:cd0b8a0658d44ac974a2879a6ba1fcfd,Namespace:kube-system,Attempt:0,}" Jun 25 19:07:09.812198 containerd[1458]: time="2024-06-25T19:07:09.812094700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4012-0-0-8-d63f105dc7.novalocal,Uid:75a7fa09ff0449ef8304b67d944119df,Namespace:kube-system,Attempt:0,}" Jun 25 19:07:09.816012 containerd[1458]: 
time="2024-06-25T19:07:09.815924082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4012-0-0-8-d63f105dc7.novalocal,Uid:bde3d11a3a8594628e8ef5cc4cd388fd,Namespace:kube-system,Attempt:0,}" Jun 25 19:07:09.924927 kubelet[2178]: E0625 19:07:09.924881 2178 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012-0-0-8-d63f105dc7.novalocal?timeout=10s\": dial tcp 172.24.4.61:6443: connect: connection refused" interval="800ms" Jun 25 19:07:10.030430 kubelet[2178]: I0625 19:07:10.030388 2178 kubelet_node_status.go:70] "Attempting to register node" node="ci-4012-0-0-8-d63f105dc7.novalocal" Jun 25 19:07:10.032553 kubelet[2178]: E0625 19:07:10.032483 2178 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.24.4.61:6443/api/v1/nodes\": dial tcp 172.24.4.61:6443: connect: connection refused" node="ci-4012-0-0-8-d63f105dc7.novalocal" Jun 25 19:07:10.288897 update_engine[1439]: I0625 19:07:10.288741 1439 update_attempter.cc:509] Updating boot flags... 
Jun 25 19:07:10.290767 kubelet[2178]: W0625 19:07:10.289949 2178 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.24.4.61:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4012-0-0-8-d63f105dc7.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.61:6443: connect: connection refused Jun 25 19:07:10.290767 kubelet[2178]: E0625 19:07:10.290060 2178 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.61:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4012-0-0-8-d63f105dc7.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.61:6443: connect: connection refused Jun 25 19:07:10.318499 kubelet[2178]: W0625 19:07:10.318344 2178 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.24.4.61:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.61:6443: connect: connection refused Jun 25 19:07:10.318499 kubelet[2178]: E0625 19:07:10.318453 2178 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.61:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.61:6443: connect: connection refused Jun 25 19:07:10.340921 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (2215) Jun 25 19:07:10.416026 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (2202) Jun 25 19:07:10.471774 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (2202) Jun 25 19:07:10.587726 kubelet[2178]: W0625 19:07:10.587466 2178 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.24.4.61:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 
172.24.4.61:6443: connect: connection refused Jun 25 19:07:10.587726 kubelet[2178]: E0625 19:07:10.587600 2178 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.61:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.61:6443: connect: connection refused Jun 25 19:07:10.726315 kubelet[2178]: E0625 19:07:10.726252 2178 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012-0-0-8-d63f105dc7.novalocal?timeout=10s\": dial tcp 172.24.4.61:6443: connect: connection refused" interval="1.6s" Jun 25 19:07:10.823965 kubelet[2178]: W0625 19:07:10.823896 2178 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.24.4.61:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.61:6443: connect: connection refused Jun 25 19:07:10.823965 kubelet[2178]: E0625 19:07:10.823974 2178 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.61:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.61:6443: connect: connection refused Jun 25 19:07:10.836716 kubelet[2178]: I0625 19:07:10.836161 2178 kubelet_node_status.go:70] "Attempting to register node" node="ci-4012-0-0-8-d63f105dc7.novalocal" Jun 25 19:07:10.836716 kubelet[2178]: E0625 19:07:10.836633 2178 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.24.4.61:6443/api/v1/nodes\": dial tcp 172.24.4.61:6443: connect: connection refused" node="ci-4012-0-0-8-d63f105dc7.novalocal" Jun 25 19:07:11.136008 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount346989205.mount: Deactivated successfully. 
Jun 25 19:07:11.148049 containerd[1458]: time="2024-06-25T19:07:11.147963131Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 19:07:11.150312 containerd[1458]: time="2024-06-25T19:07:11.150220274Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 19:07:11.152707 containerd[1458]: time="2024-06-25T19:07:11.152576573Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jun 25 19:07:11.153272 containerd[1458]: time="2024-06-25T19:07:11.153145650Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 19:07:11.153986 containerd[1458]: time="2024-06-25T19:07:11.153605833Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 19:07:11.155717 containerd[1458]: time="2024-06-25T19:07:11.155635761Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 19:07:11.156839 containerd[1458]: time="2024-06-25T19:07:11.156671162Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 19:07:11.163428 containerd[1458]: time="2024-06-25T19:07:11.163221508Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 19:07:11.168568 
containerd[1458]: time="2024-06-25T19:07:11.167798031Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.351716875s" Jun 25 19:07:11.177235 containerd[1458]: time="2024-06-25T19:07:11.177107701Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.364832373s" Jun 25 19:07:11.177595 containerd[1458]: time="2024-06-25T19:07:11.177496130Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.376210482s" Jun 25 19:07:11.347928 kubelet[2178]: E0625 19:07:11.347879 2178 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.61:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.61:6443: connect: connection refused Jun 25 19:07:11.415873 containerd[1458]: time="2024-06-25T19:07:11.415144850Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 19:07:11.415873 containerd[1458]: time="2024-06-25T19:07:11.415229318Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 19:07:11.415873 containerd[1458]: time="2024-06-25T19:07:11.415318365Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 19:07:11.415873 containerd[1458]: time="2024-06-25T19:07:11.415338132Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 19:07:11.423381 containerd[1458]: time="2024-06-25T19:07:11.423146577Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 19:07:11.423381 containerd[1458]: time="2024-06-25T19:07:11.423207101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 19:07:11.423381 containerd[1458]: time="2024-06-25T19:07:11.423249159Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 19:07:11.423381 containerd[1458]: time="2024-06-25T19:07:11.423281029Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 19:07:11.425902 containerd[1458]: time="2024-06-25T19:07:11.425604437Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 19:07:11.427762 containerd[1458]: time="2024-06-25T19:07:11.426488976Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 19:07:11.427762 containerd[1458]: time="2024-06-25T19:07:11.426544249Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 19:07:11.427762 containerd[1458]: time="2024-06-25T19:07:11.426684122Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 19:07:11.453837 systemd[1]: Started cri-containerd-cfd90fd4da3d7a7dfddf274cccb683fa5be1482176cbef682236fb2630435c19.scope - libcontainer container cfd90fd4da3d7a7dfddf274cccb683fa5be1482176cbef682236fb2630435c19. Jun 25 19:07:11.470915 systemd[1]: Started cri-containerd-8e200526ed40ceb8aca759762181c20beede22e231c155456a40cef68a9c69d7.scope - libcontainer container 8e200526ed40ceb8aca759762181c20beede22e231c155456a40cef68a9c69d7. Jun 25 19:07:11.472947 systemd[1]: Started cri-containerd-b7cc8bc5cd171abfd9aa179353c0522f1de503f186c0d334212c80e16ba8afa9.scope - libcontainer container b7cc8bc5cd171abfd9aa179353c0522f1de503f186c0d334212c80e16ba8afa9. Jun 25 19:07:11.542929 containerd[1458]: time="2024-06-25T19:07:11.542858978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4012-0-0-8-d63f105dc7.novalocal,Uid:cd0b8a0658d44ac974a2879a6ba1fcfd,Namespace:kube-system,Attempt:0,} returns sandbox id \"b7cc8bc5cd171abfd9aa179353c0522f1de503f186c0d334212c80e16ba8afa9\"" Jun 25 19:07:11.552650 containerd[1458]: time="2024-06-25T19:07:11.552596070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4012-0-0-8-d63f105dc7.novalocal,Uid:bde3d11a3a8594628e8ef5cc4cd388fd,Namespace:kube-system,Attempt:0,} returns sandbox id \"cfd90fd4da3d7a7dfddf274cccb683fa5be1482176cbef682236fb2630435c19\"" Jun 25 19:07:11.555492 containerd[1458]: time="2024-06-25T19:07:11.555467014Z" level=info msg="CreateContainer within sandbox \"b7cc8bc5cd171abfd9aa179353c0522f1de503f186c0d334212c80e16ba8afa9\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 25 19:07:11.558193 containerd[1458]: time="2024-06-25T19:07:11.556803130Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-4012-0-0-8-d63f105dc7.novalocal,Uid:75a7fa09ff0449ef8304b67d944119df,Namespace:kube-system,Attempt:0,} returns sandbox id \"8e200526ed40ceb8aca759762181c20beede22e231c155456a40cef68a9c69d7\"" Jun 25 19:07:11.565250 containerd[1458]: time="2024-06-25T19:07:11.565217691Z" level=info msg="CreateContainer within sandbox \"cfd90fd4da3d7a7dfddf274cccb683fa5be1482176cbef682236fb2630435c19\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 25 19:07:11.565551 containerd[1458]: time="2024-06-25T19:07:11.565269829Z" level=info msg="CreateContainer within sandbox \"8e200526ed40ceb8aca759762181c20beede22e231c155456a40cef68a9c69d7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 25 19:07:11.601391 containerd[1458]: time="2024-06-25T19:07:11.601356490Z" level=info msg="CreateContainer within sandbox \"b7cc8bc5cd171abfd9aa179353c0522f1de503f186c0d334212c80e16ba8afa9\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4b6397e5033d6d54a56d975d501f8d408d51e13b6ca6b46f4917df41ae73677a\"" Jun 25 19:07:11.602115 containerd[1458]: time="2024-06-25T19:07:11.602091789Z" level=info msg="StartContainer for \"4b6397e5033d6d54a56d975d501f8d408d51e13b6ca6b46f4917df41ae73677a\"" Jun 25 19:07:11.606397 containerd[1458]: time="2024-06-25T19:07:11.606344405Z" level=info msg="CreateContainer within sandbox \"cfd90fd4da3d7a7dfddf274cccb683fa5be1482176cbef682236fb2630435c19\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f352884adefe524ddffe91e92022b1a756c9eca707233e688e4cdd9dd94fd309\"" Jun 25 19:07:11.610643 containerd[1458]: time="2024-06-25T19:07:11.610604675Z" level=info msg="CreateContainer within sandbox \"8e200526ed40ceb8aca759762181c20beede22e231c155456a40cef68a9c69d7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"25a648d8d1379752d34382e4fb8d4b1ffff0118b56fe693cf64fbc26bf581463\"" Jun 25 
19:07:11.611889 containerd[1458]: time="2024-06-25T19:07:11.611868856Z" level=info msg="StartContainer for \"f352884adefe524ddffe91e92022b1a756c9eca707233e688e4cdd9dd94fd309\"" Jun 25 19:07:11.612752 containerd[1458]: time="2024-06-25T19:07:11.612670018Z" level=info msg="StartContainer for \"25a648d8d1379752d34382e4fb8d4b1ffff0118b56fe693cf64fbc26bf581463\"" Jun 25 19:07:11.639179 systemd[1]: Started cri-containerd-4b6397e5033d6d54a56d975d501f8d408d51e13b6ca6b46f4917df41ae73677a.scope - libcontainer container 4b6397e5033d6d54a56d975d501f8d408d51e13b6ca6b46f4917df41ae73677a. Jun 25 19:07:11.647921 systemd[1]: Started cri-containerd-f352884adefe524ddffe91e92022b1a756c9eca707233e688e4cdd9dd94fd309.scope - libcontainer container f352884adefe524ddffe91e92022b1a756c9eca707233e688e4cdd9dd94fd309. Jun 25 19:07:11.665883 systemd[1]: Started cri-containerd-25a648d8d1379752d34382e4fb8d4b1ffff0118b56fe693cf64fbc26bf581463.scope - libcontainer container 25a648d8d1379752d34382e4fb8d4b1ffff0118b56fe693cf64fbc26bf581463. 
Jun 25 19:07:11.745100 containerd[1458]: time="2024-06-25T19:07:11.745052832Z" level=info msg="StartContainer for \"f352884adefe524ddffe91e92022b1a756c9eca707233e688e4cdd9dd94fd309\" returns successfully" Jun 25 19:07:11.745229 containerd[1458]: time="2024-06-25T19:07:11.745204597Z" level=info msg="StartContainer for \"4b6397e5033d6d54a56d975d501f8d408d51e13b6ca6b46f4917df41ae73677a\" returns successfully" Jun 25 19:07:11.745259 containerd[1458]: time="2024-06-25T19:07:11.745232529Z" level=info msg="StartContainer for \"25a648d8d1379752d34382e4fb8d4b1ffff0118b56fe693cf64fbc26bf581463\" returns successfully" Jun 25 19:07:11.792220 kubelet[2178]: E0625 19:07:11.792104 2178 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-4012-0-0-8-d63f105dc7.novalocal.17dc54d479d7c426", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-4012-0-0-8-d63f105dc7.novalocal", UID:"ci-4012-0-0-8-d63f105dc7.novalocal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-4012-0-0-8-d63f105dc7.novalocal"}, FirstTimestamp:time.Date(2024, time.June, 25, 19, 7, 9, 302670374, time.Local), LastTimestamp:time.Date(2024, time.June, 25, 19, 7, 9, 302670374, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", 
ReportingInstance:"ci-4012-0-0-8-d63f105dc7.novalocal"}': 'Post "https://172.24.4.61:6443/api/v1/namespaces/default/events": dial tcp 172.24.4.61:6443: connect: connection refused'(may retry after sleeping) Jun 25 19:07:12.439551 kubelet[2178]: I0625 19:07:12.439526 2178 kubelet_node_status.go:70] "Attempting to register node" node="ci-4012-0-0-8-d63f105dc7.novalocal" Jun 25 19:07:13.939338 kubelet[2178]: E0625 19:07:13.939278 2178 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4012-0-0-8-d63f105dc7.novalocal\" not found" node="ci-4012-0-0-8-d63f105dc7.novalocal" Jun 25 19:07:14.029515 kubelet[2178]: I0625 19:07:14.029458 2178 kubelet_node_status.go:73] "Successfully registered node" node="ci-4012-0-0-8-d63f105dc7.novalocal" Jun 25 19:07:14.293848 kubelet[2178]: I0625 19:07:14.293616 2178 apiserver.go:52] "Watching apiserver" Jun 25 19:07:14.319970 kubelet[2178]: I0625 19:07:14.319898 2178 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jun 25 19:07:14.404465 kubelet[2178]: E0625 19:07:14.403519 2178 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4012-0-0-8-d63f105dc7.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4012-0-0-8-d63f105dc7.novalocal" Jun 25 19:07:17.271845 systemd[1]: Reloading requested from client PID 2468 ('systemctl') (unit session-9.scope)... Jun 25 19:07:17.272139 systemd[1]: Reloading... Jun 25 19:07:17.364826 zram_generator::config[2505]: No configuration found. Jun 25 19:07:17.509025 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 19:07:17.611568 systemd[1]: Reloading finished in 338 ms. 
Jun 25 19:07:17.649640 kubelet[2178]: I0625 19:07:17.649603 2178 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 19:07:17.650109 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 19:07:17.661962 systemd[1]: kubelet.service: Deactivated successfully. Jun 25 19:07:17.662165 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 19:07:17.662211 systemd[1]: kubelet.service: Consumed 1.830s CPU time, 111.8M memory peak, 0B memory swap peak. Jun 25 19:07:17.668129 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 19:07:17.942792 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 19:07:17.948634 (kubelet)[2569]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 25 19:07:18.190385 kubelet[2569]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 19:07:18.190385 kubelet[2569]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 25 19:07:18.190385 kubelet[2569]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jun 25 19:07:18.191342 kubelet[2569]: I0625 19:07:18.190453 2569 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 19:07:18.195897 kubelet[2569]: I0625 19:07:18.195313 2569 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jun 25 19:07:18.195897 kubelet[2569]: I0625 19:07:18.195342 2569 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 19:07:18.195897 kubelet[2569]: I0625 19:07:18.195532 2569 server.go:895] "Client rotation is on, will bootstrap in background" Jun 25 19:07:18.198565 kubelet[2569]: I0625 19:07:18.197974 2569 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jun 25 19:07:18.199128 kubelet[2569]: I0625 19:07:18.199097 2569 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 19:07:18.211113 kubelet[2569]: I0625 19:07:18.209689 2569 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 25 19:07:18.211113 kubelet[2569]: I0625 19:07:18.209911 2569 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 19:07:18.211113 kubelet[2569]: I0625 19:07:18.210088 2569 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 19:07:18.211113 kubelet[2569]: I0625 19:07:18.210109 2569 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 19:07:18.211113 kubelet[2569]: I0625 19:07:18.210120 2569 container_manager_linux.go:301] "Creating device plugin manager" Jun 25 19:07:18.211113 kubelet[2569]: I0625 
19:07:18.210159 2569 state_mem.go:36] "Initialized new in-memory state store" Jun 25 19:07:18.211421 kubelet[2569]: I0625 19:07:18.210240 2569 kubelet.go:393] "Attempting to sync node with API server" Jun 25 19:07:18.211421 kubelet[2569]: I0625 19:07:18.210255 2569 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 25 19:07:18.211421 kubelet[2569]: I0625 19:07:18.210276 2569 kubelet.go:309] "Adding apiserver pod source" Jun 25 19:07:18.211421 kubelet[2569]: I0625 19:07:18.210289 2569 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 19:07:18.213552 kubelet[2569]: I0625 19:07:18.213526 2569 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1" Jun 25 19:07:18.214057 kubelet[2569]: I0625 19:07:18.214036 2569 server.go:1232] "Started kubelet" Jun 25 19:07:18.220756 kubelet[2569]: I0625 19:07:18.218125 2569 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jun 25 19:07:18.220756 kubelet[2569]: I0625 19:07:18.218418 2569 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 25 19:07:18.220756 kubelet[2569]: I0625 19:07:18.218461 2569 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 19:07:18.220756 kubelet[2569]: I0625 19:07:18.219352 2569 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 19:07:18.220756 kubelet[2569]: I0625 19:07:18.220311 2569 server.go:462] "Adding debug handlers to kubelet server" Jun 25 19:07:18.224939 kubelet[2569]: E0625 19:07:18.224909 2569 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jun 25 19:07:18.224939 kubelet[2569]: E0625 19:07:18.224941 2569 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 25 19:07:18.233793 kubelet[2569]: I0625 19:07:18.231392 2569 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 25 19:07:18.233793 kubelet[2569]: I0625 19:07:18.231480 2569 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jun 25 19:07:18.233793 kubelet[2569]: I0625 19:07:18.231607 2569 reconciler_new.go:29] "Reconciler: start to sync state" Jun 25 19:07:18.239148 kubelet[2569]: I0625 19:07:18.239121 2569 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 25 19:07:18.241160 kubelet[2569]: I0625 19:07:18.241139 2569 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 25 19:07:18.241160 kubelet[2569]: I0625 19:07:18.241162 2569 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 25 19:07:18.241237 kubelet[2569]: I0625 19:07:18.241180 2569 kubelet.go:2303] "Starting kubelet main sync loop" Jun 25 19:07:18.241237 kubelet[2569]: E0625 19:07:18.241222 2569 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 25 19:07:18.339304 kubelet[2569]: I0625 19:07:18.339200 2569 kubelet_node_status.go:70] "Attempting to register node" node="ci-4012-0-0-8-d63f105dc7.novalocal" Jun 25 19:07:18.341462 kubelet[2569]: E0625 19:07:18.341337 2569 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jun 25 19:07:18.357490 kubelet[2569]: I0625 19:07:18.357471 2569 kubelet_node_status.go:108] "Node was previously registered" node="ci-4012-0-0-8-d63f105dc7.novalocal" Jun 25 19:07:18.357837 kubelet[2569]: I0625 19:07:18.357628 2569 kubelet_node_status.go:73] "Successfully registered node" node="ci-4012-0-0-8-d63f105dc7.novalocal" Jun 25 19:07:18.358076 kubelet[2569]: I0625 19:07:18.358065 2569 
cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 25 19:07:18.358405 kubelet[2569]: I0625 19:07:18.358132 2569 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 25 19:07:18.358405 kubelet[2569]: I0625 19:07:18.358147 2569 state_mem.go:36] "Initialized new in-memory state store" Jun 25 19:07:18.358405 kubelet[2569]: I0625 19:07:18.358277 2569 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 25 19:07:18.358405 kubelet[2569]: I0625 19:07:18.358297 2569 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 25 19:07:18.358405 kubelet[2569]: I0625 19:07:18.358304 2569 policy_none.go:49] "None policy: Start" Jun 25 19:07:18.360124 kubelet[2569]: I0625 19:07:18.360086 2569 memory_manager.go:169] "Starting memorymanager" policy="None" Jun 25 19:07:18.360124 kubelet[2569]: I0625 19:07:18.360123 2569 state_mem.go:35] "Initializing new in-memory state store" Jun 25 19:07:18.360271 kubelet[2569]: I0625 19:07:18.360244 2569 state_mem.go:75] "Updated machine memory state" Jun 25 19:07:18.368498 kubelet[2569]: I0625 19:07:18.368296 2569 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 25 19:07:18.370770 kubelet[2569]: I0625 19:07:18.369949 2569 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 25 19:07:18.541914 kubelet[2569]: I0625 19:07:18.541823 2569 topology_manager.go:215] "Topology Admit Handler" podUID="cd0b8a0658d44ac974a2879a6ba1fcfd" podNamespace="kube-system" podName="kube-apiserver-ci-4012-0-0-8-d63f105dc7.novalocal" Jun 25 19:07:18.542160 kubelet[2569]: I0625 19:07:18.542149 2569 topology_manager.go:215] "Topology Admit Handler" podUID="75a7fa09ff0449ef8304b67d944119df" podNamespace="kube-system" podName="kube-controller-manager-ci-4012-0-0-8-d63f105dc7.novalocal" Jun 25 19:07:18.542269 kubelet[2569]: I0625 19:07:18.542257 2569 topology_manager.go:215] "Topology Admit Handler" podUID="bde3d11a3a8594628e8ef5cc4cd388fd" 
podNamespace="kube-system" podName="kube-scheduler-ci-4012-0-0-8-d63f105dc7.novalocal" Jun 25 19:07:18.553190 kubelet[2569]: W0625 19:07:18.552906 2569 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 25 19:07:18.553672 kubelet[2569]: W0625 19:07:18.553116 2569 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 25 19:07:18.553672 kubelet[2569]: W0625 19:07:18.553335 2569 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 25 19:07:18.633252 kubelet[2569]: I0625 19:07:18.633084 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bde3d11a3a8594628e8ef5cc4cd388fd-kubeconfig\") pod \"kube-scheduler-ci-4012-0-0-8-d63f105dc7.novalocal\" (UID: \"bde3d11a3a8594628e8ef5cc4cd388fd\") " pod="kube-system/kube-scheduler-ci-4012-0-0-8-d63f105dc7.novalocal" Jun 25 19:07:18.633252 kubelet[2569]: I0625 19:07:18.633177 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cd0b8a0658d44ac974a2879a6ba1fcfd-ca-certs\") pod \"kube-apiserver-ci-4012-0-0-8-d63f105dc7.novalocal\" (UID: \"cd0b8a0658d44ac974a2879a6ba1fcfd\") " pod="kube-system/kube-apiserver-ci-4012-0-0-8-d63f105dc7.novalocal" Jun 25 19:07:18.633985 kubelet[2569]: I0625 19:07:18.633654 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cd0b8a0658d44ac974a2879a6ba1fcfd-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4012-0-0-8-d63f105dc7.novalocal\" (UID: 
\"cd0b8a0658d44ac974a2879a6ba1fcfd\") " pod="kube-system/kube-apiserver-ci-4012-0-0-8-d63f105dc7.novalocal" Jun 25 19:07:18.633985 kubelet[2569]: I0625 19:07:18.633832 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/75a7fa09ff0449ef8304b67d944119df-flexvolume-dir\") pod \"kube-controller-manager-ci-4012-0-0-8-d63f105dc7.novalocal\" (UID: \"75a7fa09ff0449ef8304b67d944119df\") " pod="kube-system/kube-controller-manager-ci-4012-0-0-8-d63f105dc7.novalocal" Jun 25 19:07:18.634480 kubelet[2569]: I0625 19:07:18.634219 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/75a7fa09ff0449ef8304b67d944119df-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4012-0-0-8-d63f105dc7.novalocal\" (UID: \"75a7fa09ff0449ef8304b67d944119df\") " pod="kube-system/kube-controller-manager-ci-4012-0-0-8-d63f105dc7.novalocal" Jun 25 19:07:18.634480 kubelet[2569]: I0625 19:07:18.634342 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/75a7fa09ff0449ef8304b67d944119df-kubeconfig\") pod \"kube-controller-manager-ci-4012-0-0-8-d63f105dc7.novalocal\" (UID: \"75a7fa09ff0449ef8304b67d944119df\") " pod="kube-system/kube-controller-manager-ci-4012-0-0-8-d63f105dc7.novalocal" Jun 25 19:07:18.634480 kubelet[2569]: I0625 19:07:18.634442 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cd0b8a0658d44ac974a2879a6ba1fcfd-k8s-certs\") pod \"kube-apiserver-ci-4012-0-0-8-d63f105dc7.novalocal\" (UID: \"cd0b8a0658d44ac974a2879a6ba1fcfd\") " pod="kube-system/kube-apiserver-ci-4012-0-0-8-d63f105dc7.novalocal" Jun 25 19:07:18.635141 kubelet[2569]: I0625 19:07:18.634912 2569 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/75a7fa09ff0449ef8304b67d944119df-ca-certs\") pod \"kube-controller-manager-ci-4012-0-0-8-d63f105dc7.novalocal\" (UID: \"75a7fa09ff0449ef8304b67d944119df\") " pod="kube-system/kube-controller-manager-ci-4012-0-0-8-d63f105dc7.novalocal" Jun 25 19:07:18.635141 kubelet[2569]: I0625 19:07:18.635041 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/75a7fa09ff0449ef8304b67d944119df-k8s-certs\") pod \"kube-controller-manager-ci-4012-0-0-8-d63f105dc7.novalocal\" (UID: \"75a7fa09ff0449ef8304b67d944119df\") " pod="kube-system/kube-controller-manager-ci-4012-0-0-8-d63f105dc7.novalocal" Jun 25 19:07:19.213946 kubelet[2569]: I0625 19:07:19.213889 2569 apiserver.go:52] "Watching apiserver" Jun 25 19:07:19.231935 kubelet[2569]: I0625 19:07:19.231865 2569 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jun 25 19:07:19.315379 kubelet[2569]: I0625 19:07:19.315343 2569 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4012-0-0-8-d63f105dc7.novalocal" podStartSLOduration=1.313336476 podCreationTimestamp="2024-06-25 19:07:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 19:07:19.311519258 +0000 UTC m=+1.359428379" watchObservedRunningTime="2024-06-25 19:07:19.313336476 +0000 UTC m=+1.361245607" Jun 25 19:07:19.316209 kubelet[2569]: I0625 19:07:19.316174 2569 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4012-0-0-8-d63f105dc7.novalocal" podStartSLOduration=1.316144147 podCreationTimestamp="2024-06-25 19:07:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 19:07:19.30194903 +0000 UTC m=+1.349858151" watchObservedRunningTime="2024-06-25 19:07:19.316144147 +0000 UTC m=+1.364053278" Jun 25 19:07:19.323095 kubelet[2569]: I0625 19:07:19.322964 2569 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4012-0-0-8-d63f105dc7.novalocal" podStartSLOduration=1.3229274420000001 podCreationTimestamp="2024-06-25 19:07:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 19:07:19.322342528 +0000 UTC m=+1.370251659" watchObservedRunningTime="2024-06-25 19:07:19.322927442 +0000 UTC m=+1.370836563" Jun 25 19:07:25.076386 sudo[1677]: pam_unix(sudo:session): session closed for user root Jun 25 19:07:25.277307 sshd[1674]: pam_unix(sshd:session): session closed for user core Jun 25 19:07:25.281531 systemd-logind[1434]: Session 9 logged out. Waiting for processes to exit. Jun 25 19:07:25.282263 systemd[1]: sshd@6-172.24.4.61:22-172.24.4.1:57996.service: Deactivated successfully. Jun 25 19:07:25.286669 systemd[1]: session-9.scope: Deactivated successfully. Jun 25 19:07:25.286907 systemd[1]: session-9.scope: Consumed 7.467s CPU time, 136.1M memory peak, 0B memory swap peak. Jun 25 19:07:25.290041 systemd-logind[1434]: Removed session 9. Jun 25 19:07:29.914247 kubelet[2569]: I0625 19:07:29.914222 2569 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jun 25 19:07:29.915181 kubelet[2569]: I0625 19:07:29.914836 2569 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jun 25 19:07:29.915258 containerd[1458]: time="2024-06-25T19:07:29.914565388Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jun 25 19:07:30.954596 kubelet[2569]: I0625 19:07:30.954285 2569 topology_manager.go:215] "Topology Admit Handler" podUID="cb9c7ccf-b650-4511-bed8-bb45bfb1bd7b" podNamespace="kube-system" podName="kube-proxy-l7g8d"
Jun 25 19:07:30.988479 systemd[1]: Created slice kubepods-besteffort-podcb9c7ccf_b650_4511_bed8_bb45bfb1bd7b.slice - libcontainer container kubepods-besteffort-podcb9c7ccf_b650_4511_bed8_bb45bfb1bd7b.slice.
Jun 25 19:07:31.016984 kubelet[2569]: I0625 19:07:31.016958 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cb9c7ccf-b650-4511-bed8-bb45bfb1bd7b-kube-proxy\") pod \"kube-proxy-l7g8d\" (UID: \"cb9c7ccf-b650-4511-bed8-bb45bfb1bd7b\") " pod="kube-system/kube-proxy-l7g8d"
Jun 25 19:07:31.017268 kubelet[2569]: I0625 19:07:31.017177 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbxh6\" (UniqueName: \"kubernetes.io/projected/cb9c7ccf-b650-4511-bed8-bb45bfb1bd7b-kube-api-access-wbxh6\") pod \"kube-proxy-l7g8d\" (UID: \"cb9c7ccf-b650-4511-bed8-bb45bfb1bd7b\") " pod="kube-system/kube-proxy-l7g8d"
Jun 25 19:07:31.017268 kubelet[2569]: I0625 19:07:31.017208 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cb9c7ccf-b650-4511-bed8-bb45bfb1bd7b-xtables-lock\") pod \"kube-proxy-l7g8d\" (UID: \"cb9c7ccf-b650-4511-bed8-bb45bfb1bd7b\") " pod="kube-system/kube-proxy-l7g8d"
Jun 25 19:07:31.017268 kubelet[2569]: I0625 19:07:31.017233 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cb9c7ccf-b650-4511-bed8-bb45bfb1bd7b-lib-modules\") pod \"kube-proxy-l7g8d\" (UID: \"cb9c7ccf-b650-4511-bed8-bb45bfb1bd7b\") " pod="kube-system/kube-proxy-l7g8d"
Jun 25 19:07:31.067483 kubelet[2569]: I0625 19:07:31.066929 2569 topology_manager.go:215] "Topology Admit Handler" podUID="27d041ef-0170-4a47-b2f7-b81c41203451" podNamespace="tigera-operator" podName="tigera-operator-76c4974c85-q9f9z"
Jun 25 19:07:31.075456 systemd[1]: Created slice kubepods-besteffort-pod27d041ef_0170_4a47_b2f7_b81c41203451.slice - libcontainer container kubepods-besteffort-pod27d041ef_0170_4a47_b2f7_b81c41203451.slice.
Jun 25 19:07:31.118447 kubelet[2569]: I0625 19:07:31.118406 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/27d041ef-0170-4a47-b2f7-b81c41203451-var-lib-calico\") pod \"tigera-operator-76c4974c85-q9f9z\" (UID: \"27d041ef-0170-4a47-b2f7-b81c41203451\") " pod="tigera-operator/tigera-operator-76c4974c85-q9f9z"
Jun 25 19:07:31.118447 kubelet[2569]: I0625 19:07:31.118449 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vzjc\" (UniqueName: \"kubernetes.io/projected/27d041ef-0170-4a47-b2f7-b81c41203451-kube-api-access-7vzjc\") pod \"tigera-operator-76c4974c85-q9f9z\" (UID: \"27d041ef-0170-4a47-b2f7-b81c41203451\") " pod="tigera-operator/tigera-operator-76c4974c85-q9f9z"
Jun 25 19:07:31.304939 containerd[1458]: time="2024-06-25T19:07:31.304646773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-l7g8d,Uid:cb9c7ccf-b650-4511-bed8-bb45bfb1bd7b,Namespace:kube-system,Attempt:0,}"
Jun 25 19:07:31.349766 containerd[1458]: time="2024-06-25T19:07:31.349206835Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 25 19:07:31.349766 containerd[1458]: time="2024-06-25T19:07:31.349281985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 25 19:07:31.349766 containerd[1458]: time="2024-06-25T19:07:31.349313684Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 25 19:07:31.349766 containerd[1458]: time="2024-06-25T19:07:31.349332750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 25 19:07:31.381255 containerd[1458]: time="2024-06-25T19:07:31.381207723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-q9f9z,Uid:27d041ef-0170-4a47-b2f7-b81c41203451,Namespace:tigera-operator,Attempt:0,}"
Jun 25 19:07:31.384385 systemd[1]: Started cri-containerd-b2cefefc709918bbfb32378a5d1582bd7c59faf783dc6a9faf1f1a29447315e2.scope - libcontainer container b2cefefc709918bbfb32378a5d1582bd7c59faf783dc6a9faf1f1a29447315e2.
Jun 25 19:07:31.412043 containerd[1458]: time="2024-06-25T19:07:31.411924575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-l7g8d,Uid:cb9c7ccf-b650-4511-bed8-bb45bfb1bd7b,Namespace:kube-system,Attempt:0,} returns sandbox id \"b2cefefc709918bbfb32378a5d1582bd7c59faf783dc6a9faf1f1a29447315e2\""
Jun 25 19:07:31.417018 containerd[1458]: time="2024-06-25T19:07:31.416985966Z" level=info msg="CreateContainer within sandbox \"b2cefefc709918bbfb32378a5d1582bd7c59faf783dc6a9faf1f1a29447315e2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jun 25 19:07:31.431379 containerd[1458]: time="2024-06-25T19:07:31.431118587Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 25 19:07:31.431379 containerd[1458]: time="2024-06-25T19:07:31.431185411Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 25 19:07:31.431379 containerd[1458]: time="2024-06-25T19:07:31.431228201Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 25 19:07:31.431379 containerd[1458]: time="2024-06-25T19:07:31.431248129Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 25 19:07:31.443572 containerd[1458]: time="2024-06-25T19:07:31.443523391Z" level=info msg="CreateContainer within sandbox \"b2cefefc709918bbfb32378a5d1582bd7c59faf783dc6a9faf1f1a29447315e2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"07a3b38b52f16f0c956bd4975d3fe5546e9635cd36464d3a95726ec08e9e794e\""
Jun 25 19:07:31.448131 containerd[1458]: time="2024-06-25T19:07:31.445636668Z" level=info msg="StartContainer for \"07a3b38b52f16f0c956bd4975d3fe5546e9635cd36464d3a95726ec08e9e794e\""
Jun 25 19:07:31.452241 systemd[1]: Started cri-containerd-b128f2736141baedcdc754155cf1451310f05d48bfdc6cb9541c8e3717de2862.scope - libcontainer container b128f2736141baedcdc754155cf1451310f05d48bfdc6cb9541c8e3717de2862.
Jun 25 19:07:31.484154 systemd[1]: Started cri-containerd-07a3b38b52f16f0c956bd4975d3fe5546e9635cd36464d3a95726ec08e9e794e.scope - libcontainer container 07a3b38b52f16f0c956bd4975d3fe5546e9635cd36464d3a95726ec08e9e794e.
Jun 25 19:07:31.513442 containerd[1458]: time="2024-06-25T19:07:31.513389371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-q9f9z,Uid:27d041ef-0170-4a47-b2f7-b81c41203451,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b128f2736141baedcdc754155cf1451310f05d48bfdc6cb9541c8e3717de2862\""
Jun 25 19:07:31.516466 containerd[1458]: time="2024-06-25T19:07:31.516271418Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\""
Jun 25 19:07:31.535952 containerd[1458]: time="2024-06-25T19:07:31.535906174Z" level=info msg="StartContainer for \"07a3b38b52f16f0c956bd4975d3fe5546e9635cd36464d3a95726ec08e9e794e\" returns successfully"
Jun 25 19:07:33.157139 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1791887659.mount: Deactivated successfully.
Jun 25 19:07:33.919399 containerd[1458]: time="2024-06-25T19:07:33.919339938Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 19:07:33.921298 containerd[1458]: time="2024-06-25T19:07:33.921046274Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=22076080"
Jun 25 19:07:33.922550 containerd[1458]: time="2024-06-25T19:07:33.922518883Z" level=info msg="ImageCreate event name:\"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 19:07:33.925597 containerd[1458]: time="2024-06-25T19:07:33.925546305Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 19:07:33.926991 containerd[1458]: time="2024-06-25T19:07:33.926392550Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"22070263\" in 2.410045059s"
Jun 25 19:07:33.926991 containerd[1458]: time="2024-06-25T19:07:33.926427526Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\""
Jun 25 19:07:33.928139 containerd[1458]: time="2024-06-25T19:07:33.928113143Z" level=info msg="CreateContainer within sandbox \"b128f2736141baedcdc754155cf1451310f05d48bfdc6cb9541c8e3717de2862\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jun 25 19:07:33.961020 containerd[1458]: time="2024-06-25T19:07:33.960976296Z" level=info msg="CreateContainer within sandbox \"b128f2736141baedcdc754155cf1451310f05d48bfdc6cb9541c8e3717de2862\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"30ae0ee75faf8c29acb702f820089e7c6948cece7adc757cffa750583d2bc39f\""
Jun 25 19:07:33.961776 containerd[1458]: time="2024-06-25T19:07:33.961580258Z" level=info msg="StartContainer for \"30ae0ee75faf8c29acb702f820089e7c6948cece7adc757cffa750583d2bc39f\""
Jun 25 19:07:33.991886 systemd[1]: Started cri-containerd-30ae0ee75faf8c29acb702f820089e7c6948cece7adc757cffa750583d2bc39f.scope - libcontainer container 30ae0ee75faf8c29acb702f820089e7c6948cece7adc757cffa750583d2bc39f.
Jun 25 19:07:34.019801 containerd[1458]: time="2024-06-25T19:07:34.019764157Z" level=info msg="StartContainer for \"30ae0ee75faf8c29acb702f820089e7c6948cece7adc757cffa750583d2bc39f\" returns successfully"
Jun 25 19:07:34.381777 kubelet[2569]: I0625 19:07:34.380973 2569 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-l7g8d" podStartSLOduration=4.37870788 podCreationTimestamp="2024-06-25 19:07:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 19:07:32.38558833 +0000 UTC m=+14.433497511" watchObservedRunningTime="2024-06-25 19:07:34.37870788 +0000 UTC m=+16.426617051"
Jun 25 19:07:34.381777 kubelet[2569]: I0625 19:07:34.381259 2569 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4974c85-q9f9z" podStartSLOduration=0.970102088 podCreationTimestamp="2024-06-25 19:07:31 +0000 UTC" firstStartedPulling="2024-06-25 19:07:31.515622954 +0000 UTC m=+13.563532075" lastFinishedPulling="2024-06-25 19:07:33.926708973 +0000 UTC m=+15.974618104" observedRunningTime="2024-06-25 19:07:34.378011616 +0000 UTC m=+16.425920748" watchObservedRunningTime="2024-06-25 19:07:34.381188117 +0000 UTC m=+16.429097308"
Jun 25 19:07:37.334265 kubelet[2569]: I0625 19:07:37.334162 2569 topology_manager.go:215] "Topology Admit Handler" podUID="a6801a0c-d3f5-459e-ae12-55e71353346e" podNamespace="calico-system" podName="calico-typha-844cf8c4db-jqkc8"
Jun 25 19:07:37.344610 systemd[1]: Created slice kubepods-besteffort-poda6801a0c_d3f5_459e_ae12_55e71353346e.slice - libcontainer container kubepods-besteffort-poda6801a0c_d3f5_459e_ae12_55e71353346e.slice.
Jun 25 19:07:37.427616 kubelet[2569]: I0625 19:07:37.426822 2569 topology_manager.go:215] "Topology Admit Handler" podUID="2f0d61c1-2eaf-4ad1-a143-59df76ba046c" podNamespace="calico-system" podName="calico-node-v26gh"
Jun 25 19:07:37.437270 systemd[1]: Created slice kubepods-besteffort-pod2f0d61c1_2eaf_4ad1_a143_59df76ba046c.slice - libcontainer container kubepods-besteffort-pod2f0d61c1_2eaf_4ad1_a143_59df76ba046c.slice.
Jun 25 19:07:37.458883 kubelet[2569]: I0625 19:07:37.458814 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2f0d61c1-2eaf-4ad1-a143-59df76ba046c-lib-modules\") pod \"calico-node-v26gh\" (UID: \"2f0d61c1-2eaf-4ad1-a143-59df76ba046c\") " pod="calico-system/calico-node-v26gh"
Jun 25 19:07:37.459226 kubelet[2569]: I0625 19:07:37.458863 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/2f0d61c1-2eaf-4ad1-a143-59df76ba046c-var-run-calico\") pod \"calico-node-v26gh\" (UID: \"2f0d61c1-2eaf-4ad1-a143-59df76ba046c\") " pod="calico-system/calico-node-v26gh"
Jun 25 19:07:37.459226 kubelet[2569]: I0625 19:07:37.459097 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a6801a0c-d3f5-459e-ae12-55e71353346e-tigera-ca-bundle\") pod \"calico-typha-844cf8c4db-jqkc8\" (UID: \"a6801a0c-d3f5-459e-ae12-55e71353346e\") " pod="calico-system/calico-typha-844cf8c4db-jqkc8"
Jun 25 19:07:37.459226 kubelet[2569]: I0625 19:07:37.459181 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a6801a0c-d3f5-459e-ae12-55e71353346e-typha-certs\") pod \"calico-typha-844cf8c4db-jqkc8\" (UID: \"a6801a0c-d3f5-459e-ae12-55e71353346e\") " pod="calico-system/calico-typha-844cf8c4db-jqkc8"
Jun 25 19:07:37.459608 kubelet[2569]: I0625 19:07:37.459208 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/2f0d61c1-2eaf-4ad1-a143-59df76ba046c-policysync\") pod \"calico-node-v26gh\" (UID: \"2f0d61c1-2eaf-4ad1-a143-59df76ba046c\") " pod="calico-system/calico-node-v26gh"
Jun 25 19:07:37.459608 kubelet[2569]: I0625 19:07:37.459479 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/2f0d61c1-2eaf-4ad1-a143-59df76ba046c-flexvol-driver-host\") pod \"calico-node-v26gh\" (UID: \"2f0d61c1-2eaf-4ad1-a143-59df76ba046c\") " pod="calico-system/calico-node-v26gh"
Jun 25 19:07:37.459608 kubelet[2569]: I0625 19:07:37.459547 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/2f0d61c1-2eaf-4ad1-a143-59df76ba046c-node-certs\") pod \"calico-node-v26gh\" (UID: \"2f0d61c1-2eaf-4ad1-a143-59df76ba046c\") " pod="calico-system/calico-node-v26gh"
Jun 25 19:07:37.459608 kubelet[2569]: I0625 19:07:37.459575 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9knj\" (UniqueName: \"kubernetes.io/projected/2f0d61c1-2eaf-4ad1-a143-59df76ba046c-kube-api-access-z9knj\") pod \"calico-node-v26gh\" (UID: \"2f0d61c1-2eaf-4ad1-a143-59df76ba046c\") " pod="calico-system/calico-node-v26gh"
Jun 25 19:07:37.460149 kubelet[2569]: I0625 19:07:37.459821 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6lrr\" (UniqueName: \"kubernetes.io/projected/a6801a0c-d3f5-459e-ae12-55e71353346e-kube-api-access-c6lrr\") pod \"calico-typha-844cf8c4db-jqkc8\" (UID: \"a6801a0c-d3f5-459e-ae12-55e71353346e\") " pod="calico-system/calico-typha-844cf8c4db-jqkc8"
Jun 25 19:07:37.460149 kubelet[2569]: I0625 19:07:37.459866 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/2f0d61c1-2eaf-4ad1-a143-59df76ba046c-cni-log-dir\") pod \"calico-node-v26gh\" (UID: \"2f0d61c1-2eaf-4ad1-a143-59df76ba046c\") " pod="calico-system/calico-node-v26gh"
Jun 25 19:07:37.460149 kubelet[2569]: I0625 19:07:37.459891 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2f0d61c1-2eaf-4ad1-a143-59df76ba046c-xtables-lock\") pod \"calico-node-v26gh\" (UID: \"2f0d61c1-2eaf-4ad1-a143-59df76ba046c\") " pod="calico-system/calico-node-v26gh"
Jun 25 19:07:37.460149 kubelet[2569]: I0625 19:07:37.459915 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f0d61c1-2eaf-4ad1-a143-59df76ba046c-tigera-ca-bundle\") pod \"calico-node-v26gh\" (UID: \"2f0d61c1-2eaf-4ad1-a143-59df76ba046c\") " pod="calico-system/calico-node-v26gh"
Jun 25 19:07:37.460149 kubelet[2569]: I0625 19:07:37.459941 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/2f0d61c1-2eaf-4ad1-a143-59df76ba046c-cni-net-dir\") pod \"calico-node-v26gh\" (UID: \"2f0d61c1-2eaf-4ad1-a143-59df76ba046c\") " pod="calico-system/calico-node-v26gh"
Jun 25 19:07:37.460299 kubelet[2569]: I0625 19:07:37.459963 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/2f0d61c1-2eaf-4ad1-a143-59df76ba046c-cni-bin-dir\") pod \"calico-node-v26gh\" (UID: \"2f0d61c1-2eaf-4ad1-a143-59df76ba046c\") " pod="calico-system/calico-node-v26gh"
Jun 25 19:07:37.460299 kubelet[2569]: I0625 19:07:37.459987 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2f0d61c1-2eaf-4ad1-a143-59df76ba046c-var-lib-calico\") pod \"calico-node-v26gh\" (UID: \"2f0d61c1-2eaf-4ad1-a143-59df76ba046c\") " pod="calico-system/calico-node-v26gh"
Jun 25 19:07:37.543251 kubelet[2569]: I0625 19:07:37.543223 2569 topology_manager.go:215] "Topology Admit Handler" podUID="1276367b-2bea-4184-b5ed-849c23171592" podNamespace="calico-system" podName="csi-node-driver-r7zkp"
Jun 25 19:07:37.544492 kubelet[2569]: E0625 19:07:37.544303 2569 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r7zkp" podUID="1276367b-2bea-4184-b5ed-849c23171592"
Jun 25 19:07:37.560653 kubelet[2569]: I0625 19:07:37.560395 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1276367b-2bea-4184-b5ed-849c23171592-registration-dir\") pod \"csi-node-driver-r7zkp\" (UID: \"1276367b-2bea-4184-b5ed-849c23171592\") " pod="calico-system/csi-node-driver-r7zkp"
Jun 25 19:07:37.562447 kubelet[2569]: I0625 19:07:37.561433 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/1276367b-2bea-4184-b5ed-849c23171592-varrun\") pod \"csi-node-driver-r7zkp\" (UID: \"1276367b-2bea-4184-b5ed-849c23171592\") " pod="calico-system/csi-node-driver-r7zkp"
Jun 25 19:07:37.562447 kubelet[2569]: I0625 19:07:37.561498 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1276367b-2bea-4184-b5ed-849c23171592-kubelet-dir\") pod \"csi-node-driver-r7zkp\" (UID: \"1276367b-2bea-4184-b5ed-849c23171592\") " pod="calico-system/csi-node-driver-r7zkp"
Jun 25 19:07:37.562447 kubelet[2569]: I0625 19:07:37.561527 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spxqf\" (UniqueName: \"kubernetes.io/projected/1276367b-2bea-4184-b5ed-849c23171592-kube-api-access-spxqf\") pod \"csi-node-driver-r7zkp\" (UID: \"1276367b-2bea-4184-b5ed-849c23171592\") " pod="calico-system/csi-node-driver-r7zkp"
Jun 25 19:07:37.562447 kubelet[2569]: I0625 19:07:37.561573 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1276367b-2bea-4184-b5ed-849c23171592-socket-dir\") pod \"csi-node-driver-r7zkp\" (UID: \"1276367b-2bea-4184-b5ed-849c23171592\") " pod="calico-system/csi-node-driver-r7zkp"
Jun 25 19:07:37.566548 kubelet[2569]: E0625 19:07:37.566510 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 19:07:37.566548 kubelet[2569]: W0625 19:07:37.566566 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 19:07:37.566548 kubelet[2569]: E0625 19:07:37.566595 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 25 19:07:37.571134 kubelet[2569]: E0625 19:07:37.570959 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 19:07:37.571134 kubelet[2569]: W0625 19:07:37.570985 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 19:07:37.571134 kubelet[2569]: E0625 19:07:37.571013 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 25 19:07:37.573689 kubelet[2569]: E0625 19:07:37.573668 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 19:07:37.573689 kubelet[2569]: W0625 19:07:37.573685 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 19:07:37.573914 kubelet[2569]: E0625 19:07:37.573708 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 25 19:07:37.574073 kubelet[2569]: E0625 19:07:37.574035 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 19:07:37.574073 kubelet[2569]: W0625 19:07:37.574061 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 19:07:37.574149 kubelet[2569]: E0625 19:07:37.574076 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 25 19:07:37.574819 kubelet[2569]: E0625 19:07:37.574708 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 19:07:37.576761 kubelet[2569]: W0625 19:07:37.574725 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 19:07:37.577113 kubelet[2569]: E0625 19:07:37.577071 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 25 19:07:37.579749 kubelet[2569]: E0625 19:07:37.579712 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 19:07:37.579841 kubelet[2569]: W0625 19:07:37.579728 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 19:07:37.580043 kubelet[2569]: E0625 19:07:37.580023 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 25 19:07:37.580858 kubelet[2569]: E0625 19:07:37.580817 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 19:07:37.580858 kubelet[2569]: W0625 19:07:37.580855 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 19:07:37.581609 kubelet[2569]: E0625 19:07:37.580921 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 25 19:07:37.581609 kubelet[2569]: E0625 19:07:37.581247 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 19:07:37.581609 kubelet[2569]: W0625 19:07:37.581257 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 19:07:37.581609 kubelet[2569]: E0625 19:07:37.581583 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 19:07:37.581609 kubelet[2569]: W0625 19:07:37.581590 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 19:07:37.581751 kubelet[2569]: E0625 19:07:37.581664 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 25 19:07:37.581751 kubelet[2569]: E0625 19:07:37.581684 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 25 19:07:37.582255 kubelet[2569]: E0625 19:07:37.581863 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 19:07:37.582255 kubelet[2569]: W0625 19:07:37.581876 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 19:07:37.582255 kubelet[2569]: E0625 19:07:37.581934 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 25 19:07:37.582255 kubelet[2569]: E0625 19:07:37.582071 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 19:07:37.582255 kubelet[2569]: W0625 19:07:37.582103 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 19:07:37.582255 kubelet[2569]: E0625 19:07:37.582162 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 25 19:07:37.582519 kubelet[2569]: E0625 19:07:37.582260 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 19:07:37.582519 kubelet[2569]: W0625 19:07:37.582292 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 19:07:37.582519 kubelet[2569]: E0625 19:07:37.582402 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 25 19:07:37.582519 kubelet[2569]: E0625 19:07:37.582515 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 19:07:37.582682 kubelet[2569]: W0625 19:07:37.582523 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 19:07:37.582682 kubelet[2569]: E0625 19:07:37.582556 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 25 19:07:37.582797 kubelet[2569]: E0625 19:07:37.582709 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 19:07:37.582797 kubelet[2569]: W0625 19:07:37.582718 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 19:07:37.582797 kubelet[2569]: E0625 19:07:37.582768 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 25 19:07:37.583203 kubelet[2569]: E0625 19:07:37.582919 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 19:07:37.583203 kubelet[2569]: W0625 19:07:37.582927 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 19:07:37.583203 kubelet[2569]: E0625 19:07:37.582963 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 25 19:07:37.583203 kubelet[2569]: E0625 19:07:37.583103 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 19:07:37.583203 kubelet[2569]: W0625 19:07:37.583111 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 19:07:37.583203 kubelet[2569]: E0625 19:07:37.583157 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 25 19:07:37.583762 kubelet[2569]: E0625 19:07:37.583363 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 19:07:37.583762 kubelet[2569]: W0625 19:07:37.583376 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 19:07:37.583762 kubelet[2569]: E0625 19:07:37.583402 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 25 19:07:37.584067 kubelet[2569]: E0625 19:07:37.584050 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 19:07:37.584067 kubelet[2569]: W0625 19:07:37.584063 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 19:07:37.584174 kubelet[2569]: E0625 19:07:37.584078 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 25 19:07:37.584709 kubelet[2569]: E0625 19:07:37.584521 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 19:07:37.584709 kubelet[2569]: W0625 19:07:37.584535 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 19:07:37.584815 kubelet[2569]: E0625 19:07:37.584793 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 25 19:07:37.586068 kubelet[2569]: E0625 19:07:37.586026 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 19:07:37.586068 kubelet[2569]: W0625 19:07:37.586040 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 19:07:37.586247 kubelet[2569]: E0625 19:07:37.586178 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 25 19:07:37.589169 kubelet[2569]: E0625 19:07:37.589043 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 19:07:37.589169 kubelet[2569]: W0625 19:07:37.589074 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 19:07:37.589169 kubelet[2569]: E0625 19:07:37.589158 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 25 19:07:37.589861 kubelet[2569]: E0625 19:07:37.589850 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 19:07:37.590074 kubelet[2569]: W0625 19:07:37.589933 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 19:07:37.590074 kubelet[2569]: E0625 19:07:37.589972 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 25 19:07:37.590259 kubelet[2569]: E0625 19:07:37.590211 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 19:07:37.590849 kubelet[2569]: W0625 19:07:37.590771 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 19:07:37.590849 kubelet[2569]: E0625 19:07:37.590810 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 19:07:37.593926 kubelet[2569]: E0625 19:07:37.593815 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 19:07:37.593926 kubelet[2569]: W0625 19:07:37.593832 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 19:07:37.593926 kubelet[2569]: E0625 19:07:37.593900 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 19:07:37.594308 kubelet[2569]: E0625 19:07:37.594180 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 19:07:37.594308 kubelet[2569]: W0625 19:07:37.594191 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 19:07:37.594308 kubelet[2569]: E0625 19:07:37.594208 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 19:07:37.594490 kubelet[2569]: E0625 19:07:37.594480 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 19:07:37.594556 kubelet[2569]: W0625 19:07:37.594546 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 19:07:37.594637 kubelet[2569]: E0625 19:07:37.594607 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 19:07:37.609222 kubelet[2569]: E0625 19:07:37.609146 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 19:07:37.609222 kubelet[2569]: W0625 19:07:37.609163 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 19:07:37.609222 kubelet[2569]: E0625 19:07:37.609183 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 19:07:37.615287 kubelet[2569]: E0625 19:07:37.615260 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 19:07:37.615435 kubelet[2569]: W0625 19:07:37.615292 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 19:07:37.615435 kubelet[2569]: E0625 19:07:37.615314 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 19:07:37.652483 containerd[1458]: time="2024-06-25T19:07:37.652041550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-844cf8c4db-jqkc8,Uid:a6801a0c-d3f5-459e-ae12-55e71353346e,Namespace:calico-system,Attempt:0,}" Jun 25 19:07:37.662986 kubelet[2569]: E0625 19:07:37.662824 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 19:07:37.662986 kubelet[2569]: W0625 19:07:37.662863 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 19:07:37.662986 kubelet[2569]: E0625 19:07:37.662881 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 19:07:37.663545 kubelet[2569]: E0625 19:07:37.663491 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 19:07:37.663545 kubelet[2569]: W0625 19:07:37.663503 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 19:07:37.663697 kubelet[2569]: E0625 19:07:37.663654 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 19:07:37.663945 kubelet[2569]: E0625 19:07:37.663895 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 19:07:37.664011 kubelet[2569]: W0625 19:07:37.663945 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 19:07:37.664011 kubelet[2569]: E0625 19:07:37.663977 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 19:07:37.664227 kubelet[2569]: E0625 19:07:37.664211 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 19:07:37.664227 kubelet[2569]: W0625 19:07:37.664224 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 19:07:37.664331 kubelet[2569]: E0625 19:07:37.664244 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 19:07:37.664449 kubelet[2569]: E0625 19:07:37.664430 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 19:07:37.664449 kubelet[2569]: W0625 19:07:37.664443 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 19:07:37.664677 kubelet[2569]: E0625 19:07:37.664461 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 19:07:37.664885 kubelet[2569]: E0625 19:07:37.664875 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 19:07:37.665253 kubelet[2569]: W0625 19:07:37.664934 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 19:07:37.665991 kubelet[2569]: E0625 19:07:37.665590 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 19:07:37.665991 kubelet[2569]: E0625 19:07:37.665794 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 19:07:37.665991 kubelet[2569]: W0625 19:07:37.665805 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 19:07:37.665991 kubelet[2569]: E0625 19:07:37.665820 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 19:07:37.665991 kubelet[2569]: E0625 19:07:37.665997 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 19:07:37.665991 kubelet[2569]: W0625 19:07:37.666006 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 19:07:37.669516 kubelet[2569]: E0625 19:07:37.666023 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 19:07:37.669516 kubelet[2569]: E0625 19:07:37.666159 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 19:07:37.669516 kubelet[2569]: W0625 19:07:37.666169 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 19:07:37.669516 kubelet[2569]: E0625 19:07:37.666180 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 19:07:37.669516 kubelet[2569]: E0625 19:07:37.666357 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 19:07:37.669516 kubelet[2569]: W0625 19:07:37.666368 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 19:07:37.669516 kubelet[2569]: E0625 19:07:37.666381 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 19:07:37.671304 kubelet[2569]: E0625 19:07:37.670064 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 19:07:37.671304 kubelet[2569]: W0625 19:07:37.670076 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 19:07:37.671304 kubelet[2569]: E0625 19:07:37.670097 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 19:07:37.671304 kubelet[2569]: E0625 19:07:37.670852 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 19:07:37.671304 kubelet[2569]: W0625 19:07:37.670872 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 19:07:37.671304 kubelet[2569]: E0625 19:07:37.671017 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 19:07:37.676824 kubelet[2569]: E0625 19:07:37.672923 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 19:07:37.676824 kubelet[2569]: W0625 19:07:37.672938 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 19:07:37.676824 kubelet[2569]: E0625 19:07:37.673101 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 19:07:37.676824 kubelet[2569]: W0625 19:07:37.673109 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 19:07:37.676824 kubelet[2569]: E0625 19:07:37.673227 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 19:07:37.676824 kubelet[2569]: W0625 19:07:37.673234 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: 
[init], error: executable file not found in $PATH, output: "" Jun 25 19:07:37.676824 kubelet[2569]: E0625 19:07:37.673348 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 19:07:37.676824 kubelet[2569]: W0625 19:07:37.673356 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 19:07:37.676824 kubelet[2569]: E0625 19:07:37.673369 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 19:07:37.676824 kubelet[2569]: E0625 19:07:37.673488 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 19:07:37.677220 kubelet[2569]: W0625 19:07:37.673496 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 19:07:37.677220 kubelet[2569]: E0625 19:07:37.673507 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 19:07:37.677220 kubelet[2569]: E0625 19:07:37.673627 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 19:07:37.677220 kubelet[2569]: W0625 19:07:37.673634 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 19:07:37.677220 kubelet[2569]: E0625 19:07:37.673670 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 19:07:37.677220 kubelet[2569]: E0625 19:07:37.673907 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 19:07:37.677220 kubelet[2569]: W0625 19:07:37.673915 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 19:07:37.677220 kubelet[2569]: E0625 19:07:37.673927 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 19:07:37.677220 kubelet[2569]: E0625 19:07:37.673947 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 19:07:37.677220 kubelet[2569]: E0625 19:07:37.674066 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 19:07:37.677506 kubelet[2569]: W0625 19:07:37.674073 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 19:07:37.677506 kubelet[2569]: E0625 19:07:37.674084 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 19:07:37.677506 kubelet[2569]: E0625 19:07:37.674199 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 19:07:37.677506 kubelet[2569]: W0625 19:07:37.674207 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 19:07:37.677506 kubelet[2569]: E0625 19:07:37.674217 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 19:07:37.677506 kubelet[2569]: E0625 19:07:37.674344 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 19:07:37.677506 kubelet[2569]: W0625 19:07:37.674352 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 19:07:37.677506 kubelet[2569]: E0625 19:07:37.674362 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 19:07:37.677506 kubelet[2569]: E0625 19:07:37.675294 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 19:07:37.677506 kubelet[2569]: E0625 19:07:37.675317 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 19:07:37.679353 kubelet[2569]: E0625 19:07:37.675448 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 19:07:37.679353 kubelet[2569]: W0625 19:07:37.675455 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 19:07:37.679353 kubelet[2569]: E0625 19:07:37.675466 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 19:07:37.679353 kubelet[2569]: E0625 19:07:37.675629 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 19:07:37.679353 kubelet[2569]: W0625 19:07:37.675637 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 19:07:37.679353 kubelet[2569]: E0625 19:07:37.675649 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 19:07:37.679353 kubelet[2569]: E0625 19:07:37.676626 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 19:07:37.679353 kubelet[2569]: W0625 19:07:37.676636 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 19:07:37.679353 kubelet[2569]: E0625 19:07:37.676649 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 19:07:37.688011 kubelet[2569]: E0625 19:07:37.687783 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 19:07:37.688011 kubelet[2569]: W0625 19:07:37.687809 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 19:07:37.688011 kubelet[2569]: E0625 19:07:37.687839 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 19:07:37.700094 containerd[1458]: time="2024-06-25T19:07:37.699896351Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 19:07:37.700094 containerd[1458]: time="2024-06-25T19:07:37.699957065Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 19:07:37.700813 containerd[1458]: time="2024-06-25T19:07:37.700776600Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 19:07:37.701019 containerd[1458]: time="2024-06-25T19:07:37.700878982Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 19:07:37.721937 systemd[1]: Started cri-containerd-c8fd43baa4dcadd4fc2ab4620969d8fdf98e7e6c4d31ea84d768fa2587290eb4.scope - libcontainer container c8fd43baa4dcadd4fc2ab4620969d8fdf98e7e6c4d31ea84d768fa2587290eb4. 
Jun 25 19:07:37.744774 containerd[1458]: time="2024-06-25T19:07:37.744068160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-v26gh,Uid:2f0d61c1-2eaf-4ad1-a143-59df76ba046c,Namespace:calico-system,Attempt:0,}"
Jun 25 19:07:37.781152 containerd[1458]: time="2024-06-25T19:07:37.781109699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-844cf8c4db-jqkc8,Uid:a6801a0c-d3f5-459e-ae12-55e71353346e,Namespace:calico-system,Attempt:0,} returns sandbox id \"c8fd43baa4dcadd4fc2ab4620969d8fdf98e7e6c4d31ea84d768fa2587290eb4\""
Jun 25 19:07:37.784063 containerd[1458]: time="2024-06-25T19:07:37.784039920Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\""
Jun 25 19:07:37.794520 containerd[1458]: time="2024-06-25T19:07:37.794041120Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 25 19:07:37.794520 containerd[1458]: time="2024-06-25T19:07:37.794136038Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 25 19:07:37.794660 containerd[1458]: time="2024-06-25T19:07:37.794177756Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 25 19:07:37.794990 containerd[1458]: time="2024-06-25T19:07:37.794851308Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 25 19:07:37.816389 systemd[1]: Started cri-containerd-8e7bd251aa5fd8f9fbf8e355e4a1dc2e66c2015d26fae3df75679aff56de64ba.scope - libcontainer container 8e7bd251aa5fd8f9fbf8e355e4a1dc2e66c2015d26fae3df75679aff56de64ba.
Jun 25 19:07:37.859533 containerd[1458]: time="2024-06-25T19:07:37.858019073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-v26gh,Uid:2f0d61c1-2eaf-4ad1-a143-59df76ba046c,Namespace:calico-system,Attempt:0,} returns sandbox id \"8e7bd251aa5fd8f9fbf8e355e4a1dc2e66c2015d26fae3df75679aff56de64ba\""
Jun 25 19:07:39.338879 kubelet[2569]: E0625 19:07:39.338773 2569 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r7zkp" podUID="1276367b-2bea-4184-b5ed-849c23171592"
Jun 25 19:07:40.999450 containerd[1458]: time="2024-06-25T19:07:40.999395655Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 19:07:41.000983 containerd[1458]: time="2024-06-25T19:07:41.000888373Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=29458030"
Jun 25 19:07:41.002143 containerd[1458]: time="2024-06-25T19:07:41.002053317Z" level=info msg="ImageCreate event name:\"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 19:07:41.005878 containerd[1458]: time="2024-06-25T19:07:41.005824585Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 19:07:41.007937 containerd[1458]: time="2024-06-25T19:07:41.007775982Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"30905782\" in 3.223050207s"
Jun 25 19:07:41.007937 containerd[1458]: time="2024-06-25T19:07:41.007821547Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\""
Jun 25 19:07:41.010413 containerd[1458]: time="2024-06-25T19:07:41.010228308Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\""
Jun 25 19:07:41.038437 containerd[1458]: time="2024-06-25T19:07:41.038363238Z" level=info msg="CreateContainer within sandbox \"c8fd43baa4dcadd4fc2ab4620969d8fdf98e7e6c4d31ea84d768fa2587290eb4\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jun 25 19:07:41.058907 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1432507396.mount: Deactivated successfully.
Jun 25 19:07:41.065684 containerd[1458]: time="2024-06-25T19:07:41.065585519Z" level=info msg="CreateContainer within sandbox \"c8fd43baa4dcadd4fc2ab4620969d8fdf98e7e6c4d31ea84d768fa2587290eb4\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"8d434e2c5a81070190867b94a92bd118e1900f8a8bda6dcbdf7a26e526504684\""
Jun 25 19:07:41.066785 containerd[1458]: time="2024-06-25T19:07:41.066238182Z" level=info msg="StartContainer for \"8d434e2c5a81070190867b94a92bd118e1900f8a8bda6dcbdf7a26e526504684\""
Jun 25 19:07:41.111942 systemd[1]: Started cri-containerd-8d434e2c5a81070190867b94a92bd118e1900f8a8bda6dcbdf7a26e526504684.scope - libcontainer container 8d434e2c5a81070190867b94a92bd118e1900f8a8bda6dcbdf7a26e526504684.
Jun 25 19:07:41.192903 containerd[1458]: time="2024-06-25T19:07:41.192859845Z" level=info msg="StartContainer for \"8d434e2c5a81070190867b94a92bd118e1900f8a8bda6dcbdf7a26e526504684\" returns successfully"
Jun 25 19:07:41.241777 kubelet[2569]: E0625 19:07:41.241446 2569 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r7zkp" podUID="1276367b-2bea-4184-b5ed-849c23171592"
Jun 25 19:07:41.406148 containerd[1458]: time="2024-06-25T19:07:41.405923308Z" level=info msg="StopContainer for \"8d434e2c5a81070190867b94a92bd118e1900f8a8bda6dcbdf7a26e526504684\" with timeout 300 (s)"
Jun 25 19:07:41.407229 containerd[1458]: time="2024-06-25T19:07:41.407134067Z" level=info msg="Stop container \"8d434e2c5a81070190867b94a92bd118e1900f8a8bda6dcbdf7a26e526504684\" with signal terminated"
Jun 25 19:07:41.424845 systemd[1]: cri-containerd-8d434e2c5a81070190867b94a92bd118e1900f8a8bda6dcbdf7a26e526504684.scope: Deactivated successfully.
Jun 25 19:07:41.438748 kubelet[2569]: I0625 19:07:41.438516 2569 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-844cf8c4db-jqkc8" podStartSLOduration=1.212013576 podCreationTimestamp="2024-06-25 19:07:37 +0000 UTC" firstStartedPulling="2024-06-25 19:07:37.783403116 +0000 UTC m=+19.831312247" lastFinishedPulling="2024-06-25 19:07:41.009864776 +0000 UTC m=+23.057773907" observedRunningTime="2024-06-25 19:07:41.436134179 +0000 UTC m=+23.484043300" watchObservedRunningTime="2024-06-25 19:07:41.438475236 +0000 UTC m=+23.486384397"
Jun 25 19:07:41.819354 containerd[1458]: time="2024-06-25T19:07:41.819198336Z" level=info msg="shim disconnected" id=8d434e2c5a81070190867b94a92bd118e1900f8a8bda6dcbdf7a26e526504684 namespace=k8s.io
Jun 25 19:07:41.819354 containerd[1458]: time="2024-06-25T19:07:41.819330834Z" level=warning msg="cleaning up after shim disconnected" id=8d434e2c5a81070190867b94a92bd118e1900f8a8bda6dcbdf7a26e526504684 namespace=k8s.io
Jun 25 19:07:41.819354 containerd[1458]: time="2024-06-25T19:07:41.819358896Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 25 19:07:41.866079 containerd[1458]: time="2024-06-25T19:07:41.865950259Z" level=info msg="StopContainer for \"8d434e2c5a81070190867b94a92bd118e1900f8a8bda6dcbdf7a26e526504684\" returns successfully"
Jun 25 19:07:41.870090 containerd[1458]: time="2024-06-25T19:07:41.869546458Z" level=info msg="StopPodSandbox for \"c8fd43baa4dcadd4fc2ab4620969d8fdf98e7e6c4d31ea84d768fa2587290eb4\""
Jun 25 19:07:41.870090 containerd[1458]: time="2024-06-25T19:07:41.869629344Z" level=info msg="Container to stop \"8d434e2c5a81070190867b94a92bd118e1900f8a8bda6dcbdf7a26e526504684\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jun 25 19:07:41.880668 systemd[1]: cri-containerd-c8fd43baa4dcadd4fc2ab4620969d8fdf98e7e6c4d31ea84d768fa2587290eb4.scope: Deactivated successfully.
Jun 25 19:07:41.919137 containerd[1458]: time="2024-06-25T19:07:41.919034258Z" level=info msg="shim disconnected" id=c8fd43baa4dcadd4fc2ab4620969d8fdf98e7e6c4d31ea84d768fa2587290eb4 namespace=k8s.io Jun 25 19:07:41.919696 containerd[1458]: time="2024-06-25T19:07:41.919365329Z" level=warning msg="cleaning up after shim disconnected" id=c8fd43baa4dcadd4fc2ab4620969d8fdf98e7e6c4d31ea84d768fa2587290eb4 namespace=k8s.io Jun 25 19:07:41.919696 containerd[1458]: time="2024-06-25T19:07:41.919524096Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 19:07:41.940559 containerd[1458]: time="2024-06-25T19:07:41.939993139Z" level=info msg="TearDown network for sandbox \"c8fd43baa4dcadd4fc2ab4620969d8fdf98e7e6c4d31ea84d768fa2587290eb4\" successfully" Jun 25 19:07:41.940559 containerd[1458]: time="2024-06-25T19:07:41.940394230Z" level=info msg="StopPodSandbox for \"c8fd43baa4dcadd4fc2ab4620969d8fdf98e7e6c4d31ea84d768fa2587290eb4\" returns successfully" Jun 25 19:07:41.971718 kubelet[2569]: I0625 19:07:41.971580 2569 topology_manager.go:215] "Topology Admit Handler" podUID="d375063d-a908-406b-b394-bb3e759700e0" podNamespace="calico-system" podName="calico-typha-d74b45f76-5z6zk" Jun 25 19:07:41.974411 kubelet[2569]: E0625 19:07:41.972939 2569 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a6801a0c-d3f5-459e-ae12-55e71353346e" containerName="calico-typha" Jun 25 19:07:41.974411 kubelet[2569]: I0625 19:07:41.972989 2569 memory_manager.go:346] "RemoveStaleState removing state" podUID="a6801a0c-d3f5-459e-ae12-55e71353346e" containerName="calico-typha" Jun 25 19:07:41.983757 systemd[1]: Created slice kubepods-besteffort-podd375063d_a908_406b_b394_bb3e759700e0.slice - libcontainer container kubepods-besteffort-podd375063d_a908_406b_b394_bb3e759700e0.slice. Jun 25 19:07:42.029982 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8d434e2c5a81070190867b94a92bd118e1900f8a8bda6dcbdf7a26e526504684-rootfs.mount: Deactivated successfully. 
Jun 25 19:07:42.030109 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c8fd43baa4dcadd4fc2ab4620969d8fdf98e7e6c4d31ea84d768fa2587290eb4-rootfs.mount: Deactivated successfully. Jun 25 19:07:42.030172 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c8fd43baa4dcadd4fc2ab4620969d8fdf98e7e6c4d31ea84d768fa2587290eb4-shm.mount: Deactivated successfully. Jun 25 19:07:42.044064 kubelet[2569]: E0625 19:07:42.043998 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 19:07:42.044064 kubelet[2569]: W0625 19:07:42.044016 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 19:07:42.044064 kubelet[2569]: E0625 19:07:42.044038 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 19:07:42.044356 kubelet[2569]: E0625 19:07:42.044215 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 19:07:42.044356 kubelet[2569]: W0625 19:07:42.044224 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 19:07:42.044356 kubelet[2569]: E0625 19:07:42.044237 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 19:07:42.044496 kubelet[2569]: E0625 19:07:42.044369 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 19:07:42.044496 kubelet[2569]: W0625 19:07:42.044377 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 19:07:42.044496 kubelet[2569]: E0625 19:07:42.044389 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 19:07:42.044651 kubelet[2569]: E0625 19:07:42.044540 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 19:07:42.044651 kubelet[2569]: W0625 19:07:42.044549 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 19:07:42.044651 kubelet[2569]: E0625 19:07:42.044560 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 19:07:42.044894 kubelet[2569]: E0625 19:07:42.044692 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 19:07:42.044894 kubelet[2569]: W0625 19:07:42.044700 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 19:07:42.044894 kubelet[2569]: E0625 19:07:42.044711 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 19:07:42.044996 kubelet[2569]: E0625 19:07:42.044925 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 19:07:42.044996 kubelet[2569]: W0625 19:07:42.044933 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 19:07:42.044996 kubelet[2569]: E0625 19:07:42.044945 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 19:07:42.045091 kubelet[2569]: E0625 19:07:42.045068 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 19:07:42.045091 kubelet[2569]: W0625 19:07:42.045082 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 19:07:42.045204 kubelet[2569]: E0625 19:07:42.045095 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 19:07:42.045267 kubelet[2569]: E0625 19:07:42.045246 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 19:07:42.045267 kubelet[2569]: W0625 19:07:42.045254 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 19:07:42.045267 kubelet[2569]: E0625 19:07:42.045265 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 19:07:42.045423 kubelet[2569]: E0625 19:07:42.045413 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 19:07:42.045423 kubelet[2569]: W0625 19:07:42.045422 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 19:07:42.045423 kubelet[2569]: E0625 19:07:42.045433 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 19:07:42.045638 kubelet[2569]: E0625 19:07:42.045594 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 19:07:42.045638 kubelet[2569]: W0625 19:07:42.045602 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 19:07:42.045638 kubelet[2569]: E0625 19:07:42.045613 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 19:07:42.045786 kubelet[2569]: E0625 19:07:42.045772 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 19:07:42.045786 kubelet[2569]: W0625 19:07:42.045785 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 19:07:42.045786 kubelet[2569]: E0625 19:07:42.045798 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 19:07:42.046027 kubelet[2569]: E0625 19:07:42.045965 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 19:07:42.046027 kubelet[2569]: W0625 19:07:42.045973 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 19:07:42.046027 kubelet[2569]: E0625 19:07:42.045986 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 19:07:42.145847 kubelet[2569]: E0625 19:07:42.142155 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 19:07:42.145847 kubelet[2569]: W0625 19:07:42.142186 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 19:07:42.145847 kubelet[2569]: E0625 19:07:42.142217 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 19:07:42.145847 kubelet[2569]: I0625 19:07:42.142328 2569 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a6801a0c-d3f5-459e-ae12-55e71353346e-tigera-ca-bundle\") pod \"a6801a0c-d3f5-459e-ae12-55e71353346e\" (UID: \"a6801a0c-d3f5-459e-ae12-55e71353346e\") " Jun 25 19:07:42.145847 kubelet[2569]: E0625 19:07:42.142811 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 19:07:42.145847 kubelet[2569]: W0625 19:07:42.142829 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 19:07:42.145847 kubelet[2569]: E0625 19:07:42.142892 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 19:07:42.145847 kubelet[2569]: I0625 19:07:42.142992 2569 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6lrr\" (UniqueName: \"kubernetes.io/projected/a6801a0c-d3f5-459e-ae12-55e71353346e-kube-api-access-c6lrr\") pod \"a6801a0c-d3f5-459e-ae12-55e71353346e\" (UID: \"a6801a0c-d3f5-459e-ae12-55e71353346e\") " Jun 25 19:07:42.145847 kubelet[2569]: E0625 19:07:42.143398 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 19:07:42.146516 kubelet[2569]: W0625 19:07:42.143415 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 19:07:42.146516 kubelet[2569]: E0625 19:07:42.143440 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 19:07:42.146516 kubelet[2569]: I0625 19:07:42.143481 2569 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a6801a0c-d3f5-459e-ae12-55e71353346e-typha-certs\") pod \"a6801a0c-d3f5-459e-ae12-55e71353346e\" (UID: \"a6801a0c-d3f5-459e-ae12-55e71353346e\") " Jun 25 19:07:42.146516 kubelet[2569]: E0625 19:07:42.144184 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 19:07:42.146516 kubelet[2569]: W0625 19:07:42.144201 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 19:07:42.146516 kubelet[2569]: E0625 19:07:42.144226 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 19:07:42.146516 kubelet[2569]: I0625 19:07:42.144267 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhfq6\" (UniqueName: \"kubernetes.io/projected/d375063d-a908-406b-b394-bb3e759700e0-kube-api-access-zhfq6\") pod \"calico-typha-d74b45f76-5z6zk\" (UID: \"d375063d-a908-406b-b394-bb3e759700e0\") " pod="calico-system/calico-typha-d74b45f76-5z6zk" Jun 25 19:07:42.146516 kubelet[2569]: E0625 19:07:42.145060 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 19:07:42.150291 kubelet[2569]: W0625 19:07:42.145081 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 19:07:42.150291 kubelet[2569]: E0625 19:07:42.145105 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 19:07:42.150291 kubelet[2569]: I0625 19:07:42.145171 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d375063d-a908-406b-b394-bb3e759700e0-tigera-ca-bundle\") pod \"calico-typha-d74b45f76-5z6zk\" (UID: \"d375063d-a908-406b-b394-bb3e759700e0\") " pod="calico-system/calico-typha-d74b45f76-5z6zk" Jun 25 19:07:42.150291 kubelet[2569]: E0625 19:07:42.145946 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 19:07:42.150291 kubelet[2569]: W0625 19:07:42.145964 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 19:07:42.150291 kubelet[2569]: E0625 19:07:42.146025 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 19:07:42.150291 kubelet[2569]: I0625 19:07:42.146072 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/d375063d-a908-406b-b394-bb3e759700e0-typha-certs\") pod \"calico-typha-d74b45f76-5z6zk\" (UID: \"d375063d-a908-406b-b394-bb3e759700e0\") " pod="calico-system/calico-typha-d74b45f76-5z6zk" Jun 25 19:07:42.150291 kubelet[2569]: E0625 19:07:42.147454 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 19:07:42.153452 kubelet[2569]: W0625 19:07:42.147631 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 19:07:42.153452 kubelet[2569]: E0625 19:07:42.147685 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 19:07:42.154701 kubelet[2569]: E0625 19:07:42.153968 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 19:07:42.154701 kubelet[2569]: W0625 19:07:42.154026 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 19:07:42.154701 kubelet[2569]: E0625 19:07:42.154098 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 19:07:42.157080 kubelet[2569]: E0625 19:07:42.156681 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 19:07:42.157080 kubelet[2569]: W0625 19:07:42.156809 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 19:07:42.158310 kubelet[2569]: E0625 19:07:42.158283 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 19:07:42.162247 kubelet[2569]: E0625 19:07:42.159983 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 19:07:42.162247 kubelet[2569]: W0625 19:07:42.160011 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 19:07:42.162247 kubelet[2569]: E0625 19:07:42.160196 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 19:07:42.169837 systemd[1]: var-lib-kubelet-pods-a6801a0c\x2dd3f5\x2d459e\x2dae12\x2d55e71353346e-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully. 
Jun 25 19:07:42.174855 kubelet[2569]: E0625 19:07:42.173455 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 19:07:42.174855 kubelet[2569]: W0625 19:07:42.173498 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 19:07:42.174855 kubelet[2569]: E0625 19:07:42.173536 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 19:07:42.175464 kubelet[2569]: E0625 19:07:42.175244 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 19:07:42.175464 kubelet[2569]: W0625 19:07:42.175279 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 19:07:42.175464 kubelet[2569]: E0625 19:07:42.175327 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 19:07:42.176375 kubelet[2569]: E0625 19:07:42.176118 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 19:07:42.176375 kubelet[2569]: W0625 19:07:42.176192 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 19:07:42.176375 kubelet[2569]: E0625 19:07:42.176229 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 19:07:42.177686 kubelet[2569]: E0625 19:07:42.177496 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 19:07:42.177686 kubelet[2569]: W0625 19:07:42.177525 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 19:07:42.178273 kubelet[2569]: E0625 19:07:42.178068 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 19:07:42.180045 kubelet[2569]: E0625 19:07:42.179922 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 19:07:42.180045 kubelet[2569]: W0625 19:07:42.179950 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 19:07:42.180838 kubelet[2569]: E0625 19:07:42.180264 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 19:07:42.187530 kubelet[2569]: I0625 19:07:42.185967 2569 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6801a0c-d3f5-459e-ae12-55e71353346e-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "a6801a0c-d3f5-459e-ae12-55e71353346e" (UID: "a6801a0c-d3f5-459e-ae12-55e71353346e"). InnerVolumeSpecName "tigera-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jun 25 19:07:42.189828 kubelet[2569]: I0625 19:07:42.189605 2569 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6801a0c-d3f5-459e-ae12-55e71353346e-kube-api-access-c6lrr" (OuterVolumeSpecName: "kube-api-access-c6lrr") pod "a6801a0c-d3f5-459e-ae12-55e71353346e" (UID: "a6801a0c-d3f5-459e-ae12-55e71353346e"). InnerVolumeSpecName "kube-api-access-c6lrr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 25 19:07:42.192574 systemd[1]: var-lib-kubelet-pods-a6801a0c\x2dd3f5\x2d459e\x2dae12\x2d55e71353346e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dc6lrr.mount: Deactivated successfully. Jun 25 19:07:42.196503 kubelet[2569]: I0625 19:07:42.195511 2569 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6801a0c-d3f5-459e-ae12-55e71353346e-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "a6801a0c-d3f5-459e-ae12-55e71353346e" (UID: "a6801a0c-d3f5-459e-ae12-55e71353346e"). InnerVolumeSpecName "typha-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jun 25 19:07:42.204712 systemd[1]: var-lib-kubelet-pods-a6801a0c\x2dd3f5\x2d459e\x2dae12\x2d55e71353346e-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully. Jun 25 19:07:42.247225 kubelet[2569]: E0625 19:07:42.247155 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 19:07:42.247225 kubelet[2569]: W0625 19:07:42.247175 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 19:07:42.247225 kubelet[2569]: E0625 19:07:42.247195 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 19:07:42.249338 kubelet[2569]: E0625 19:07:42.248024 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 19:07:42.249338 kubelet[2569]: W0625 19:07:42.248034 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 19:07:42.249338 kubelet[2569]: E0625 19:07:42.248049 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 19:07:42.249338 kubelet[2569]: E0625 19:07:42.248508 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 19:07:42.249338 kubelet[2569]: W0625 19:07:42.248518 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 19:07:42.249338 kubelet[2569]: E0625 19:07:42.248576 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 19:07:42.249338 kubelet[2569]: I0625 19:07:42.248674 2569 reconciler_common.go:300] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a6801a0c-d3f5-459e-ae12-55e71353346e-typha-certs\") on node \"ci-4012-0-0-8-d63f105dc7.novalocal\" DevicePath \"\"" Jun 25 19:07:42.249338 kubelet[2569]: E0625 19:07:42.249098 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 19:07:42.249338 kubelet[2569]: W0625 19:07:42.249112 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 19:07:42.249338 kubelet[2569]: E0625 19:07:42.249131 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 19:07:42.249955 kubelet[2569]: I0625 19:07:42.249233 2569 reconciler_common.go:300] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a6801a0c-d3f5-459e-ae12-55e71353346e-tigera-ca-bundle\") on node \"ci-4012-0-0-8-d63f105dc7.novalocal\" DevicePath \"\"" Jun 25 19:07:42.249955 kubelet[2569]: I0625 19:07:42.249259 2569 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-c6lrr\" (UniqueName: \"kubernetes.io/projected/a6801a0c-d3f5-459e-ae12-55e71353346e-kube-api-access-c6lrr\") on node \"ci-4012-0-0-8-d63f105dc7.novalocal\" DevicePath \"\"" Jun 25 19:07:42.250466 kubelet[2569]: E0625 19:07:42.250060 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 19:07:42.250466 kubelet[2569]: W0625 19:07:42.250073 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: 
[init], error: executable file not found in $PATH, output: "" Jun 25 19:07:42.250466 kubelet[2569]: E0625 19:07:42.250094 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 19:07:42.250466 kubelet[2569]: E0625 19:07:42.250336 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 19:07:42.250466 kubelet[2569]: W0625 19:07:42.250345 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 19:07:42.250466 kubelet[2569]: E0625 19:07:42.250380 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 19:07:42.251010 systemd[1]: Removed slice kubepods-besteffort-poda6801a0c_d3f5_459e_ae12_55e71353346e.slice - libcontainer container kubepods-besteffort-poda6801a0c_d3f5_459e_ae12_55e71353346e.slice. Jun 25 19:07:42.251575 kubelet[2569]: E0625 19:07:42.251060 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 19:07:42.251575 kubelet[2569]: W0625 19:07:42.251303 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 19:07:42.251575 kubelet[2569]: E0625 19:07:42.251316 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 19:07:42.254530 kubelet[2569]: E0625 19:07:42.254465 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 19:07:42.254530 kubelet[2569]: W0625 19:07:42.254488 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 19:07:42.254530 kubelet[2569]: E0625 19:07:42.254510 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 19:07:42.256790 kubelet[2569]: E0625 19:07:42.256034 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 19:07:42.256790 kubelet[2569]: W0625 19:07:42.256052 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 19:07:42.256790 kubelet[2569]: E0625 19:07:42.256093 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 19:07:42.257400 kubelet[2569]: E0625 19:07:42.257377 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 19:07:42.257444 kubelet[2569]: W0625 19:07:42.257403 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 19:07:42.257444 kubelet[2569]: E0625 19:07:42.257420 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 19:07:42.258817 kubelet[2569]: E0625 19:07:42.258798 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 19:07:42.258817 kubelet[2569]: W0625 19:07:42.258812 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 19:07:42.258902 kubelet[2569]: E0625 19:07:42.258851 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 19:07:42.263036 kubelet[2569]: E0625 19:07:42.263007 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 19:07:42.263036 kubelet[2569]: W0625 19:07:42.263026 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 19:07:42.263149 kubelet[2569]: E0625 19:07:42.263043 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 19:07:42.263427 kubelet[2569]: E0625 19:07:42.263405 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 19:07:42.263427 kubelet[2569]: W0625 19:07:42.263420 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 19:07:42.263533 kubelet[2569]: E0625 19:07:42.263434 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 19:07:42.263702 kubelet[2569]: E0625 19:07:42.263684 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 19:07:42.263702 kubelet[2569]: W0625 19:07:42.263698 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 19:07:42.263904 kubelet[2569]: E0625 19:07:42.263712 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 19:07:42.264319 kubelet[2569]: E0625 19:07:42.264294 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 19:07:42.264319 kubelet[2569]: W0625 19:07:42.264310 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 19:07:42.264319 kubelet[2569]: E0625 19:07:42.264323 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 19:07:42.266178 kubelet[2569]: E0625 19:07:42.266120 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 19:07:42.266178 kubelet[2569]: W0625 19:07:42.266136 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 19:07:42.266178 kubelet[2569]: E0625 19:07:42.266151 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 19:07:42.266619 kubelet[2569]: E0625 19:07:42.266596 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 19:07:42.266619 kubelet[2569]: W0625 19:07:42.266610 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 19:07:42.266619 kubelet[2569]: E0625 19:07:42.266623 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 19:07:42.274817 kubelet[2569]: E0625 19:07:42.274783 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 19:07:42.274817 kubelet[2569]: W0625 19:07:42.274804 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 19:07:42.274817 kubelet[2569]: E0625 19:07:42.274823 2569 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 19:07:42.290109 containerd[1458]: time="2024-06-25T19:07:42.289934631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-d74b45f76-5z6zk,Uid:d375063d-a908-406b-b394-bb3e759700e0,Namespace:calico-system,Attempt:0,}" Jun 25 19:07:42.329511 containerd[1458]: time="2024-06-25T19:07:42.329279745Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 19:07:42.330060 containerd[1458]: time="2024-06-25T19:07:42.329605695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 19:07:42.330060 containerd[1458]: time="2024-06-25T19:07:42.329639038Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 19:07:42.330060 containerd[1458]: time="2024-06-25T19:07:42.329670196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 19:07:42.351898 systemd[1]: Started cri-containerd-5b9f9ee9720c0320e1a1622c93e4fb244bda408f3b2d3aefafcfd06522ff7a18.scope - libcontainer container 5b9f9ee9720c0320e1a1622c93e4fb244bda408f3b2d3aefafcfd06522ff7a18. Jun 25 19:07:42.389644 containerd[1458]: time="2024-06-25T19:07:42.389607846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-d74b45f76-5z6zk,Uid:d375063d-a908-406b-b394-bb3e759700e0,Namespace:calico-system,Attempt:0,} returns sandbox id \"5b9f9ee9720c0320e1a1622c93e4fb244bda408f3b2d3aefafcfd06522ff7a18\"" Jun 25 19:07:42.400653 containerd[1458]: time="2024-06-25T19:07:42.400573148Z" level=info msg="CreateContainer within sandbox \"5b9f9ee9720c0320e1a1622c93e4fb244bda408f3b2d3aefafcfd06522ff7a18\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jun 25 19:07:42.411721 kubelet[2569]: I0625 19:07:42.411605 2569 scope.go:117] "RemoveContainer" containerID="8d434e2c5a81070190867b94a92bd118e1900f8a8bda6dcbdf7a26e526504684" Jun 25 19:07:42.418351 containerd[1458]: time="2024-06-25T19:07:42.418132660Z" level=info msg="RemoveContainer for \"8d434e2c5a81070190867b94a92bd118e1900f8a8bda6dcbdf7a26e526504684\"" Jun 25 19:07:42.434608 containerd[1458]: time="2024-06-25T19:07:42.434572211Z" level=info msg="RemoveContainer for \"8d434e2c5a81070190867b94a92bd118e1900f8a8bda6dcbdf7a26e526504684\" returns successfully" Jun 25 19:07:42.435189 kubelet[2569]: I0625 19:07:42.435160 2569 scope.go:117] "RemoveContainer" containerID="8d434e2c5a81070190867b94a92bd118e1900f8a8bda6dcbdf7a26e526504684" Jun 25 19:07:42.435571 containerd[1458]: time="2024-06-25T19:07:42.435541618Z" level=error msg="ContainerStatus for \"8d434e2c5a81070190867b94a92bd118e1900f8a8bda6dcbdf7a26e526504684\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8d434e2c5a81070190867b94a92bd118e1900f8a8bda6dcbdf7a26e526504684\": not found" Jun 25 19:07:42.435852 
kubelet[2569]: E0625 19:07:42.435810 2569 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8d434e2c5a81070190867b94a92bd118e1900f8a8bda6dcbdf7a26e526504684\": not found" containerID="8d434e2c5a81070190867b94a92bd118e1900f8a8bda6dcbdf7a26e526504684" Jun 25 19:07:42.435955 kubelet[2569]: I0625 19:07:42.435883 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8d434e2c5a81070190867b94a92bd118e1900f8a8bda6dcbdf7a26e526504684"} err="failed to get container status \"8d434e2c5a81070190867b94a92bd118e1900f8a8bda6dcbdf7a26e526504684\": rpc error: code = NotFound desc = an error occurred when try to find container \"8d434e2c5a81070190867b94a92bd118e1900f8a8bda6dcbdf7a26e526504684\": not found" Jun 25 19:07:42.440085 containerd[1458]: time="2024-06-25T19:07:42.440008531Z" level=info msg="CreateContainer within sandbox \"5b9f9ee9720c0320e1a1622c93e4fb244bda408f3b2d3aefafcfd06522ff7a18\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"3b983db49c2d96ecce1869fe81ac0b79ddf41d49d8637788f03ca4c93837456c\"" Jun 25 19:07:42.440574 containerd[1458]: time="2024-06-25T19:07:42.440539336Z" level=info msg="StartContainer for \"3b983db49c2d96ecce1869fe81ac0b79ddf41d49d8637788f03ca4c93837456c\"" Jun 25 19:07:42.474918 systemd[1]: Started cri-containerd-3b983db49c2d96ecce1869fe81ac0b79ddf41d49d8637788f03ca4c93837456c.scope - libcontainer container 3b983db49c2d96ecce1869fe81ac0b79ddf41d49d8637788f03ca4c93837456c. 
Jun 25 19:07:42.523523 containerd[1458]: time="2024-06-25T19:07:42.523428874Z" level=info msg="StartContainer for \"3b983db49c2d96ecce1869fe81ac0b79ddf41d49d8637788f03ca4c93837456c\" returns successfully" Jun 25 19:07:42.961040 containerd[1458]: time="2024-06-25T19:07:42.960980506Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 19:07:42.963279 containerd[1458]: time="2024-06-25T19:07:42.963122582Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=5140568" Jun 25 19:07:42.964700 containerd[1458]: time="2024-06-25T19:07:42.964666796Z" level=info msg="ImageCreate event name:\"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 19:07:42.968419 containerd[1458]: time="2024-06-25T19:07:42.968364456Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 19:07:42.969440 containerd[1458]: time="2024-06-25T19:07:42.968927622Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6588288\" in 1.958669158s" Jun 25 19:07:42.969440 containerd[1458]: time="2024-06-25T19:07:42.968965192Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\"" Jun 25 19:07:42.972541 containerd[1458]: 
time="2024-06-25T19:07:42.972492503Z" level=info msg="CreateContainer within sandbox \"8e7bd251aa5fd8f9fbf8e355e4a1dc2e66c2015d26fae3df75679aff56de64ba\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jun 25 19:07:42.992895 containerd[1458]: time="2024-06-25T19:07:42.992836994Z" level=info msg="CreateContainer within sandbox \"8e7bd251aa5fd8f9fbf8e355e4a1dc2e66c2015d26fae3df75679aff56de64ba\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"0a13fca0683fe843ad5ec8cc16718267941776fa22269aa43f1d8f4e1a2b7dc9\"" Jun 25 19:07:42.993498 containerd[1458]: time="2024-06-25T19:07:42.993471885Z" level=info msg="StartContainer for \"0a13fca0683fe843ad5ec8cc16718267941776fa22269aa43f1d8f4e1a2b7dc9\"" Jun 25 19:07:43.032460 systemd[1]: Started cri-containerd-0a13fca0683fe843ad5ec8cc16718267941776fa22269aa43f1d8f4e1a2b7dc9.scope - libcontainer container 0a13fca0683fe843ad5ec8cc16718267941776fa22269aa43f1d8f4e1a2b7dc9. Jun 25 19:07:43.092077 containerd[1458]: time="2024-06-25T19:07:43.091749491Z" level=info msg="StartContainer for \"0a13fca0683fe843ad5ec8cc16718267941776fa22269aa43f1d8f4e1a2b7dc9\" returns successfully" Jun 25 19:07:43.125641 systemd[1]: cri-containerd-0a13fca0683fe843ad5ec8cc16718267941776fa22269aa43f1d8f4e1a2b7dc9.scope: Deactivated successfully. Jun 25 19:07:43.161470 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0a13fca0683fe843ad5ec8cc16718267941776fa22269aa43f1d8f4e1a2b7dc9-rootfs.mount: Deactivated successfully. 
Jun 25 19:07:43.179626 containerd[1458]: time="2024-06-25T19:07:43.179555410Z" level=info msg="shim disconnected" id=0a13fca0683fe843ad5ec8cc16718267941776fa22269aa43f1d8f4e1a2b7dc9 namespace=k8s.io Jun 25 19:07:43.179935 containerd[1458]: time="2024-06-25T19:07:43.179773028Z" level=warning msg="cleaning up after shim disconnected" id=0a13fca0683fe843ad5ec8cc16718267941776fa22269aa43f1d8f4e1a2b7dc9 namespace=k8s.io Jun 25 19:07:43.179935 containerd[1458]: time="2024-06-25T19:07:43.179792675Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 19:07:43.242014 kubelet[2569]: E0625 19:07:43.241892 2569 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r7zkp" podUID="1276367b-2bea-4184-b5ed-849c23171592" Jun 25 19:07:43.421019 containerd[1458]: time="2024-06-25T19:07:43.420956952Z" level=info msg="StopPodSandbox for \"8e7bd251aa5fd8f9fbf8e355e4a1dc2e66c2015d26fae3df75679aff56de64ba\"" Jun 25 19:07:43.421019 containerd[1458]: time="2024-06-25T19:07:43.421006024Z" level=info msg="Container to stop \"0a13fca0683fe843ad5ec8cc16718267941776fa22269aa43f1d8f4e1a2b7dc9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 25 19:07:43.427591 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8e7bd251aa5fd8f9fbf8e355e4a1dc2e66c2015d26fae3df75679aff56de64ba-shm.mount: Deactivated successfully. Jun 25 19:07:43.442448 systemd[1]: cri-containerd-8e7bd251aa5fd8f9fbf8e355e4a1dc2e66c2015d26fae3df75679aff56de64ba.scope: Deactivated successfully. Jun 25 19:07:43.496513 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8e7bd251aa5fd8f9fbf8e355e4a1dc2e66c2015d26fae3df75679aff56de64ba-rootfs.mount: Deactivated successfully. 
Jun 25 19:07:43.500937 containerd[1458]: time="2024-06-25T19:07:43.498836415Z" level=info msg="shim disconnected" id=8e7bd251aa5fd8f9fbf8e355e4a1dc2e66c2015d26fae3df75679aff56de64ba namespace=k8s.io Jun 25 19:07:43.500937 containerd[1458]: time="2024-06-25T19:07:43.498899824Z" level=warning msg="cleaning up after shim disconnected" id=8e7bd251aa5fd8f9fbf8e355e4a1dc2e66c2015d26fae3df75679aff56de64ba namespace=k8s.io Jun 25 19:07:43.500937 containerd[1458]: time="2024-06-25T19:07:43.498911365Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 19:07:43.513991 containerd[1458]: time="2024-06-25T19:07:43.513935717Z" level=warning msg="cleanup warnings time=\"2024-06-25T19:07:43Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jun 25 19:07:43.515204 containerd[1458]: time="2024-06-25T19:07:43.515179218Z" level=info msg="TearDown network for sandbox \"8e7bd251aa5fd8f9fbf8e355e4a1dc2e66c2015d26fae3df75679aff56de64ba\" successfully" Jun 25 19:07:43.515369 containerd[1458]: time="2024-06-25T19:07:43.515351189Z" level=info msg="StopPodSandbox for \"8e7bd251aa5fd8f9fbf8e355e4a1dc2e66c2015d26fae3df75679aff56de64ba\" returns successfully" Jun 25 19:07:43.674043 kubelet[2569]: I0625 19:07:43.673969 2569 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2f0d61c1-2eaf-4ad1-a143-59df76ba046c-lib-modules\") pod \"2f0d61c1-2eaf-4ad1-a143-59df76ba046c\" (UID: \"2f0d61c1-2eaf-4ad1-a143-59df76ba046c\") " Jun 25 19:07:43.674043 kubelet[2569]: I0625 19:07:43.674015 2569 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/2f0d61c1-2eaf-4ad1-a143-59df76ba046c-flexvol-driver-host\") pod \"2f0d61c1-2eaf-4ad1-a143-59df76ba046c\" (UID: \"2f0d61c1-2eaf-4ad1-a143-59df76ba046c\") " Jun 25 
19:07:43.674043 kubelet[2569]: I0625 19:07:43.674037 2569 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2f0d61c1-2eaf-4ad1-a143-59df76ba046c-var-lib-calico\") pod \"2f0d61c1-2eaf-4ad1-a143-59df76ba046c\" (UID: \"2f0d61c1-2eaf-4ad1-a143-59df76ba046c\") " Jun 25 19:07:43.674043 kubelet[2569]: I0625 19:07:43.674069 2569 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/2f0d61c1-2eaf-4ad1-a143-59df76ba046c-node-certs\") pod \"2f0d61c1-2eaf-4ad1-a143-59df76ba046c\" (UID: \"2f0d61c1-2eaf-4ad1-a143-59df76ba046c\") " Jun 25 19:07:43.675032 kubelet[2569]: I0625 19:07:43.674090 2569 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2f0d61c1-2eaf-4ad1-a143-59df76ba046c-xtables-lock\") pod \"2f0d61c1-2eaf-4ad1-a143-59df76ba046c\" (UID: \"2f0d61c1-2eaf-4ad1-a143-59df76ba046c\") " Jun 25 19:07:43.675032 kubelet[2569]: I0625 19:07:43.674114 2569 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f0d61c1-2eaf-4ad1-a143-59df76ba046c-tigera-ca-bundle\") pod \"2f0d61c1-2eaf-4ad1-a143-59df76ba046c\" (UID: \"2f0d61c1-2eaf-4ad1-a143-59df76ba046c\") " Jun 25 19:07:43.675032 kubelet[2569]: I0625 19:07:43.674134 2569 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/2f0d61c1-2eaf-4ad1-a143-59df76ba046c-cni-net-dir\") pod \"2f0d61c1-2eaf-4ad1-a143-59df76ba046c\" (UID: \"2f0d61c1-2eaf-4ad1-a143-59df76ba046c\") " Jun 25 19:07:43.675032 kubelet[2569]: I0625 19:07:43.674156 2569 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/2f0d61c1-2eaf-4ad1-a143-59df76ba046c-policysync\") pod 
\"2f0d61c1-2eaf-4ad1-a143-59df76ba046c\" (UID: \"2f0d61c1-2eaf-4ad1-a143-59df76ba046c\") " Jun 25 19:07:43.675032 kubelet[2569]: I0625 19:07:43.674176 2569 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/2f0d61c1-2eaf-4ad1-a143-59df76ba046c-cni-bin-dir\") pod \"2f0d61c1-2eaf-4ad1-a143-59df76ba046c\" (UID: \"2f0d61c1-2eaf-4ad1-a143-59df76ba046c\") " Jun 25 19:07:43.675032 kubelet[2569]: I0625 19:07:43.674200 2569 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9knj\" (UniqueName: \"kubernetes.io/projected/2f0d61c1-2eaf-4ad1-a143-59df76ba046c-kube-api-access-z9knj\") pod \"2f0d61c1-2eaf-4ad1-a143-59df76ba046c\" (UID: \"2f0d61c1-2eaf-4ad1-a143-59df76ba046c\") " Jun 25 19:07:43.676055 kubelet[2569]: I0625 19:07:43.674220 2569 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/2f0d61c1-2eaf-4ad1-a143-59df76ba046c-var-run-calico\") pod \"2f0d61c1-2eaf-4ad1-a143-59df76ba046c\" (UID: \"2f0d61c1-2eaf-4ad1-a143-59df76ba046c\") " Jun 25 19:07:43.676055 kubelet[2569]: I0625 19:07:43.674243 2569 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/2f0d61c1-2eaf-4ad1-a143-59df76ba046c-cni-log-dir\") pod \"2f0d61c1-2eaf-4ad1-a143-59df76ba046c\" (UID: \"2f0d61c1-2eaf-4ad1-a143-59df76ba046c\") " Jun 25 19:07:43.676055 kubelet[2569]: I0625 19:07:43.674308 2569 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2f0d61c1-2eaf-4ad1-a143-59df76ba046c-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "2f0d61c1-2eaf-4ad1-a143-59df76ba046c" (UID: "2f0d61c1-2eaf-4ad1-a143-59df76ba046c"). InnerVolumeSpecName "cni-log-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 19:07:43.676055 kubelet[2569]: I0625 19:07:43.674342 2569 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2f0d61c1-2eaf-4ad1-a143-59df76ba046c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "2f0d61c1-2eaf-4ad1-a143-59df76ba046c" (UID: "2f0d61c1-2eaf-4ad1-a143-59df76ba046c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 19:07:43.676055 kubelet[2569]: I0625 19:07:43.674359 2569 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2f0d61c1-2eaf-4ad1-a143-59df76ba046c-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "2f0d61c1-2eaf-4ad1-a143-59df76ba046c" (UID: "2f0d61c1-2eaf-4ad1-a143-59df76ba046c"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 19:07:43.677516 kubelet[2569]: I0625 19:07:43.674376 2569 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2f0d61c1-2eaf-4ad1-a143-59df76ba046c-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "2f0d61c1-2eaf-4ad1-a143-59df76ba046c" (UID: "2f0d61c1-2eaf-4ad1-a143-59df76ba046c"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 19:07:43.677516 kubelet[2569]: I0625 19:07:43.674662 2569 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2f0d61c1-2eaf-4ad1-a143-59df76ba046c-policysync" (OuterVolumeSpecName: "policysync") pod "2f0d61c1-2eaf-4ad1-a143-59df76ba046c" (UID: "2f0d61c1-2eaf-4ad1-a143-59df76ba046c"). InnerVolumeSpecName "policysync". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 19:07:43.677516 kubelet[2569]: I0625 19:07:43.674686 2569 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2f0d61c1-2eaf-4ad1-a143-59df76ba046c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "2f0d61c1-2eaf-4ad1-a143-59df76ba046c" (UID: "2f0d61c1-2eaf-4ad1-a143-59df76ba046c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 19:07:43.677516 kubelet[2569]: I0625 19:07:43.674987 2569 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f0d61c1-2eaf-4ad1-a143-59df76ba046c-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "2f0d61c1-2eaf-4ad1-a143-59df76ba046c" (UID: "2f0d61c1-2eaf-4ad1-a143-59df76ba046c"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jun 25 19:07:43.677516 kubelet[2569]: I0625 19:07:43.675012 2569 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2f0d61c1-2eaf-4ad1-a143-59df76ba046c-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "2f0d61c1-2eaf-4ad1-a143-59df76ba046c" (UID: "2f0d61c1-2eaf-4ad1-a143-59df76ba046c"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 19:07:43.682099 kubelet[2569]: I0625 19:07:43.677978 2569 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2f0d61c1-2eaf-4ad1-a143-59df76ba046c-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "2f0d61c1-2eaf-4ad1-a143-59df76ba046c" (UID: "2f0d61c1-2eaf-4ad1-a143-59df76ba046c"). InnerVolumeSpecName "var-run-calico". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 19:07:43.682099 kubelet[2569]: I0625 19:07:43.678013 2569 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2f0d61c1-2eaf-4ad1-a143-59df76ba046c-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "2f0d61c1-2eaf-4ad1-a143-59df76ba046c" (UID: "2f0d61c1-2eaf-4ad1-a143-59df76ba046c"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 19:07:43.682099 kubelet[2569]: I0625 19:07:43.680768 2569 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f0d61c1-2eaf-4ad1-a143-59df76ba046c-node-certs" (OuterVolumeSpecName: "node-certs") pod "2f0d61c1-2eaf-4ad1-a143-59df76ba046c" (UID: "2f0d61c1-2eaf-4ad1-a143-59df76ba046c"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jun 25 19:07:43.688950 systemd[1]: var-lib-kubelet-pods-2f0d61c1\x2d2eaf\x2d4ad1\x2da143\x2d59df76ba046c-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. Jun 25 19:07:43.690982 kubelet[2569]: I0625 19:07:43.690931 2569 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f0d61c1-2eaf-4ad1-a143-59df76ba046c-kube-api-access-z9knj" (OuterVolumeSpecName: "kube-api-access-z9knj") pod "2f0d61c1-2eaf-4ad1-a143-59df76ba046c" (UID: "2f0d61c1-2eaf-4ad1-a143-59df76ba046c"). InnerVolumeSpecName "kube-api-access-z9knj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 25 19:07:43.774779 kubelet[2569]: I0625 19:07:43.774617 2569 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2f0d61c1-2eaf-4ad1-a143-59df76ba046c-xtables-lock\") on node \"ci-4012-0-0-8-d63f105dc7.novalocal\" DevicePath \"\"" Jun 25 19:07:43.774779 kubelet[2569]: I0625 19:07:43.774714 2569 reconciler_common.go:300] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f0d61c1-2eaf-4ad1-a143-59df76ba046c-tigera-ca-bundle\") on node \"ci-4012-0-0-8-d63f105dc7.novalocal\" DevicePath \"\"" Jun 25 19:07:43.775490 kubelet[2569]: I0625 19:07:43.775453 2569 reconciler_common.go:300] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/2f0d61c1-2eaf-4ad1-a143-59df76ba046c-cni-net-dir\") on node \"ci-4012-0-0-8-d63f105dc7.novalocal\" DevicePath \"\"" Jun 25 19:07:43.775872 kubelet[2569]: I0625 19:07:43.775679 2569 reconciler_common.go:300] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/2f0d61c1-2eaf-4ad1-a143-59df76ba046c-policysync\") on node \"ci-4012-0-0-8-d63f105dc7.novalocal\" DevicePath \"\"" Jun 25 19:07:43.775872 kubelet[2569]: I0625 19:07:43.775725 2569 reconciler_common.go:300] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/2f0d61c1-2eaf-4ad1-a143-59df76ba046c-cni-bin-dir\") on node \"ci-4012-0-0-8-d63f105dc7.novalocal\" DevicePath \"\"" Jun 25 19:07:43.775872 kubelet[2569]: I0625 19:07:43.775758 2569 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-z9knj\" (UniqueName: \"kubernetes.io/projected/2f0d61c1-2eaf-4ad1-a143-59df76ba046c-kube-api-access-z9knj\") on node \"ci-4012-0-0-8-d63f105dc7.novalocal\" DevicePath \"\"" Jun 25 19:07:43.775872 kubelet[2569]: I0625 19:07:43.775779 2569 reconciler_common.go:300] "Volume detached for volume \"var-run-calico\" (UniqueName: 
\"kubernetes.io/host-path/2f0d61c1-2eaf-4ad1-a143-59df76ba046c-var-run-calico\") on node \"ci-4012-0-0-8-d63f105dc7.novalocal\" DevicePath \"\""
Jun 25 19:07:43.776377 kubelet[2569]: I0625 19:07:43.776144 2569 reconciler_common.go:300] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/2f0d61c1-2eaf-4ad1-a143-59df76ba046c-cni-log-dir\") on node \"ci-4012-0-0-8-d63f105dc7.novalocal\" DevicePath \"\""
Jun 25 19:07:43.776377 kubelet[2569]: I0625 19:07:43.776272 2569 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2f0d61c1-2eaf-4ad1-a143-59df76ba046c-lib-modules\") on node \"ci-4012-0-0-8-d63f105dc7.novalocal\" DevicePath \"\""
Jun 25 19:07:43.776377 kubelet[2569]: I0625 19:07:43.776291 2569 reconciler_common.go:300] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/2f0d61c1-2eaf-4ad1-a143-59df76ba046c-flexvol-driver-host\") on node \"ci-4012-0-0-8-d63f105dc7.novalocal\" DevicePath \"\""
Jun 25 19:07:43.776377 kubelet[2569]: I0625 19:07:43.776322 2569 reconciler_common.go:300] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2f0d61c1-2eaf-4ad1-a143-59df76ba046c-var-lib-calico\") on node \"ci-4012-0-0-8-d63f105dc7.novalocal\" DevicePath \"\""
Jun 25 19:07:43.776377 kubelet[2569]: I0625 19:07:43.776362 2569 reconciler_common.go:300] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/2f0d61c1-2eaf-4ad1-a143-59df76ba046c-node-certs\") on node \"ci-4012-0-0-8-d63f105dc7.novalocal\" DevicePath \"\""
Jun 25 19:07:44.037356 systemd[1]: var-lib-kubelet-pods-2f0d61c1\x2d2eaf\x2d4ad1\x2da143\x2d59df76ba046c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dz9knj.mount: Deactivated successfully.
Jun 25 19:07:44.248150 kubelet[2569]: I0625 19:07:44.248006 2569 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="a6801a0c-d3f5-459e-ae12-55e71353346e" path="/var/lib/kubelet/pods/a6801a0c-d3f5-459e-ae12-55e71353346e/volumes"
Jun 25 19:07:44.256983 systemd[1]: Removed slice kubepods-besteffort-pod2f0d61c1_2eaf_4ad1_a143_59df76ba046c.slice - libcontainer container kubepods-besteffort-pod2f0d61c1_2eaf_4ad1_a143_59df76ba046c.slice.
Jun 25 19:07:44.447304 kubelet[2569]: I0625 19:07:44.446220 2569 scope.go:117] "RemoveContainer" containerID="0a13fca0683fe843ad5ec8cc16718267941776fa22269aa43f1d8f4e1a2b7dc9"
Jun 25 19:07:44.447304 kubelet[2569]: I0625 19:07:44.446348 2569 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jun 25 19:07:44.452598 containerd[1458]: time="2024-06-25T19:07:44.452461128Z" level=info msg="RemoveContainer for \"0a13fca0683fe843ad5ec8cc16718267941776fa22269aa43f1d8f4e1a2b7dc9\""
Jun 25 19:07:44.460141 containerd[1458]: time="2024-06-25T19:07:44.460075279Z" level=info msg="RemoveContainer for \"0a13fca0683fe843ad5ec8cc16718267941776fa22269aa43f1d8f4e1a2b7dc9\" returns successfully"
Jun 25 19:07:44.474962 kubelet[2569]: I0625 19:07:44.474282 2569 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-d74b45f76-5z6zk" podStartSLOduration=6.474215725 podCreationTimestamp="2024-06-25 19:07:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 19:07:43.481469974 +0000 UTC m=+25.529379105" watchObservedRunningTime="2024-06-25 19:07:44.474215725 +0000 UTC m=+26.522124876"
Jun 25 19:07:44.514148 kubelet[2569]: I0625 19:07:44.514119 2569 topology_manager.go:215] "Topology Admit Handler" podUID="88f4c080-fa16-438a-af28-12dafc83495c" podNamespace="calico-system" podName="calico-node-x8sk5"
Jun 25 19:07:44.514339 kubelet[2569]: E0625 19:07:44.514328 2569 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2f0d61c1-2eaf-4ad1-a143-59df76ba046c" containerName="flexvol-driver"
Jun 25 19:07:44.514432 kubelet[2569]: I0625 19:07:44.514420 2569 memory_manager.go:346] "RemoveStaleState removing state" podUID="2f0d61c1-2eaf-4ad1-a143-59df76ba046c" containerName="flexvol-driver"
Jun 25 19:07:44.529705 systemd[1]: Created slice kubepods-besteffort-pod88f4c080_fa16_438a_af28_12dafc83495c.slice - libcontainer container kubepods-besteffort-pod88f4c080_fa16_438a_af28_12dafc83495c.slice.
Jun 25 19:07:44.680845 kubelet[2569]: I0625 19:07:44.680706 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/88f4c080-fa16-438a-af28-12dafc83495c-lib-modules\") pod \"calico-node-x8sk5\" (UID: \"88f4c080-fa16-438a-af28-12dafc83495c\") " pod="calico-system/calico-node-x8sk5"
Jun 25 19:07:44.680845 kubelet[2569]: I0625 19:07:44.680857 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/88f4c080-fa16-438a-af28-12dafc83495c-policysync\") pod \"calico-node-x8sk5\" (UID: \"88f4c080-fa16-438a-af28-12dafc83495c\") " pod="calico-system/calico-node-x8sk5"
Jun 25 19:07:44.681524 kubelet[2569]: I0625 19:07:44.680922 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/88f4c080-fa16-438a-af28-12dafc83495c-cni-bin-dir\") pod \"calico-node-x8sk5\" (UID: \"88f4c080-fa16-438a-af28-12dafc83495c\") " pod="calico-system/calico-node-x8sk5"
Jun 25 19:07:44.681524 kubelet[2569]: I0625 19:07:44.680983 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/88f4c080-fa16-438a-af28-12dafc83495c-flexvol-driver-host\") pod \"calico-node-x8sk5\" (UID: \"88f4c080-fa16-438a-af28-12dafc83495c\") " pod="calico-system/calico-node-x8sk5"
Jun 25 19:07:44.681524 kubelet[2569]: I0625 19:07:44.681044 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/88f4c080-fa16-438a-af28-12dafc83495c-cni-log-dir\") pod \"calico-node-x8sk5\" (UID: \"88f4c080-fa16-438a-af28-12dafc83495c\") " pod="calico-system/calico-node-x8sk5"
Jun 25 19:07:44.681524 kubelet[2569]: I0625 19:07:44.681100 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/88f4c080-fa16-438a-af28-12dafc83495c-xtables-lock\") pod \"calico-node-x8sk5\" (UID: \"88f4c080-fa16-438a-af28-12dafc83495c\") " pod="calico-system/calico-node-x8sk5"
Jun 25 19:07:44.681524 kubelet[2569]: I0625 19:07:44.681153 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/88f4c080-fa16-438a-af28-12dafc83495c-cni-net-dir\") pod \"calico-node-x8sk5\" (UID: \"88f4c080-fa16-438a-af28-12dafc83495c\") " pod="calico-system/calico-node-x8sk5"
Jun 25 19:07:44.681919 kubelet[2569]: I0625 19:07:44.681209 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/88f4c080-fa16-438a-af28-12dafc83495c-var-run-calico\") pod \"calico-node-x8sk5\" (UID: \"88f4c080-fa16-438a-af28-12dafc83495c\") " pod="calico-system/calico-node-x8sk5"
Jun 25 19:07:44.681919 kubelet[2569]: I0625 19:07:44.681265 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/88f4c080-fa16-438a-af28-12dafc83495c-node-certs\") pod \"calico-node-x8sk5\" (UID: \"88f4c080-fa16-438a-af28-12dafc83495c\") " pod="calico-system/calico-node-x8sk5"
Jun 25 19:07:44.681919 kubelet[2569]: I0625 19:07:44.681353 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/88f4c080-fa16-438a-af28-12dafc83495c-var-lib-calico\") pod \"calico-node-x8sk5\" (UID: \"88f4c080-fa16-438a-af28-12dafc83495c\") " pod="calico-system/calico-node-x8sk5"
Jun 25 19:07:44.681919 kubelet[2569]: I0625 19:07:44.681416 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppmbz\" (UniqueName: \"kubernetes.io/projected/88f4c080-fa16-438a-af28-12dafc83495c-kube-api-access-ppmbz\") pod \"calico-node-x8sk5\" (UID: \"88f4c080-fa16-438a-af28-12dafc83495c\") " pod="calico-system/calico-node-x8sk5"
Jun 25 19:07:44.681919 kubelet[2569]: I0625 19:07:44.681474 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/88f4c080-fa16-438a-af28-12dafc83495c-tigera-ca-bundle\") pod \"calico-node-x8sk5\" (UID: \"88f4c080-fa16-438a-af28-12dafc83495c\") " pod="calico-system/calico-node-x8sk5"
Jun 25 19:07:44.837531 containerd[1458]: time="2024-06-25T19:07:44.835396250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-x8sk5,Uid:88f4c080-fa16-438a-af28-12dafc83495c,Namespace:calico-system,Attempt:0,}"
Jun 25 19:07:44.891885 containerd[1458]: time="2024-06-25T19:07:44.891414523Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 25 19:07:44.891885 containerd[1458]: time="2024-06-25T19:07:44.891608005Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 25 19:07:44.891885 containerd[1458]: time="2024-06-25T19:07:44.891706048Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 25 19:07:44.892239 containerd[1458]: time="2024-06-25T19:07:44.891917585Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 25 19:07:44.917243 systemd[1]: Started cri-containerd-a9660890b908b8f4d1b28a29aa794c5336a03905530c6d8ed32854eb6bd0be0d.scope - libcontainer container a9660890b908b8f4d1b28a29aa794c5336a03905530c6d8ed32854eb6bd0be0d.
Jun 25 19:07:44.942449 containerd[1458]: time="2024-06-25T19:07:44.942416382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-x8sk5,Uid:88f4c080-fa16-438a-af28-12dafc83495c,Namespace:calico-system,Attempt:0,} returns sandbox id \"a9660890b908b8f4d1b28a29aa794c5336a03905530c6d8ed32854eb6bd0be0d\""
Jun 25 19:07:44.948045 containerd[1458]: time="2024-06-25T19:07:44.947602532Z" level=info msg="CreateContainer within sandbox \"a9660890b908b8f4d1b28a29aa794c5336a03905530c6d8ed32854eb6bd0be0d\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jun 25 19:07:44.967544 containerd[1458]: time="2024-06-25T19:07:44.967498015Z" level=info msg="CreateContainer within sandbox \"a9660890b908b8f4d1b28a29aa794c5336a03905530c6d8ed32854eb6bd0be0d\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"1c6e8869762449ab9e7ed1fca16744ec31039d968bd63679da3a6cca1c943d5d\""
Jun 25 19:07:44.969162 containerd[1458]: time="2024-06-25T19:07:44.968005556Z" level=info msg="StartContainer for \"1c6e8869762449ab9e7ed1fca16744ec31039d968bd63679da3a6cca1c943d5d\""
Jun 25 19:07:44.993892 systemd[1]: Started cri-containerd-1c6e8869762449ab9e7ed1fca16744ec31039d968bd63679da3a6cca1c943d5d.scope - libcontainer container 1c6e8869762449ab9e7ed1fca16744ec31039d968bd63679da3a6cca1c943d5d.
Jun 25 19:07:45.025949 containerd[1458]: time="2024-06-25T19:07:45.025639980Z" level=info msg="StartContainer for \"1c6e8869762449ab9e7ed1fca16744ec31039d968bd63679da3a6cca1c943d5d\" returns successfully"
Jun 25 19:07:45.040931 systemd[1]: cri-containerd-1c6e8869762449ab9e7ed1fca16744ec31039d968bd63679da3a6cca1c943d5d.scope: Deactivated successfully.
Jun 25 19:07:45.068337 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1c6e8869762449ab9e7ed1fca16744ec31039d968bd63679da3a6cca1c943d5d-rootfs.mount: Deactivated successfully.
Jun 25 19:07:45.071915 containerd[1458]: time="2024-06-25T19:07:45.071832507Z" level=info msg="shim disconnected" id=1c6e8869762449ab9e7ed1fca16744ec31039d968bd63679da3a6cca1c943d5d namespace=k8s.io
Jun 25 19:07:45.071915 containerd[1458]: time="2024-06-25T19:07:45.071903320Z" level=warning msg="cleaning up after shim disconnected" id=1c6e8869762449ab9e7ed1fca16744ec31039d968bd63679da3a6cca1c943d5d namespace=k8s.io
Jun 25 19:07:45.071915 containerd[1458]: time="2024-06-25T19:07:45.071914721Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 25 19:07:45.242390 kubelet[2569]: E0625 19:07:45.242335 2569 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r7zkp" podUID="1276367b-2bea-4184-b5ed-849c23171592"
Jun 25 19:07:45.446107 containerd[1458]: time="2024-06-25T19:07:45.446004430Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\""
Jun 25 19:07:46.253381 kubelet[2569]: I0625 19:07:46.252600 2569 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="2f0d61c1-2eaf-4ad1-a143-59df76ba046c" path="/var/lib/kubelet/pods/2f0d61c1-2eaf-4ad1-a143-59df76ba046c/volumes"
Jun 25 19:07:47.245293 kubelet[2569]: E0625 19:07:47.244929 2569 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r7zkp" podUID="1276367b-2bea-4184-b5ed-849c23171592"
Jun 25 19:07:49.241790 kubelet[2569]: E0625 19:07:49.241665 2569 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r7zkp" podUID="1276367b-2bea-4184-b5ed-849c23171592"
Jun 25 19:07:51.242437 kubelet[2569]: E0625 19:07:51.242378 2569 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r7zkp" podUID="1276367b-2bea-4184-b5ed-849c23171592"
Jun 25 19:07:51.352117 containerd[1458]: time="2024-06-25T19:07:51.351848371Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 19:07:51.354496 containerd[1458]: time="2024-06-25T19:07:51.354401868Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=93087850"
Jun 25 19:07:51.356181 containerd[1458]: time="2024-06-25T19:07:51.356047884Z" level=info msg="ImageCreate event name:\"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 19:07:51.361409 containerd[1458]: time="2024-06-25T19:07:51.361277098Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 19:07:51.363791 containerd[1458]: time="2024-06-25T19:07:51.363236041Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"94535610\" in 5.917154867s"
Jun 25 19:07:51.363791 containerd[1458]: time="2024-06-25T19:07:51.363304529Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\""
Jun 25 19:07:51.369132 containerd[1458]: time="2024-06-25T19:07:51.368926409Z" level=info msg="CreateContainer within sandbox \"a9660890b908b8f4d1b28a29aa794c5336a03905530c6d8ed32854eb6bd0be0d\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jun 25 19:07:51.722280 containerd[1458]: time="2024-06-25T19:07:51.722066191Z" level=info msg="CreateContainer within sandbox \"a9660890b908b8f4d1b28a29aa794c5336a03905530c6d8ed32854eb6bd0be0d\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"cad8970c4fb98e7b82412ca8f5d034dc334d60283c64694136c77e77459a8292\""
Jun 25 19:07:51.723067 containerd[1458]: time="2024-06-25T19:07:51.722847776Z" level=info msg="StartContainer for \"cad8970c4fb98e7b82412ca8f5d034dc334d60283c64694136c77e77459a8292\""
Jun 25 19:07:51.960142 systemd[1]: Started cri-containerd-cad8970c4fb98e7b82412ca8f5d034dc334d60283c64694136c77e77459a8292.scope - libcontainer container cad8970c4fb98e7b82412ca8f5d034dc334d60283c64694136c77e77459a8292.
Jun 25 19:07:52.115016 containerd[1458]: time="2024-06-25T19:07:52.114851435Z" level=info msg="StartContainer for \"cad8970c4fb98e7b82412ca8f5d034dc334d60283c64694136c77e77459a8292\" returns successfully"
Jun 25 19:07:53.244790 kubelet[2569]: E0625 19:07:53.242409 2569 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r7zkp" podUID="1276367b-2bea-4184-b5ed-849c23171592"
Jun 25 19:07:53.444039 systemd[1]: cri-containerd-cad8970c4fb98e7b82412ca8f5d034dc334d60283c64694136c77e77459a8292.scope: Deactivated successfully.
Jun 25 19:07:53.475081 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cad8970c4fb98e7b82412ca8f5d034dc334d60283c64694136c77e77459a8292-rootfs.mount: Deactivated successfully.
Jun 25 19:07:53.482270 containerd[1458]: time="2024-06-25T19:07:53.482047043Z" level=info msg="shim disconnected" id=cad8970c4fb98e7b82412ca8f5d034dc334d60283c64694136c77e77459a8292 namespace=k8s.io
Jun 25 19:07:53.482270 containerd[1458]: time="2024-06-25T19:07:53.482119799Z" level=warning msg="cleaning up after shim disconnected" id=cad8970c4fb98e7b82412ca8f5d034dc334d60283c64694136c77e77459a8292 namespace=k8s.io
Jun 25 19:07:53.482270 containerd[1458]: time="2024-06-25T19:07:53.482131732Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 25 19:07:53.500811 kubelet[2569]: I0625 19:07:53.499787 2569 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
Jun 25 19:07:53.532918 kubelet[2569]: I0625 19:07:53.532707 2569 topology_manager.go:215] "Topology Admit Handler" podUID="b8d477a9-5ba0-42a2-8679-d220a4893fb5" podNamespace="kube-system" podName="coredns-5dd5756b68-wkwjk"
Jun 25 19:07:53.541060 kubelet[2569]: I0625 19:07:53.539418 2569 topology_manager.go:215] "Topology Admit Handler" podUID="2a923228-560b-43c0-8f94-d47d8f47139d" podNamespace="kube-system" podName="coredns-5dd5756b68-kxsw8"
Jun 25 19:07:53.543462 kubelet[2569]: I0625 19:07:53.543443 2569 topology_manager.go:215] "Topology Admit Handler" podUID="64e26d3e-5506-4e69-921b-3b06d3154cdc" podNamespace="calico-system" podName="calico-kube-controllers-948b949f9-lffnp"
Jun 25 19:07:53.551629 systemd[1]: Created slice kubepods-burstable-podb8d477a9_5ba0_42a2_8679_d220a4893fb5.slice - libcontainer container kubepods-burstable-podb8d477a9_5ba0_42a2_8679_d220a4893fb5.slice.
Jun 25 19:07:53.561815 systemd[1]: Created slice kubepods-burstable-pod2a923228_560b_43c0_8f94_d47d8f47139d.slice - libcontainer container kubepods-burstable-pod2a923228_560b_43c0_8f94_d47d8f47139d.slice.
Jun 25 19:07:53.571639 systemd[1]: Created slice kubepods-besteffort-pod64e26d3e_5506_4e69_921b_3b06d3154cdc.slice - libcontainer container kubepods-besteffort-pod64e26d3e_5506_4e69_921b_3b06d3154cdc.slice.
Jun 25 19:07:53.638870 kubelet[2569]: I0625 19:07:53.638842 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a923228-560b-43c0-8f94-d47d8f47139d-config-volume\") pod \"coredns-5dd5756b68-kxsw8\" (UID: \"2a923228-560b-43c0-8f94-d47d8f47139d\") " pod="kube-system/coredns-5dd5756b68-kxsw8"
Jun 25 19:07:53.639108 kubelet[2569]: I0625 19:07:53.639098 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/64e26d3e-5506-4e69-921b-3b06d3154cdc-tigera-ca-bundle\") pod \"calico-kube-controllers-948b949f9-lffnp\" (UID: \"64e26d3e-5506-4e69-921b-3b06d3154cdc\") " pod="calico-system/calico-kube-controllers-948b949f9-lffnp"
Jun 25 19:07:53.639219 kubelet[2569]: I0625 19:07:53.639209 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ll6xt\" (UniqueName: \"kubernetes.io/projected/64e26d3e-5506-4e69-921b-3b06d3154cdc-kube-api-access-ll6xt\") pod \"calico-kube-controllers-948b949f9-lffnp\" (UID: \"64e26d3e-5506-4e69-921b-3b06d3154cdc\") " pod="calico-system/calico-kube-controllers-948b949f9-lffnp"
Jun 25 19:07:53.639325 kubelet[2569]: I0625 19:07:53.639315 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nf67f\" (UniqueName: \"kubernetes.io/projected/b8d477a9-5ba0-42a2-8679-d220a4893fb5-kube-api-access-nf67f\") pod \"coredns-5dd5756b68-wkwjk\" (UID: \"b8d477a9-5ba0-42a2-8679-d220a4893fb5\") " pod="kube-system/coredns-5dd5756b68-wkwjk"
Jun 25 19:07:53.639433 kubelet[2569]: I0625 19:07:53.639422 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vjdt\" (UniqueName: \"kubernetes.io/projected/2a923228-560b-43c0-8f94-d47d8f47139d-kube-api-access-2vjdt\") pod \"coredns-5dd5756b68-kxsw8\" (UID: \"2a923228-560b-43c0-8f94-d47d8f47139d\") " pod="kube-system/coredns-5dd5756b68-kxsw8"
Jun 25 19:07:53.639531 kubelet[2569]: I0625 19:07:53.639521 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b8d477a9-5ba0-42a2-8679-d220a4893fb5-config-volume\") pod \"coredns-5dd5756b68-wkwjk\" (UID: \"b8d477a9-5ba0-42a2-8679-d220a4893fb5\") " pod="kube-system/coredns-5dd5756b68-wkwjk"
Jun 25 19:07:53.857019 containerd[1458]: time="2024-06-25T19:07:53.856796069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-wkwjk,Uid:b8d477a9-5ba0-42a2-8679-d220a4893fb5,Namespace:kube-system,Attempt:0,}"
Jun 25 19:07:53.870106 containerd[1458]: time="2024-06-25T19:07:53.870033628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-kxsw8,Uid:2a923228-560b-43c0-8f94-d47d8f47139d,Namespace:kube-system,Attempt:0,}"
Jun 25 19:07:53.876898 containerd[1458]: time="2024-06-25T19:07:53.876839168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-948b949f9-lffnp,Uid:64e26d3e-5506-4e69-921b-3b06d3154cdc,Namespace:calico-system,Attempt:0,}"
Jun 25 19:07:54.167502 containerd[1458]: time="2024-06-25T19:07:54.167146981Z" level=error msg="Failed to destroy network for sandbox \"e47b070cb38fa78d406e523889a4b2713f6ccbe63fd4df04b2ffa250c8dd8a2b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 25 19:07:54.172400 containerd[1458]: time="2024-06-25T19:07:54.171696842Z" level=error msg="Failed to destroy network for sandbox \"4e5c4a18bda1829fd9a90bce9a6cada7714a2e297fa9879b3f0bcddfa4e3996c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 25 19:07:54.175859 containerd[1458]: time="2024-06-25T19:07:54.175799554Z" level=error msg="encountered an error cleaning up failed sandbox \"4e5c4a18bda1829fd9a90bce9a6cada7714a2e297fa9879b3f0bcddfa4e3996c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 25 19:07:54.176867 containerd[1458]: time="2024-06-25T19:07:54.176821831Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-948b949f9-lffnp,Uid:64e26d3e-5506-4e69-921b-3b06d3154cdc,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4e5c4a18bda1829fd9a90bce9a6cada7714a2e297fa9879b3f0bcddfa4e3996c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 25 19:07:54.176971 containerd[1458]: time="2024-06-25T19:07:54.175855649Z" level=error msg="encountered an error cleaning up failed sandbox \"e47b070cb38fa78d406e523889a4b2713f6ccbe63fd4df04b2ffa250c8dd8a2b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 25 19:07:54.177020 containerd[1458]: time="2024-06-25T19:07:54.176958828Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-wkwjk,Uid:b8d477a9-5ba0-42a2-8679-d220a4893fb5,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e47b070cb38fa78d406e523889a4b2713f6ccbe63fd4df04b2ffa250c8dd8a2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 25 19:07:54.177195 kubelet[2569]: E0625 19:07:54.177170 2569 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e47b070cb38fa78d406e523889a4b2713f6ccbe63fd4df04b2ffa250c8dd8a2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 25 19:07:54.177268 kubelet[2569]: E0625 19:07:54.177233 2569 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e47b070cb38fa78d406e523889a4b2713f6ccbe63fd4df04b2ffa250c8dd8a2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-wkwjk"
Jun 25 19:07:54.177268 kubelet[2569]: E0625 19:07:54.177258 2569 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e47b070cb38fa78d406e523889a4b2713f6ccbe63fd4df04b2ffa250c8dd8a2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-wkwjk"
Jun 25 19:07:54.177323 kubelet[2569]: E0625 19:07:54.177314 2569 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-wkwjk_kube-system(b8d477a9-5ba0-42a2-8679-d220a4893fb5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-wkwjk_kube-system(b8d477a9-5ba0-42a2-8679-d220a4893fb5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e47b070cb38fa78d406e523889a4b2713f6ccbe63fd4df04b2ffa250c8dd8a2b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-wkwjk" podUID="b8d477a9-5ba0-42a2-8679-d220a4893fb5"
Jun 25 19:07:54.179856 kubelet[2569]: E0625 19:07:54.177402 2569 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e5c4a18bda1829fd9a90bce9a6cada7714a2e297fa9879b3f0bcddfa4e3996c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 25 19:07:54.179856 kubelet[2569]: E0625 19:07:54.177460 2569 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e5c4a18bda1829fd9a90bce9a6cada7714a2e297fa9879b3f0bcddfa4e3996c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-948b949f9-lffnp"
Jun 25 19:07:54.179856 kubelet[2569]: E0625 19:07:54.179770 2569 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e5c4a18bda1829fd9a90bce9a6cada7714a2e297fa9879b3f0bcddfa4e3996c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-948b949f9-lffnp"
Jun 25 19:07:54.180094 kubelet[2569]: E0625 19:07:54.179874 2569 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-948b949f9-lffnp_calico-system(64e26d3e-5506-4e69-921b-3b06d3154cdc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-948b949f9-lffnp_calico-system(64e26d3e-5506-4e69-921b-3b06d3154cdc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4e5c4a18bda1829fd9a90bce9a6cada7714a2e297fa9879b3f0bcddfa4e3996c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-948b949f9-lffnp" podUID="64e26d3e-5506-4e69-921b-3b06d3154cdc"
Jun 25 19:07:54.185337 containerd[1458]: time="2024-06-25T19:07:54.185264461Z" level=error msg="Failed to destroy network for sandbox \"6c05d4f7f01b27f0430f187198eb9e1fdf67b785fdd709209337cdfcdc79de47\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 25 19:07:54.185800 containerd[1458]: time="2024-06-25T19:07:54.185762114Z" level=error msg="encountered an error cleaning up failed sandbox \"6c05d4f7f01b27f0430f187198eb9e1fdf67b785fdd709209337cdfcdc79de47\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 25 19:07:54.185871 containerd[1458]: time="2024-06-25T19:07:54.185840100Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-kxsw8,Uid:2a923228-560b-43c0-8f94-d47d8f47139d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6c05d4f7f01b27f0430f187198eb9e1fdf67b785fdd709209337cdfcdc79de47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 25 19:07:54.186403 kubelet[2569]: E0625 19:07:54.186086 2569 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c05d4f7f01b27f0430f187198eb9e1fdf67b785fdd709209337cdfcdc79de47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 25 19:07:54.186403 kubelet[2569]: E0625 19:07:54.186133 2569 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c05d4f7f01b27f0430f187198eb9e1fdf67b785fdd709209337cdfcdc79de47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-kxsw8"
Jun 25 19:07:54.186403 kubelet[2569]: E0625 19:07:54.186158 2569 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c05d4f7f01b27f0430f187198eb9e1fdf67b785fdd709209337cdfcdc79de47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-kxsw8"
Jun 25 19:07:54.186514 kubelet[2569]: E0625 19:07:54.186216 2569 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-kxsw8_kube-system(2a923228-560b-43c0-8f94-d47d8f47139d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-kxsw8_kube-system(2a923228-560b-43c0-8f94-d47d8f47139d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6c05d4f7f01b27f0430f187198eb9e1fdf67b785fdd709209337cdfcdc79de47\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-kxsw8" podUID="2a923228-560b-43c0-8f94-d47d8f47139d"
Jun 25 19:07:54.474140 kubelet[2569]: I0625 19:07:54.474061 2569 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e5c4a18bda1829fd9a90bce9a6cada7714a2e297fa9879b3f0bcddfa4e3996c"
Jun 25 19:07:54.476806 containerd[1458]: time="2024-06-25T19:07:54.476454514Z" level=info msg="StopPodSandbox for \"4e5c4a18bda1829fd9a90bce9a6cada7714a2e297fa9879b3f0bcddfa4e3996c\""
Jun 25 19:07:54.478169 containerd[1458]: time="2024-06-25T19:07:54.477655445Z" level=info msg="Ensure that sandbox 4e5c4a18bda1829fd9a90bce9a6cada7714a2e297fa9879b3f0bcddfa4e3996c in task-service has been cleanup successfully"
Jun 25 19:07:54.485455 containerd[1458]: time="2024-06-25T19:07:54.479854268Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\""
Jun 25 19:07:54.496355 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e47b070cb38fa78d406e523889a4b2713f6ccbe63fd4df04b2ffa250c8dd8a2b-shm.mount: Deactivated successfully.
Jun 25 19:07:54.502065 kubelet[2569]: I0625 19:07:54.500643 2569 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c05d4f7f01b27f0430f187198eb9e1fdf67b785fdd709209337cdfcdc79de47"
Jun 25 19:07:54.509973 containerd[1458]: time="2024-06-25T19:07:54.506091341Z" level=info msg="StopPodSandbox for \"6c05d4f7f01b27f0430f187198eb9e1fdf67b785fdd709209337cdfcdc79de47\""
Jun 25 19:07:54.509973 containerd[1458]: time="2024-06-25T19:07:54.506523221Z" level=info msg="Ensure that sandbox 6c05d4f7f01b27f0430f187198eb9e1fdf67b785fdd709209337cdfcdc79de47 in task-service has been cleanup successfully"
Jun 25 19:07:54.528137 kubelet[2569]: I0625 19:07:54.528068 2569 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e47b070cb38fa78d406e523889a4b2713f6ccbe63fd4df04b2ffa250c8dd8a2b"
Jun 25 19:07:54.532713 containerd[1458]: time="2024-06-25T19:07:54.532635249Z" level=info msg="StopPodSandbox for \"e47b070cb38fa78d406e523889a4b2713f6ccbe63fd4df04b2ffa250c8dd8a2b\""
Jun 25 19:07:54.533609 containerd[1458]: time="2024-06-25T19:07:54.533522141Z" level=info msg="Ensure that sandbox e47b070cb38fa78d406e523889a4b2713f6ccbe63fd4df04b2ffa250c8dd8a2b in task-service has been cleanup successfully"
Jun 25 19:07:54.588912 containerd[1458]: time="2024-06-25T19:07:54.588687473Z" level=error msg="StopPodSandbox for \"4e5c4a18bda1829fd9a90bce9a6cada7714a2e297fa9879b3f0bcddfa4e3996c\" failed" error="failed to destroy network for sandbox \"4e5c4a18bda1829fd9a90bce9a6cada7714a2e297fa9879b3f0bcddfa4e3996c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 25 19:07:54.590278 kubelet[2569]: E0625 19:07:54.590254 2569 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4e5c4a18bda1829fd9a90bce9a6cada7714a2e297fa9879b3f0bcddfa4e3996c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4e5c4a18bda1829fd9a90bce9a6cada7714a2e297fa9879b3f0bcddfa4e3996c"
Jun 25 19:07:54.590373 kubelet[2569]: E0625 19:07:54.590311 2569 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4e5c4a18bda1829fd9a90bce9a6cada7714a2e297fa9879b3f0bcddfa4e3996c"}
Jun 25 19:07:54.590373 kubelet[2569]: E0625 19:07:54.590355 2569 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"64e26d3e-5506-4e69-921b-3b06d3154cdc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4e5c4a18bda1829fd9a90bce9a6cada7714a2e297fa9879b3f0bcddfa4e3996c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jun 25 19:07:54.590477 kubelet[2569]: E0625 19:07:54.590391 2569 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"64e26d3e-5506-4e69-921b-3b06d3154cdc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4e5c4a18bda1829fd9a90bce9a6cada7714a2e297fa9879b3f0bcddfa4e3996c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-948b949f9-lffnp" podUID="64e26d3e-5506-4e69-921b-3b06d3154cdc"
Jun 25 19:07:54.590726 containerd[1458]: time="2024-06-25T19:07:54.590681892Z" level=error msg="StopPodSandbox for \"e47b070cb38fa78d406e523889a4b2713f6ccbe63fd4df04b2ffa250c8dd8a2b\" failed" error="failed to destroy network for sandbox \"e47b070cb38fa78d406e523889a4b2713f6ccbe63fd4df04b2ffa250c8dd8a2b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 25 19:07:54.591092 kubelet[2569]: E0625 19:07:54.591016 2569 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e47b070cb38fa78d406e523889a4b2713f6ccbe63fd4df04b2ffa250c8dd8a2b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e47b070cb38fa78d406e523889a4b2713f6ccbe63fd4df04b2ffa250c8dd8a2b"
Jun 25 19:07:54.591092 kubelet[2569]: E0625 19:07:54.591047 2569 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e47b070cb38fa78d406e523889a4b2713f6ccbe63fd4df04b2ffa250c8dd8a2b"}
Jun 25 19:07:54.591092 kubelet[2569]: E0625 19:07:54.591085 2569 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b8d477a9-5ba0-42a2-8679-d220a4893fb5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e47b070cb38fa78d406e523889a4b2713f6ccbe63fd4df04b2ffa250c8dd8a2b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jun 25 19:07:54.591222 kubelet[2569]: E0625 19:07:54.591114 2569 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b8d477a9-5ba0-42a2-8679-d220a4893fb5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy
network for sandbox \\\"e47b070cb38fa78d406e523889a4b2713f6ccbe63fd4df04b2ffa250c8dd8a2b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-wkwjk" podUID="b8d477a9-5ba0-42a2-8679-d220a4893fb5" Jun 25 19:07:54.601774 containerd[1458]: time="2024-06-25T19:07:54.601714498Z" level=error msg="StopPodSandbox for \"6c05d4f7f01b27f0430f187198eb9e1fdf67b785fdd709209337cdfcdc79de47\" failed" error="failed to destroy network for sandbox \"6c05d4f7f01b27f0430f187198eb9e1fdf67b785fdd709209337cdfcdc79de47\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 19:07:54.601991 kubelet[2569]: E0625 19:07:54.601971 2569 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6c05d4f7f01b27f0430f187198eb9e1fdf67b785fdd709209337cdfcdc79de47\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6c05d4f7f01b27f0430f187198eb9e1fdf67b785fdd709209337cdfcdc79de47" Jun 25 19:07:54.602051 kubelet[2569]: E0625 19:07:54.602010 2569 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6c05d4f7f01b27f0430f187198eb9e1fdf67b785fdd709209337cdfcdc79de47"} Jun 25 19:07:54.602051 kubelet[2569]: E0625 19:07:54.602049 2569 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2a923228-560b-43c0-8f94-d47d8f47139d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6c05d4f7f01b27f0430f187198eb9e1fdf67b785fdd709209337cdfcdc79de47\\\": 
plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 19:07:54.603235 kubelet[2569]: E0625 19:07:54.602084 2569 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2a923228-560b-43c0-8f94-d47d8f47139d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6c05d4f7f01b27f0430f187198eb9e1fdf67b785fdd709209337cdfcdc79de47\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-kxsw8" podUID="2a923228-560b-43c0-8f94-d47d8f47139d" Jun 25 19:07:55.254936 systemd[1]: Created slice kubepods-besteffort-pod1276367b_2bea_4184_b5ed_849c23171592.slice - libcontainer container kubepods-besteffort-pod1276367b_2bea_4184_b5ed_849c23171592.slice. 
Jun 25 19:07:55.261822 containerd[1458]: time="2024-06-25T19:07:55.261326858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r7zkp,Uid:1276367b-2bea-4184-b5ed-849c23171592,Namespace:calico-system,Attempt:0,}" Jun 25 19:07:55.372805 containerd[1458]: time="2024-06-25T19:07:55.372759321Z" level=error msg="Failed to destroy network for sandbox \"20e15757b4029b2c9f852fcc0c3c9c79c8e7530ee1bb6f567209fe8c2e6cbe9f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 19:07:55.373269 containerd[1458]: time="2024-06-25T19:07:55.373239561Z" level=error msg="encountered an error cleaning up failed sandbox \"20e15757b4029b2c9f852fcc0c3c9c79c8e7530ee1bb6f567209fe8c2e6cbe9f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 19:07:55.373382 containerd[1458]: time="2024-06-25T19:07:55.373356660Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r7zkp,Uid:1276367b-2bea-4184-b5ed-849c23171592,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"20e15757b4029b2c9f852fcc0c3c9c79c8e7530ee1bb6f567209fe8c2e6cbe9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 19:07:55.373670 kubelet[2569]: E0625 19:07:55.373650 2569 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20e15757b4029b2c9f852fcc0c3c9c79c8e7530ee1bb6f567209fe8c2e6cbe9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Jun 25 19:07:55.373929 kubelet[2569]: E0625 19:07:55.373808 2569 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20e15757b4029b2c9f852fcc0c3c9c79c8e7530ee1bb6f567209fe8c2e6cbe9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r7zkp" Jun 25 19:07:55.373929 kubelet[2569]: E0625 19:07:55.373839 2569 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20e15757b4029b2c9f852fcc0c3c9c79c8e7530ee1bb6f567209fe8c2e6cbe9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r7zkp" Jun 25 19:07:55.373929 kubelet[2569]: E0625 19:07:55.373903 2569 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-r7zkp_calico-system(1276367b-2bea-4184-b5ed-849c23171592)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-r7zkp_calico-system(1276367b-2bea-4184-b5ed-849c23171592)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"20e15757b4029b2c9f852fcc0c3c9c79c8e7530ee1bb6f567209fe8c2e6cbe9f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-r7zkp" podUID="1276367b-2bea-4184-b5ed-849c23171592" Jun 25 19:07:55.378836 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-20e15757b4029b2c9f852fcc0c3c9c79c8e7530ee1bb6f567209fe8c2e6cbe9f-shm.mount: Deactivated successfully. Jun 25 19:07:55.532508 kubelet[2569]: I0625 19:07:55.532210 2569 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20e15757b4029b2c9f852fcc0c3c9c79c8e7530ee1bb6f567209fe8c2e6cbe9f" Jun 25 19:07:55.536186 containerd[1458]: time="2024-06-25T19:07:55.534298632Z" level=info msg="StopPodSandbox for \"20e15757b4029b2c9f852fcc0c3c9c79c8e7530ee1bb6f567209fe8c2e6cbe9f\"" Jun 25 19:07:55.536186 containerd[1458]: time="2024-06-25T19:07:55.534715674Z" level=info msg="Ensure that sandbox 20e15757b4029b2c9f852fcc0c3c9c79c8e7530ee1bb6f567209fe8c2e6cbe9f in task-service has been cleanup successfully" Jun 25 19:07:55.575389 containerd[1458]: time="2024-06-25T19:07:55.575226991Z" level=error msg="StopPodSandbox for \"20e15757b4029b2c9f852fcc0c3c9c79c8e7530ee1bb6f567209fe8c2e6cbe9f\" failed" error="failed to destroy network for sandbox \"20e15757b4029b2c9f852fcc0c3c9c79c8e7530ee1bb6f567209fe8c2e6cbe9f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 19:07:55.576018 kubelet[2569]: E0625 19:07:55.575680 2569 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"20e15757b4029b2c9f852fcc0c3c9c79c8e7530ee1bb6f567209fe8c2e6cbe9f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="20e15757b4029b2c9f852fcc0c3c9c79c8e7530ee1bb6f567209fe8c2e6cbe9f" Jun 25 19:07:55.576018 kubelet[2569]: E0625 19:07:55.575824 2569 kuberuntime_manager.go:1380] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"20e15757b4029b2c9f852fcc0c3c9c79c8e7530ee1bb6f567209fe8c2e6cbe9f"} Jun 25 19:07:55.576018 kubelet[2569]: E0625 19:07:55.575922 2569 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1276367b-2bea-4184-b5ed-849c23171592\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"20e15757b4029b2c9f852fcc0c3c9c79c8e7530ee1bb6f567209fe8c2e6cbe9f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 19:07:55.576018 kubelet[2569]: E0625 19:07:55.575997 2569 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1276367b-2bea-4184-b5ed-849c23171592\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"20e15757b4029b2c9f852fcc0c3c9c79c8e7530ee1bb6f567209fe8c2e6cbe9f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-r7zkp" podUID="1276367b-2bea-4184-b5ed-849c23171592" Jun 25 19:08:02.858057 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2029982475.mount: Deactivated successfully. 
Jun 25 19:08:03.538482 containerd[1458]: time="2024-06-25T19:08:03.538291082Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 19:08:03.541788 containerd[1458]: time="2024-06-25T19:08:03.541633049Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=115238750" Jun 25 19:08:03.545717 containerd[1458]: time="2024-06-25T19:08:03.545398230Z" level=info msg="ImageCreate event name:\"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 19:08:03.557640 containerd[1458]: time="2024-06-25T19:08:03.557444989Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 19:08:03.562176 containerd[1458]: time="2024-06-25T19:08:03.562112292Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"115238612\" in 9.077306931s" Jun 25 19:08:03.562395 containerd[1458]: time="2024-06-25T19:08:03.562352813Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\"" Jun 25 19:08:03.632541 containerd[1458]: time="2024-06-25T19:08:03.632503799Z" level=info msg="CreateContainer within sandbox \"a9660890b908b8f4d1b28a29aa794c5336a03905530c6d8ed32854eb6bd0be0d\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jun 25 19:08:03.662257 containerd[1458]: time="2024-06-25T19:08:03.662223068Z" level=info 
msg="CreateContainer within sandbox \"a9660890b908b8f4d1b28a29aa794c5336a03905530c6d8ed32854eb6bd0be0d\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"8152887f0d5cd01cc8dd83a4b30be301488e050f83273c041994ac717c788a06\"" Jun 25 19:08:03.662882 containerd[1458]: time="2024-06-25T19:08:03.662861495Z" level=info msg="StartContainer for \"8152887f0d5cd01cc8dd83a4b30be301488e050f83273c041994ac717c788a06\"" Jun 25 19:08:03.702780 systemd[1]: Started cri-containerd-8152887f0d5cd01cc8dd83a4b30be301488e050f83273c041994ac717c788a06.scope - libcontainer container 8152887f0d5cd01cc8dd83a4b30be301488e050f83273c041994ac717c788a06. Jun 25 19:08:03.744766 containerd[1458]: time="2024-06-25T19:08:03.744330344Z" level=info msg="StartContainer for \"8152887f0d5cd01cc8dd83a4b30be301488e050f83273c041994ac717c788a06\" returns successfully" Jun 25 19:08:03.846081 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jun 25 19:08:03.846325 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jun 25 19:08:04.689014 kubelet[2569]: I0625 19:08:04.688888 2569 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-x8sk5" podStartSLOduration=2.5670719220000002 podCreationTimestamp="2024-06-25 19:07:44 +0000 UTC" firstStartedPulling="2024-06-25 19:07:45.445520623 +0000 UTC m=+27.493429794" lastFinishedPulling="2024-06-25 19:08:03.567148976 +0000 UTC m=+45.615058147" observedRunningTime="2024-06-25 19:08:04.682983225 +0000 UTC m=+46.730892396" watchObservedRunningTime="2024-06-25 19:08:04.688700275 +0000 UTC m=+46.736609506" Jun 25 19:08:07.245323 containerd[1458]: time="2024-06-25T19:08:07.243219446Z" level=info msg="StopPodSandbox for \"e47b070cb38fa78d406e523889a4b2713f6ccbe63fd4df04b2ffa250c8dd8a2b\"" Jun 25 19:08:07.245323 containerd[1458]: time="2024-06-25T19:08:07.243785166Z" level=info msg="StopPodSandbox for \"20e15757b4029b2c9f852fcc0c3c9c79c8e7530ee1bb6f567209fe8c2e6cbe9f\"" Jun 25 19:08:07.247984 containerd[1458]: time="2024-06-25T19:08:07.247516024Z" level=info msg="StopPodSandbox for \"4e5c4a18bda1829fd9a90bce9a6cada7714a2e297fa9879b3f0bcddfa4e3996c\"" Jun 25 19:08:07.749974 containerd[1458]: 2024-06-25 19:08:07.398 [INFO][4083] k8s.go 608: Cleaning up netns ContainerID="4e5c4a18bda1829fd9a90bce9a6cada7714a2e297fa9879b3f0bcddfa4e3996c" Jun 25 19:08:07.749974 containerd[1458]: 2024-06-25 19:08:07.398 [INFO][4083] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="4e5c4a18bda1829fd9a90bce9a6cada7714a2e297fa9879b3f0bcddfa4e3996c" iface="eth0" netns="/var/run/netns/cni-ac6fc6c0-626e-9bda-efc1-3dc821edc60e" Jun 25 19:08:07.749974 containerd[1458]: 2024-06-25 19:08:07.399 [INFO][4083] dataplane_linux.go 541: Entered netns, deleting veth. 
ContainerID="4e5c4a18bda1829fd9a90bce9a6cada7714a2e297fa9879b3f0bcddfa4e3996c" iface="eth0" netns="/var/run/netns/cni-ac6fc6c0-626e-9bda-efc1-3dc821edc60e" Jun 25 19:08:07.749974 containerd[1458]: 2024-06-25 19:08:07.400 [INFO][4083] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="4e5c4a18bda1829fd9a90bce9a6cada7714a2e297fa9879b3f0bcddfa4e3996c" iface="eth0" netns="/var/run/netns/cni-ac6fc6c0-626e-9bda-efc1-3dc821edc60e" Jun 25 19:08:07.749974 containerd[1458]: 2024-06-25 19:08:07.400 [INFO][4083] k8s.go 615: Releasing IP address(es) ContainerID="4e5c4a18bda1829fd9a90bce9a6cada7714a2e297fa9879b3f0bcddfa4e3996c" Jun 25 19:08:07.749974 containerd[1458]: 2024-06-25 19:08:07.400 [INFO][4083] utils.go 188: Calico CNI releasing IP address ContainerID="4e5c4a18bda1829fd9a90bce9a6cada7714a2e297fa9879b3f0bcddfa4e3996c" Jun 25 19:08:07.749974 containerd[1458]: 2024-06-25 19:08:07.721 [INFO][4105] ipam_plugin.go 411: Releasing address using handleID ContainerID="4e5c4a18bda1829fd9a90bce9a6cada7714a2e297fa9879b3f0bcddfa4e3996c" HandleID="k8s-pod-network.4e5c4a18bda1829fd9a90bce9a6cada7714a2e297fa9879b3f0bcddfa4e3996c" Workload="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-calico--kube--controllers--948b949f9--lffnp-eth0" Jun 25 19:08:07.749974 containerd[1458]: 2024-06-25 19:08:07.722 [INFO][4105] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 19:08:07.749974 containerd[1458]: 2024-06-25 19:08:07.723 [INFO][4105] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 19:08:07.749974 containerd[1458]: 2024-06-25 19:08:07.740 [WARNING][4105] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4e5c4a18bda1829fd9a90bce9a6cada7714a2e297fa9879b3f0bcddfa4e3996c" HandleID="k8s-pod-network.4e5c4a18bda1829fd9a90bce9a6cada7714a2e297fa9879b3f0bcddfa4e3996c" Workload="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-calico--kube--controllers--948b949f9--lffnp-eth0" Jun 25 19:08:07.749974 containerd[1458]: 2024-06-25 19:08:07.740 [INFO][4105] ipam_plugin.go 439: Releasing address using workloadID ContainerID="4e5c4a18bda1829fd9a90bce9a6cada7714a2e297fa9879b3f0bcddfa4e3996c" HandleID="k8s-pod-network.4e5c4a18bda1829fd9a90bce9a6cada7714a2e297fa9879b3f0bcddfa4e3996c" Workload="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-calico--kube--controllers--948b949f9--lffnp-eth0" Jun 25 19:08:07.749974 containerd[1458]: 2024-06-25 19:08:07.743 [INFO][4105] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 19:08:07.749974 containerd[1458]: 2024-06-25 19:08:07.747 [INFO][4083] k8s.go 621: Teardown processing complete. ContainerID="4e5c4a18bda1829fd9a90bce9a6cada7714a2e297fa9879b3f0bcddfa4e3996c" Jun 25 19:08:07.753338 containerd[1458]: time="2024-06-25T19:08:07.752829064Z" level=info msg="TearDown network for sandbox \"4e5c4a18bda1829fd9a90bce9a6cada7714a2e297fa9879b3f0bcddfa4e3996c\" successfully" Jun 25 19:08:07.753338 containerd[1458]: time="2024-06-25T19:08:07.752891692Z" level=info msg="StopPodSandbox for \"4e5c4a18bda1829fd9a90bce9a6cada7714a2e297fa9879b3f0bcddfa4e3996c\" returns successfully" Jun 25 19:08:07.754042 systemd[1]: run-netns-cni\x2dac6fc6c0\x2d626e\x2d9bda\x2defc1\x2d3dc821edc60e.mount: Deactivated successfully. 
Jun 25 19:08:07.756766 containerd[1458]: time="2024-06-25T19:08:07.756350708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-948b949f9-lffnp,Uid:64e26d3e-5506-4e69-921b-3b06d3154cdc,Namespace:calico-system,Attempt:1,}" Jun 25 19:08:07.772878 containerd[1458]: 2024-06-25 19:08:07.388 [INFO][4086] k8s.go 608: Cleaning up netns ContainerID="e47b070cb38fa78d406e523889a4b2713f6ccbe63fd4df04b2ffa250c8dd8a2b" Jun 25 19:08:07.772878 containerd[1458]: 2024-06-25 19:08:07.388 [INFO][4086] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="e47b070cb38fa78d406e523889a4b2713f6ccbe63fd4df04b2ffa250c8dd8a2b" iface="eth0" netns="/var/run/netns/cni-f92aae25-89f5-33cc-59d3-d543c69a527c" Jun 25 19:08:07.772878 containerd[1458]: 2024-06-25 19:08:07.389 [INFO][4086] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="e47b070cb38fa78d406e523889a4b2713f6ccbe63fd4df04b2ffa250c8dd8a2b" iface="eth0" netns="/var/run/netns/cni-f92aae25-89f5-33cc-59d3-d543c69a527c" Jun 25 19:08:07.772878 containerd[1458]: 2024-06-25 19:08:07.390 [INFO][4086] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="e47b070cb38fa78d406e523889a4b2713f6ccbe63fd4df04b2ffa250c8dd8a2b" iface="eth0" netns="/var/run/netns/cni-f92aae25-89f5-33cc-59d3-d543c69a527c" Jun 25 19:08:07.772878 containerd[1458]: 2024-06-25 19:08:07.390 [INFO][4086] k8s.go 615: Releasing IP address(es) ContainerID="e47b070cb38fa78d406e523889a4b2713f6ccbe63fd4df04b2ffa250c8dd8a2b" Jun 25 19:08:07.772878 containerd[1458]: 2024-06-25 19:08:07.390 [INFO][4086] utils.go 188: Calico CNI releasing IP address ContainerID="e47b070cb38fa78d406e523889a4b2713f6ccbe63fd4df04b2ffa250c8dd8a2b" Jun 25 19:08:07.772878 containerd[1458]: 2024-06-25 19:08:07.721 [INFO][4103] ipam_plugin.go 411: Releasing address using handleID ContainerID="e47b070cb38fa78d406e523889a4b2713f6ccbe63fd4df04b2ffa250c8dd8a2b" HandleID="k8s-pod-network.e47b070cb38fa78d406e523889a4b2713f6ccbe63fd4df04b2ffa250c8dd8a2b" Workload="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-coredns--5dd5756b68--wkwjk-eth0" Jun 25 19:08:07.772878 containerd[1458]: 2024-06-25 19:08:07.722 [INFO][4103] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 19:08:07.772878 containerd[1458]: 2024-06-25 19:08:07.743 [INFO][4103] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 19:08:07.772878 containerd[1458]: 2024-06-25 19:08:07.755 [WARNING][4103] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e47b070cb38fa78d406e523889a4b2713f6ccbe63fd4df04b2ffa250c8dd8a2b" HandleID="k8s-pod-network.e47b070cb38fa78d406e523889a4b2713f6ccbe63fd4df04b2ffa250c8dd8a2b" Workload="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-coredns--5dd5756b68--wkwjk-eth0" Jun 25 19:08:07.772878 containerd[1458]: 2024-06-25 19:08:07.755 [INFO][4103] ipam_plugin.go 439: Releasing address using workloadID ContainerID="e47b070cb38fa78d406e523889a4b2713f6ccbe63fd4df04b2ffa250c8dd8a2b" HandleID="k8s-pod-network.e47b070cb38fa78d406e523889a4b2713f6ccbe63fd4df04b2ffa250c8dd8a2b" Workload="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-coredns--5dd5756b68--wkwjk-eth0" Jun 25 19:08:07.772878 containerd[1458]: 2024-06-25 19:08:07.758 [INFO][4103] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 19:08:07.772878 containerd[1458]: 2024-06-25 19:08:07.760 [INFO][4086] k8s.go 621: Teardown processing complete. ContainerID="e47b070cb38fa78d406e523889a4b2713f6ccbe63fd4df04b2ffa250c8dd8a2b" Jun 25 19:08:07.774190 containerd[1458]: time="2024-06-25T19:08:07.773923802Z" level=info msg="TearDown network for sandbox \"e47b070cb38fa78d406e523889a4b2713f6ccbe63fd4df04b2ffa250c8dd8a2b\" successfully" Jun 25 19:08:07.774190 containerd[1458]: time="2024-06-25T19:08:07.773955742Z" level=info msg="StopPodSandbox for \"e47b070cb38fa78d406e523889a4b2713f6ccbe63fd4df04b2ffa250c8dd8a2b\" returns successfully" Jun 25 19:08:07.781943 containerd[1458]: time="2024-06-25T19:08:07.777272251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-wkwjk,Uid:b8d477a9-5ba0-42a2-8679-d220a4893fb5,Namespace:kube-system,Attempt:1,}" Jun 25 19:08:07.778058 systemd[1]: run-netns-cni\x2df92aae25\x2d89f5\x2d33cc\x2d59d3\x2dd543c69a527c.mount: Deactivated successfully. 
Jun 25 19:08:07.784705 containerd[1458]: 2024-06-25 19:08:07.392 [INFO][4084] k8s.go 608: Cleaning up netns ContainerID="20e15757b4029b2c9f852fcc0c3c9c79c8e7530ee1bb6f567209fe8c2e6cbe9f" Jun 25 19:08:07.784705 containerd[1458]: 2024-06-25 19:08:07.392 [INFO][4084] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="20e15757b4029b2c9f852fcc0c3c9c79c8e7530ee1bb6f567209fe8c2e6cbe9f" iface="eth0" netns="/var/run/netns/cni-051cfa95-5c51-0950-917b-a834e187192c" Jun 25 19:08:07.784705 containerd[1458]: 2024-06-25 19:08:07.393 [INFO][4084] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="20e15757b4029b2c9f852fcc0c3c9c79c8e7530ee1bb6f567209fe8c2e6cbe9f" iface="eth0" netns="/var/run/netns/cni-051cfa95-5c51-0950-917b-a834e187192c" Jun 25 19:08:07.784705 containerd[1458]: 2024-06-25 19:08:07.393 [INFO][4084] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="20e15757b4029b2c9f852fcc0c3c9c79c8e7530ee1bb6f567209fe8c2e6cbe9f" iface="eth0" netns="/var/run/netns/cni-051cfa95-5c51-0950-917b-a834e187192c" Jun 25 19:08:07.784705 containerd[1458]: 2024-06-25 19:08:07.393 [INFO][4084] k8s.go 615: Releasing IP address(es) ContainerID="20e15757b4029b2c9f852fcc0c3c9c79c8e7530ee1bb6f567209fe8c2e6cbe9f" Jun 25 19:08:07.784705 containerd[1458]: 2024-06-25 19:08:07.394 [INFO][4084] utils.go 188: Calico CNI releasing IP address ContainerID="20e15757b4029b2c9f852fcc0c3c9c79c8e7530ee1bb6f567209fe8c2e6cbe9f" Jun 25 19:08:07.784705 containerd[1458]: 2024-06-25 19:08:07.722 [INFO][4104] ipam_plugin.go 411: Releasing address using handleID ContainerID="20e15757b4029b2c9f852fcc0c3c9c79c8e7530ee1bb6f567209fe8c2e6cbe9f" HandleID="k8s-pod-network.20e15757b4029b2c9f852fcc0c3c9c79c8e7530ee1bb6f567209fe8c2e6cbe9f" Workload="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-csi--node--driver--r7zkp-eth0" Jun 25 19:08:07.784705 containerd[1458]: 2024-06-25 19:08:07.722 [INFO][4104] ipam_plugin.go 352: About to acquire host-wide IPAM lock. 
Jun 25 19:08:07.784705 containerd[1458]: 2024-06-25 19:08:07.759 [INFO][4104] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 19:08:07.784705 containerd[1458]: 2024-06-25 19:08:07.773 [WARNING][4104] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="20e15757b4029b2c9f852fcc0c3c9c79c8e7530ee1bb6f567209fe8c2e6cbe9f" HandleID="k8s-pod-network.20e15757b4029b2c9f852fcc0c3c9c79c8e7530ee1bb6f567209fe8c2e6cbe9f" Workload="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-csi--node--driver--r7zkp-eth0" Jun 25 19:08:07.784705 containerd[1458]: 2024-06-25 19:08:07.774 [INFO][4104] ipam_plugin.go 439: Releasing address using workloadID ContainerID="20e15757b4029b2c9f852fcc0c3c9c79c8e7530ee1bb6f567209fe8c2e6cbe9f" HandleID="k8s-pod-network.20e15757b4029b2c9f852fcc0c3c9c79c8e7530ee1bb6f567209fe8c2e6cbe9f" Workload="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-csi--node--driver--r7zkp-eth0" Jun 25 19:08:07.784705 containerd[1458]: 2024-06-25 19:08:07.779 [INFO][4104] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 19:08:07.784705 containerd[1458]: 2024-06-25 19:08:07.782 [INFO][4084] k8s.go 621: Teardown processing complete. ContainerID="20e15757b4029b2c9f852fcc0c3c9c79c8e7530ee1bb6f567209fe8c2e6cbe9f" Jun 25 19:08:07.787710 systemd[1]: run-netns-cni\x2d051cfa95\x2d5c51\x2d0950\x2d917b\x2da834e187192c.mount: Deactivated successfully. 
Jun 25 19:08:07.791330 containerd[1458]: time="2024-06-25T19:08:07.787950637Z" level=info msg="TearDown network for sandbox \"20e15757b4029b2c9f852fcc0c3c9c79c8e7530ee1bb6f567209fe8c2e6cbe9f\" successfully" Jun 25 19:08:07.791330 containerd[1458]: time="2024-06-25T19:08:07.787978178Z" level=info msg="StopPodSandbox for \"20e15757b4029b2c9f852fcc0c3c9c79c8e7530ee1bb6f567209fe8c2e6cbe9f\" returns successfully" Jun 25 19:08:07.792483 containerd[1458]: time="2024-06-25T19:08:07.791524288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r7zkp,Uid:1276367b-2bea-4184-b5ed-849c23171592,Namespace:calico-system,Attempt:1,}" Jun 25 19:08:08.040112 systemd-networkd[1361]: cali547a8d41047: Link UP Jun 25 19:08:08.040315 systemd-networkd[1361]: cali547a8d41047: Gained carrier Jun 25 19:08:08.064897 containerd[1458]: 2024-06-25 19:08:07.868 [INFO][4144] utils.go 100: File /var/lib/calico/mtu does not exist Jun 25 19:08:08.064897 containerd[1458]: 2024-06-25 19:08:07.901 [INFO][4144] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4012--0--0--8--d63f105dc7.novalocal-k8s-calico--kube--controllers--948b949f9--lffnp-eth0 calico-kube-controllers-948b949f9- calico-system 64e26d3e-5506-4e69-921b-3b06d3154cdc 774 0 2024-06-25 19:07:38 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:948b949f9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4012-0-0-8-d63f105dc7.novalocal calico-kube-controllers-948b949f9-lffnp eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali547a8d41047 [] []}} ContainerID="50dea36c846fadf2031446699ec3963cb85f66bd56ec7f129a950f77a25d444c" Namespace="calico-system" Pod="calico-kube-controllers-948b949f9-lffnp" 
WorkloadEndpoint="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-calico--kube--controllers--948b949f9--lffnp-" Jun 25 19:08:08.064897 containerd[1458]: 2024-06-25 19:08:07.902 [INFO][4144] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="50dea36c846fadf2031446699ec3963cb85f66bd56ec7f129a950f77a25d444c" Namespace="calico-system" Pod="calico-kube-controllers-948b949f9-lffnp" WorkloadEndpoint="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-calico--kube--controllers--948b949f9--lffnp-eth0" Jun 25 19:08:08.064897 containerd[1458]: 2024-06-25 19:08:07.958 [INFO][4179] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="50dea36c846fadf2031446699ec3963cb85f66bd56ec7f129a950f77a25d444c" HandleID="k8s-pod-network.50dea36c846fadf2031446699ec3963cb85f66bd56ec7f129a950f77a25d444c" Workload="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-calico--kube--controllers--948b949f9--lffnp-eth0" Jun 25 19:08:08.064897 containerd[1458]: 2024-06-25 19:08:07.975 [INFO][4179] ipam_plugin.go 264: Auto assigning IP ContainerID="50dea36c846fadf2031446699ec3963cb85f66bd56ec7f129a950f77a25d444c" HandleID="k8s-pod-network.50dea36c846fadf2031446699ec3963cb85f66bd56ec7f129a950f77a25d444c" Workload="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-calico--kube--controllers--948b949f9--lffnp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00030af00), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4012-0-0-8-d63f105dc7.novalocal", "pod":"calico-kube-controllers-948b949f9-lffnp", "timestamp":"2024-06-25 19:08:07.958780797 +0000 UTC"}, Hostname:"ci-4012-0-0-8-d63f105dc7.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 19:08:08.064897 containerd[1458]: 2024-06-25 19:08:07.975 [INFO][4179] ipam_plugin.go 352: About to acquire host-wide IPAM lock. 
Jun 25 19:08:08.064897 containerd[1458]: 2024-06-25 19:08:07.975 [INFO][4179] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 19:08:08.064897 containerd[1458]: 2024-06-25 19:08:07.975 [INFO][4179] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4012-0-0-8-d63f105dc7.novalocal' Jun 25 19:08:08.064897 containerd[1458]: 2024-06-25 19:08:07.977 [INFO][4179] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.50dea36c846fadf2031446699ec3963cb85f66bd56ec7f129a950f77a25d444c" host="ci-4012-0-0-8-d63f105dc7.novalocal" Jun 25 19:08:08.064897 containerd[1458]: 2024-06-25 19:08:07.997 [INFO][4179] ipam.go 372: Looking up existing affinities for host host="ci-4012-0-0-8-d63f105dc7.novalocal" Jun 25 19:08:08.064897 containerd[1458]: 2024-06-25 19:08:08.004 [INFO][4179] ipam.go 489: Trying affinity for 192.168.85.64/26 host="ci-4012-0-0-8-d63f105dc7.novalocal" Jun 25 19:08:08.064897 containerd[1458]: 2024-06-25 19:08:08.006 [INFO][4179] ipam.go 155: Attempting to load block cidr=192.168.85.64/26 host="ci-4012-0-0-8-d63f105dc7.novalocal" Jun 25 19:08:08.064897 containerd[1458]: 2024-06-25 19:08:08.009 [INFO][4179] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.85.64/26 host="ci-4012-0-0-8-d63f105dc7.novalocal" Jun 25 19:08:08.064897 containerd[1458]: 2024-06-25 19:08:08.009 [INFO][4179] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.85.64/26 handle="k8s-pod-network.50dea36c846fadf2031446699ec3963cb85f66bd56ec7f129a950f77a25d444c" host="ci-4012-0-0-8-d63f105dc7.novalocal" Jun 25 19:08:08.064897 containerd[1458]: 2024-06-25 19:08:08.010 [INFO][4179] ipam.go 1685: Creating new handle: k8s-pod-network.50dea36c846fadf2031446699ec3963cb85f66bd56ec7f129a950f77a25d444c Jun 25 19:08:08.064897 containerd[1458]: 2024-06-25 19:08:08.015 [INFO][4179] ipam.go 1203: Writing block in order to claim IPs block=192.168.85.64/26 
handle="k8s-pod-network.50dea36c846fadf2031446699ec3963cb85f66bd56ec7f129a950f77a25d444c" host="ci-4012-0-0-8-d63f105dc7.novalocal" Jun 25 19:08:08.064897 containerd[1458]: 2024-06-25 19:08:08.020 [INFO][4179] ipam.go 1216: Successfully claimed IPs: [192.168.85.65/26] block=192.168.85.64/26 handle="k8s-pod-network.50dea36c846fadf2031446699ec3963cb85f66bd56ec7f129a950f77a25d444c" host="ci-4012-0-0-8-d63f105dc7.novalocal" Jun 25 19:08:08.064897 containerd[1458]: 2024-06-25 19:08:08.020 [INFO][4179] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.85.65/26] handle="k8s-pod-network.50dea36c846fadf2031446699ec3963cb85f66bd56ec7f129a950f77a25d444c" host="ci-4012-0-0-8-d63f105dc7.novalocal" Jun 25 19:08:08.064897 containerd[1458]: 2024-06-25 19:08:08.020 [INFO][4179] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 19:08:08.064897 containerd[1458]: 2024-06-25 19:08:08.020 [INFO][4179] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.85.65/26] IPv6=[] ContainerID="50dea36c846fadf2031446699ec3963cb85f66bd56ec7f129a950f77a25d444c" HandleID="k8s-pod-network.50dea36c846fadf2031446699ec3963cb85f66bd56ec7f129a950f77a25d444c" Workload="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-calico--kube--controllers--948b949f9--lffnp-eth0" Jun 25 19:08:08.066427 containerd[1458]: 2024-06-25 19:08:08.022 [INFO][4144] k8s.go 386: Populated endpoint ContainerID="50dea36c846fadf2031446699ec3963cb85f66bd56ec7f129a950f77a25d444c" Namespace="calico-system" Pod="calico-kube-controllers-948b949f9-lffnp" WorkloadEndpoint="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-calico--kube--controllers--948b949f9--lffnp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012--0--0--8--d63f105dc7.novalocal-k8s-calico--kube--controllers--948b949f9--lffnp-eth0", GenerateName:"calico-kube-controllers-948b949f9-", Namespace:"calico-system", SelfLink:"", 
UID:"64e26d3e-5506-4e69-921b-3b06d3154cdc", ResourceVersion:"774", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 19, 7, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"948b949f9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012-0-0-8-d63f105dc7.novalocal", ContainerID:"", Pod:"calico-kube-controllers-948b949f9-lffnp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.85.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali547a8d41047", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 19:08:08.066427 containerd[1458]: 2024-06-25 19:08:08.022 [INFO][4144] k8s.go 387: Calico CNI using IPs: [192.168.85.65/32] ContainerID="50dea36c846fadf2031446699ec3963cb85f66bd56ec7f129a950f77a25d444c" Namespace="calico-system" Pod="calico-kube-controllers-948b949f9-lffnp" WorkloadEndpoint="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-calico--kube--controllers--948b949f9--lffnp-eth0" Jun 25 19:08:08.066427 containerd[1458]: 2024-06-25 19:08:08.022 [INFO][4144] dataplane_linux.go 68: Setting the host side veth name to cali547a8d41047 ContainerID="50dea36c846fadf2031446699ec3963cb85f66bd56ec7f129a950f77a25d444c" Namespace="calico-system" Pod="calico-kube-controllers-948b949f9-lffnp" 
WorkloadEndpoint="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-calico--kube--controllers--948b949f9--lffnp-eth0" Jun 25 19:08:08.066427 containerd[1458]: 2024-06-25 19:08:08.033 [INFO][4144] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="50dea36c846fadf2031446699ec3963cb85f66bd56ec7f129a950f77a25d444c" Namespace="calico-system" Pod="calico-kube-controllers-948b949f9-lffnp" WorkloadEndpoint="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-calico--kube--controllers--948b949f9--lffnp-eth0" Jun 25 19:08:08.066427 containerd[1458]: 2024-06-25 19:08:08.038 [INFO][4144] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="50dea36c846fadf2031446699ec3963cb85f66bd56ec7f129a950f77a25d444c" Namespace="calico-system" Pod="calico-kube-controllers-948b949f9-lffnp" WorkloadEndpoint="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-calico--kube--controllers--948b949f9--lffnp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012--0--0--8--d63f105dc7.novalocal-k8s-calico--kube--controllers--948b949f9--lffnp-eth0", GenerateName:"calico-kube-controllers-948b949f9-", Namespace:"calico-system", SelfLink:"", UID:"64e26d3e-5506-4e69-921b-3b06d3154cdc", ResourceVersion:"774", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 19, 7, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"948b949f9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"ci-4012-0-0-8-d63f105dc7.novalocal", ContainerID:"50dea36c846fadf2031446699ec3963cb85f66bd56ec7f129a950f77a25d444c", Pod:"calico-kube-controllers-948b949f9-lffnp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.85.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali547a8d41047", MAC:"ee:db:28:eb:ed:61", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 19:08:08.066427 containerd[1458]: 2024-06-25 19:08:08.059 [INFO][4144] k8s.go 500: Wrote updated endpoint to datastore ContainerID="50dea36c846fadf2031446699ec3963cb85f66bd56ec7f129a950f77a25d444c" Namespace="calico-system" Pod="calico-kube-controllers-948b949f9-lffnp" WorkloadEndpoint="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-calico--kube--controllers--948b949f9--lffnp-eth0" Jun 25 19:08:08.106347 systemd-networkd[1361]: cali84283737640: Link UP Jun 25 19:08:08.107898 systemd-networkd[1361]: cali84283737640: Gained carrier Jun 25 19:08:08.134862 containerd[1458]: 2024-06-25 19:08:07.897 [INFO][4165] utils.go 100: File /var/lib/calico/mtu does not exist Jun 25 19:08:08.134862 containerd[1458]: 2024-06-25 19:08:07.921 [INFO][4165] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4012--0--0--8--d63f105dc7.novalocal-k8s-csi--node--driver--r7zkp-eth0 csi-node-driver- calico-system 1276367b-2bea-4184-b5ed-849c23171592 773 0 2024-06-25 19:07:37 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7d7f6c786c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ci-4012-0-0-8-d63f105dc7.novalocal csi-node-driver-r7zkp eth0 default [] [] [kns.calico-system ksa.calico-system.default] 
cali84283737640 [] []}} ContainerID="33dddd0be8dc1f52683b792fb295acbd261309ec15ed07830594ad5c7d73340c" Namespace="calico-system" Pod="csi-node-driver-r7zkp" WorkloadEndpoint="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-csi--node--driver--r7zkp-" Jun 25 19:08:08.134862 containerd[1458]: 2024-06-25 19:08:07.921 [INFO][4165] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="33dddd0be8dc1f52683b792fb295acbd261309ec15ed07830594ad5c7d73340c" Namespace="calico-system" Pod="csi-node-driver-r7zkp" WorkloadEndpoint="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-csi--node--driver--r7zkp-eth0" Jun 25 19:08:08.134862 containerd[1458]: 2024-06-25 19:08:07.988 [INFO][4183] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="33dddd0be8dc1f52683b792fb295acbd261309ec15ed07830594ad5c7d73340c" HandleID="k8s-pod-network.33dddd0be8dc1f52683b792fb295acbd261309ec15ed07830594ad5c7d73340c" Workload="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-csi--node--driver--r7zkp-eth0" Jun 25 19:08:08.134862 containerd[1458]: 2024-06-25 19:08:08.000 [INFO][4183] ipam_plugin.go 264: Auto assigning IP ContainerID="33dddd0be8dc1f52683b792fb295acbd261309ec15ed07830594ad5c7d73340c" HandleID="k8s-pod-network.33dddd0be8dc1f52683b792fb295acbd261309ec15ed07830594ad5c7d73340c" Workload="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-csi--node--driver--r7zkp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000585d10), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4012-0-0-8-d63f105dc7.novalocal", "pod":"csi-node-driver-r7zkp", "timestamp":"2024-06-25 19:08:07.988452209 +0000 UTC"}, Hostname:"ci-4012-0-0-8-d63f105dc7.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 19:08:08.134862 containerd[1458]: 2024-06-25 19:08:08.000 [INFO][4183] ipam_plugin.go 352: About to acquire 
host-wide IPAM lock. Jun 25 19:08:08.134862 containerd[1458]: 2024-06-25 19:08:08.020 [INFO][4183] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 19:08:08.134862 containerd[1458]: 2024-06-25 19:08:08.020 [INFO][4183] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4012-0-0-8-d63f105dc7.novalocal' Jun 25 19:08:08.134862 containerd[1458]: 2024-06-25 19:08:08.024 [INFO][4183] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.33dddd0be8dc1f52683b792fb295acbd261309ec15ed07830594ad5c7d73340c" host="ci-4012-0-0-8-d63f105dc7.novalocal" Jun 25 19:08:08.134862 containerd[1458]: 2024-06-25 19:08:08.035 [INFO][4183] ipam.go 372: Looking up existing affinities for host host="ci-4012-0-0-8-d63f105dc7.novalocal" Jun 25 19:08:08.134862 containerd[1458]: 2024-06-25 19:08:08.059 [INFO][4183] ipam.go 489: Trying affinity for 192.168.85.64/26 host="ci-4012-0-0-8-d63f105dc7.novalocal" Jun 25 19:08:08.134862 containerd[1458]: 2024-06-25 19:08:08.069 [INFO][4183] ipam.go 155: Attempting to load block cidr=192.168.85.64/26 host="ci-4012-0-0-8-d63f105dc7.novalocal" Jun 25 19:08:08.134862 containerd[1458]: 2024-06-25 19:08:08.074 [INFO][4183] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.85.64/26 host="ci-4012-0-0-8-d63f105dc7.novalocal" Jun 25 19:08:08.134862 containerd[1458]: 2024-06-25 19:08:08.074 [INFO][4183] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.85.64/26 handle="k8s-pod-network.33dddd0be8dc1f52683b792fb295acbd261309ec15ed07830594ad5c7d73340c" host="ci-4012-0-0-8-d63f105dc7.novalocal" Jun 25 19:08:08.134862 containerd[1458]: 2024-06-25 19:08:08.077 [INFO][4183] ipam.go 1685: Creating new handle: k8s-pod-network.33dddd0be8dc1f52683b792fb295acbd261309ec15ed07830594ad5c7d73340c Jun 25 19:08:08.134862 containerd[1458]: 2024-06-25 19:08:08.084 [INFO][4183] ipam.go 1203: Writing block in order to claim IPs block=192.168.85.64/26 
handle="k8s-pod-network.33dddd0be8dc1f52683b792fb295acbd261309ec15ed07830594ad5c7d73340c" host="ci-4012-0-0-8-d63f105dc7.novalocal" Jun 25 19:08:08.134862 containerd[1458]: 2024-06-25 19:08:08.098 [INFO][4183] ipam.go 1216: Successfully claimed IPs: [192.168.85.66/26] block=192.168.85.64/26 handle="k8s-pod-network.33dddd0be8dc1f52683b792fb295acbd261309ec15ed07830594ad5c7d73340c" host="ci-4012-0-0-8-d63f105dc7.novalocal" Jun 25 19:08:08.134862 containerd[1458]: 2024-06-25 19:08:08.098 [INFO][4183] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.85.66/26] handle="k8s-pod-network.33dddd0be8dc1f52683b792fb295acbd261309ec15ed07830594ad5c7d73340c" host="ci-4012-0-0-8-d63f105dc7.novalocal" Jun 25 19:08:08.134862 containerd[1458]: 2024-06-25 19:08:08.098 [INFO][4183] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 19:08:08.134862 containerd[1458]: 2024-06-25 19:08:08.098 [INFO][4183] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.85.66/26] IPv6=[] ContainerID="33dddd0be8dc1f52683b792fb295acbd261309ec15ed07830594ad5c7d73340c" HandleID="k8s-pod-network.33dddd0be8dc1f52683b792fb295acbd261309ec15ed07830594ad5c7d73340c" Workload="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-csi--node--driver--r7zkp-eth0" Jun 25 19:08:08.135508 containerd[1458]: 2024-06-25 19:08:08.101 [INFO][4165] k8s.go 386: Populated endpoint ContainerID="33dddd0be8dc1f52683b792fb295acbd261309ec15ed07830594ad5c7d73340c" Namespace="calico-system" Pod="csi-node-driver-r7zkp" WorkloadEndpoint="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-csi--node--driver--r7zkp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012--0--0--8--d63f105dc7.novalocal-k8s-csi--node--driver--r7zkp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1276367b-2bea-4184-b5ed-849c23171592", ResourceVersion:"773", Generation:0, CreationTimestamp:time.Date(2024, time.June, 
25, 19, 7, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012-0-0-8-d63f105dc7.novalocal", ContainerID:"", Pod:"csi-node-driver-r7zkp", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.85.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali84283737640", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 19:08:08.135508 containerd[1458]: 2024-06-25 19:08:08.102 [INFO][4165] k8s.go 387: Calico CNI using IPs: [192.168.85.66/32] ContainerID="33dddd0be8dc1f52683b792fb295acbd261309ec15ed07830594ad5c7d73340c" Namespace="calico-system" Pod="csi-node-driver-r7zkp" WorkloadEndpoint="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-csi--node--driver--r7zkp-eth0" Jun 25 19:08:08.135508 containerd[1458]: 2024-06-25 19:08:08.102 [INFO][4165] dataplane_linux.go 68: Setting the host side veth name to cali84283737640 ContainerID="33dddd0be8dc1f52683b792fb295acbd261309ec15ed07830594ad5c7d73340c" Namespace="calico-system" Pod="csi-node-driver-r7zkp" WorkloadEndpoint="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-csi--node--driver--r7zkp-eth0" Jun 25 19:08:08.135508 containerd[1458]: 2024-06-25 19:08:08.108 [INFO][4165] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="33dddd0be8dc1f52683b792fb295acbd261309ec15ed07830594ad5c7d73340c" 
Namespace="calico-system" Pod="csi-node-driver-r7zkp" WorkloadEndpoint="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-csi--node--driver--r7zkp-eth0" Jun 25 19:08:08.135508 containerd[1458]: 2024-06-25 19:08:08.109 [INFO][4165] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="33dddd0be8dc1f52683b792fb295acbd261309ec15ed07830594ad5c7d73340c" Namespace="calico-system" Pod="csi-node-driver-r7zkp" WorkloadEndpoint="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-csi--node--driver--r7zkp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012--0--0--8--d63f105dc7.novalocal-k8s-csi--node--driver--r7zkp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1276367b-2bea-4184-b5ed-849c23171592", ResourceVersion:"773", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 19, 7, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012-0-0-8-d63f105dc7.novalocal", ContainerID:"33dddd0be8dc1f52683b792fb295acbd261309ec15ed07830594ad5c7d73340c", Pod:"csi-node-driver-r7zkp", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.85.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali84283737640", 
MAC:"3e:d5:2a:2d:af:d6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 19:08:08.135508 containerd[1458]: 2024-06-25 19:08:08.130 [INFO][4165] k8s.go 500: Wrote updated endpoint to datastore ContainerID="33dddd0be8dc1f52683b792fb295acbd261309ec15ed07830594ad5c7d73340c" Namespace="calico-system" Pod="csi-node-driver-r7zkp" WorkloadEndpoint="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-csi--node--driver--r7zkp-eth0" Jun 25 19:08:08.148131 containerd[1458]: time="2024-06-25T19:08:08.146283594Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 19:08:08.148131 containerd[1458]: time="2024-06-25T19:08:08.146481455Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 19:08:08.148131 containerd[1458]: time="2024-06-25T19:08:08.146521851Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 19:08:08.148131 containerd[1458]: time="2024-06-25T19:08:08.146607412Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 19:08:08.167648 systemd-networkd[1361]: calicb7ca39abc9: Link UP Jun 25 19:08:08.168237 systemd-networkd[1361]: calicb7ca39abc9: Gained carrier Jun 25 19:08:08.202651 containerd[1458]: 2024-06-25 19:08:07.916 [INFO][4154] utils.go 100: File /var/lib/calico/mtu does not exist Jun 25 19:08:08.202651 containerd[1458]: 2024-06-25 19:08:07.940 [INFO][4154] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4012--0--0--8--d63f105dc7.novalocal-k8s-coredns--5dd5756b68--wkwjk-eth0 coredns-5dd5756b68- kube-system b8d477a9-5ba0-42a2-8679-d220a4893fb5 772 0 2024-06-25 19:07:31 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4012-0-0-8-d63f105dc7.novalocal coredns-5dd5756b68-wkwjk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calicb7ca39abc9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="ca9b02efa149b18bc47b51f01a615be880e2ac25d76d65658201ec3d88d475a3" Namespace="kube-system" Pod="coredns-5dd5756b68-wkwjk" WorkloadEndpoint="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-coredns--5dd5756b68--wkwjk-" Jun 25 19:08:08.202651 containerd[1458]: 2024-06-25 19:08:07.940 [INFO][4154] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ca9b02efa149b18bc47b51f01a615be880e2ac25d76d65658201ec3d88d475a3" Namespace="kube-system" Pod="coredns-5dd5756b68-wkwjk" WorkloadEndpoint="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-coredns--5dd5756b68--wkwjk-eth0" Jun 25 19:08:08.202651 containerd[1458]: 2024-06-25 19:08:07.991 [INFO][4188] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ca9b02efa149b18bc47b51f01a615be880e2ac25d76d65658201ec3d88d475a3" HandleID="k8s-pod-network.ca9b02efa149b18bc47b51f01a615be880e2ac25d76d65658201ec3d88d475a3" 
Workload="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-coredns--5dd5756b68--wkwjk-eth0" Jun 25 19:08:08.202651 containerd[1458]: 2024-06-25 19:08:08.010 [INFO][4188] ipam_plugin.go 264: Auto assigning IP ContainerID="ca9b02efa149b18bc47b51f01a615be880e2ac25d76d65658201ec3d88d475a3" HandleID="k8s-pod-network.ca9b02efa149b18bc47b51f01a615be880e2ac25d76d65658201ec3d88d475a3" Workload="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-coredns--5dd5756b68--wkwjk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031aae0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4012-0-0-8-d63f105dc7.novalocal", "pod":"coredns-5dd5756b68-wkwjk", "timestamp":"2024-06-25 19:08:07.991724114 +0000 UTC"}, Hostname:"ci-4012-0-0-8-d63f105dc7.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 19:08:08.202651 containerd[1458]: 2024-06-25 19:08:08.010 [INFO][4188] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 19:08:08.202651 containerd[1458]: 2024-06-25 19:08:08.099 [INFO][4188] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 19:08:08.202651 containerd[1458]: 2024-06-25 19:08:08.099 [INFO][4188] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4012-0-0-8-d63f105dc7.novalocal' Jun 25 19:08:08.202651 containerd[1458]: 2024-06-25 19:08:08.109 [INFO][4188] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ca9b02efa149b18bc47b51f01a615be880e2ac25d76d65658201ec3d88d475a3" host="ci-4012-0-0-8-d63f105dc7.novalocal" Jun 25 19:08:08.202651 containerd[1458]: 2024-06-25 19:08:08.116 [INFO][4188] ipam.go 372: Looking up existing affinities for host host="ci-4012-0-0-8-d63f105dc7.novalocal" Jun 25 19:08:08.202651 containerd[1458]: 2024-06-25 19:08:08.123 [INFO][4188] ipam.go 489: Trying affinity for 192.168.85.64/26 host="ci-4012-0-0-8-d63f105dc7.novalocal" Jun 25 19:08:08.202651 containerd[1458]: 2024-06-25 19:08:08.133 [INFO][4188] ipam.go 155: Attempting to load block cidr=192.168.85.64/26 host="ci-4012-0-0-8-d63f105dc7.novalocal" Jun 25 19:08:08.202651 containerd[1458]: 2024-06-25 19:08:08.139 [INFO][4188] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.85.64/26 host="ci-4012-0-0-8-d63f105dc7.novalocal" Jun 25 19:08:08.202651 containerd[1458]: 2024-06-25 19:08:08.139 [INFO][4188] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.85.64/26 handle="k8s-pod-network.ca9b02efa149b18bc47b51f01a615be880e2ac25d76d65658201ec3d88d475a3" host="ci-4012-0-0-8-d63f105dc7.novalocal" Jun 25 19:08:08.202651 containerd[1458]: 2024-06-25 19:08:08.143 [INFO][4188] ipam.go 1685: Creating new handle: k8s-pod-network.ca9b02efa149b18bc47b51f01a615be880e2ac25d76d65658201ec3d88d475a3 Jun 25 19:08:08.202651 containerd[1458]: 2024-06-25 19:08:08.148 [INFO][4188] ipam.go 1203: Writing block in order to claim IPs block=192.168.85.64/26 handle="k8s-pod-network.ca9b02efa149b18bc47b51f01a615be880e2ac25d76d65658201ec3d88d475a3" host="ci-4012-0-0-8-d63f105dc7.novalocal" Jun 25 19:08:08.202651 containerd[1458]: 2024-06-25 19:08:08.157 
[INFO][4188] ipam.go 1216: Successfully claimed IPs: [192.168.85.67/26] block=192.168.85.64/26 handle="k8s-pod-network.ca9b02efa149b18bc47b51f01a615be880e2ac25d76d65658201ec3d88d475a3" host="ci-4012-0-0-8-d63f105dc7.novalocal" Jun 25 19:08:08.202651 containerd[1458]: 2024-06-25 19:08:08.157 [INFO][4188] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.85.67/26] handle="k8s-pod-network.ca9b02efa149b18bc47b51f01a615be880e2ac25d76d65658201ec3d88d475a3" host="ci-4012-0-0-8-d63f105dc7.novalocal" Jun 25 19:08:08.202651 containerd[1458]: 2024-06-25 19:08:08.158 [INFO][4188] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 19:08:08.202651 containerd[1458]: 2024-06-25 19:08:08.158 [INFO][4188] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.85.67/26] IPv6=[] ContainerID="ca9b02efa149b18bc47b51f01a615be880e2ac25d76d65658201ec3d88d475a3" HandleID="k8s-pod-network.ca9b02efa149b18bc47b51f01a615be880e2ac25d76d65658201ec3d88d475a3" Workload="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-coredns--5dd5756b68--wkwjk-eth0" Jun 25 19:08:08.206575 containerd[1458]: 2024-06-25 19:08:08.162 [INFO][4154] k8s.go 386: Populated endpoint ContainerID="ca9b02efa149b18bc47b51f01a615be880e2ac25d76d65658201ec3d88d475a3" Namespace="kube-system" Pod="coredns-5dd5756b68-wkwjk" WorkloadEndpoint="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-coredns--5dd5756b68--wkwjk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012--0--0--8--d63f105dc7.novalocal-k8s-coredns--5dd5756b68--wkwjk-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"b8d477a9-5ba0-42a2-8679-d220a4893fb5", ResourceVersion:"772", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 19, 7, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", 
"projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012-0-0-8-d63f105dc7.novalocal", ContainerID:"", Pod:"coredns-5dd5756b68-wkwjk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.85.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicb7ca39abc9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 19:08:08.206575 containerd[1458]: 2024-06-25 19:08:08.162 [INFO][4154] k8s.go 387: Calico CNI using IPs: [192.168.85.67/32] ContainerID="ca9b02efa149b18bc47b51f01a615be880e2ac25d76d65658201ec3d88d475a3" Namespace="kube-system" Pod="coredns-5dd5756b68-wkwjk" WorkloadEndpoint="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-coredns--5dd5756b68--wkwjk-eth0" Jun 25 19:08:08.206575 containerd[1458]: 2024-06-25 19:08:08.162 [INFO][4154] dataplane_linux.go 68: Setting the host side veth name to calicb7ca39abc9 ContainerID="ca9b02efa149b18bc47b51f01a615be880e2ac25d76d65658201ec3d88d475a3" Namespace="kube-system" Pod="coredns-5dd5756b68-wkwjk" WorkloadEndpoint="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-coredns--5dd5756b68--wkwjk-eth0" Jun 25 19:08:08.206575 containerd[1458]: 2024-06-25 19:08:08.169 
[INFO][4154] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="ca9b02efa149b18bc47b51f01a615be880e2ac25d76d65658201ec3d88d475a3" Namespace="kube-system" Pod="coredns-5dd5756b68-wkwjk" WorkloadEndpoint="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-coredns--5dd5756b68--wkwjk-eth0" Jun 25 19:08:08.206575 containerd[1458]: 2024-06-25 19:08:08.173 [INFO][4154] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ca9b02efa149b18bc47b51f01a615be880e2ac25d76d65658201ec3d88d475a3" Namespace="kube-system" Pod="coredns-5dd5756b68-wkwjk" WorkloadEndpoint="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-coredns--5dd5756b68--wkwjk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012--0--0--8--d63f105dc7.novalocal-k8s-coredns--5dd5756b68--wkwjk-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"b8d477a9-5ba0-42a2-8679-d220a4893fb5", ResourceVersion:"772", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 19, 7, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012-0-0-8-d63f105dc7.novalocal", ContainerID:"ca9b02efa149b18bc47b51f01a615be880e2ac25d76d65658201ec3d88d475a3", Pod:"coredns-5dd5756b68-wkwjk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.85.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"calicb7ca39abc9", MAC:"ee:0d:15:8d:b8:ab", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 19:08:08.206575 containerd[1458]: 2024-06-25 19:08:08.198 [INFO][4154] k8s.go 500: Wrote updated endpoint to datastore ContainerID="ca9b02efa149b18bc47b51f01a615be880e2ac25d76d65658201ec3d88d475a3" Namespace="kube-system" Pod="coredns-5dd5756b68-wkwjk" WorkloadEndpoint="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-coredns--5dd5756b68--wkwjk-eth0" Jun 25 19:08:08.203948 systemd[1]: Started cri-containerd-50dea36c846fadf2031446699ec3963cb85f66bd56ec7f129a950f77a25d444c.scope - libcontainer container 50dea36c846fadf2031446699ec3963cb85f66bd56ec7f129a950f77a25d444c. Jun 25 19:08:08.218607 containerd[1458]: time="2024-06-25T19:08:08.216414880Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 19:08:08.219007 containerd[1458]: time="2024-06-25T19:08:08.218165162Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 19:08:08.219007 containerd[1458]: time="2024-06-25T19:08:08.218280458Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 19:08:08.219007 containerd[1458]: time="2024-06-25T19:08:08.218302610Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 19:08:08.270898 systemd[1]: Started cri-containerd-33dddd0be8dc1f52683b792fb295acbd261309ec15ed07830594ad5c7d73340c.scope - libcontainer container 33dddd0be8dc1f52683b792fb295acbd261309ec15ed07830594ad5c7d73340c. Jun 25 19:08:08.295507 containerd[1458]: time="2024-06-25T19:08:08.295319407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-948b949f9-lffnp,Uid:64e26d3e-5506-4e69-921b-3b06d3154cdc,Namespace:calico-system,Attempt:1,} returns sandbox id \"50dea36c846fadf2031446699ec3963cb85f66bd56ec7f129a950f77a25d444c\"" Jun 25 19:08:08.305631 containerd[1458]: time="2024-06-25T19:08:08.303868880Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 19:08:08.305631 containerd[1458]: time="2024-06-25T19:08:08.305030218Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 19:08:08.305631 containerd[1458]: time="2024-06-25T19:08:08.305110648Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 19:08:08.305631 containerd[1458]: time="2024-06-25T19:08:08.305126348Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 19:08:08.321985 containerd[1458]: time="2024-06-25T19:08:08.321914089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r7zkp,Uid:1276367b-2bea-4184-b5ed-849c23171592,Namespace:calico-system,Attempt:1,} returns sandbox id \"33dddd0be8dc1f52683b792fb295acbd261309ec15ed07830594ad5c7d73340c\"" Jun 25 19:08:08.339666 containerd[1458]: time="2024-06-25T19:08:08.339553147Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Jun 25 19:08:08.355240 systemd[1]: Started cri-containerd-ca9b02efa149b18bc47b51f01a615be880e2ac25d76d65658201ec3d88d475a3.scope - libcontainer container ca9b02efa149b18bc47b51f01a615be880e2ac25d76d65658201ec3d88d475a3. Jun 25 19:08:08.416180 containerd[1458]: time="2024-06-25T19:08:08.416103471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-wkwjk,Uid:b8d477a9-5ba0-42a2-8679-d220a4893fb5,Namespace:kube-system,Attempt:1,} returns sandbox id \"ca9b02efa149b18bc47b51f01a615be880e2ac25d76d65658201ec3d88d475a3\"" Jun 25 19:08:08.422329 containerd[1458]: time="2024-06-25T19:08:08.422024453Z" level=info msg="CreateContainer within sandbox \"ca9b02efa149b18bc47b51f01a615be880e2ac25d76d65658201ec3d88d475a3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 19:08:08.452616 containerd[1458]: time="2024-06-25T19:08:08.452421306Z" level=info msg="CreateContainer within sandbox \"ca9b02efa149b18bc47b51f01a615be880e2ac25d76d65658201ec3d88d475a3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"60bc76307b93092ac7d50d1fb29407222ab69e78437929de6506bdc3710f6aee\"" Jun 25 19:08:08.455308 containerd[1458]: time="2024-06-25T19:08:08.453390894Z" level=info msg="StartContainer for \"60bc76307b93092ac7d50d1fb29407222ab69e78437929de6506bdc3710f6aee\"" Jun 25 19:08:08.492924 systemd[1]: Started cri-containerd-60bc76307b93092ac7d50d1fb29407222ab69e78437929de6506bdc3710f6aee.scope - 
libcontainer container 60bc76307b93092ac7d50d1fb29407222ab69e78437929de6506bdc3710f6aee. Jun 25 19:08:08.538890 containerd[1458]: time="2024-06-25T19:08:08.535383512Z" level=info msg="StartContainer for \"60bc76307b93092ac7d50d1fb29407222ab69e78437929de6506bdc3710f6aee\" returns successfully" Jun 25 19:08:09.237008 systemd-networkd[1361]: cali84283737640: Gained IPv6LL Jun 25 19:08:09.242862 containerd[1458]: time="2024-06-25T19:08:09.242389755Z" level=info msg="StopPodSandbox for \"6c05d4f7f01b27f0430f187198eb9e1fdf67b785fdd709209337cdfcdc79de47\"" Jun 25 19:08:09.252235 kubelet[2569]: I0625 19:08:09.252197 2569 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 19:08:09.320716 kubelet[2569]: I0625 19:08:09.319913 2569 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-wkwjk" podStartSLOduration=38.319870383 podCreationTimestamp="2024-06-25 19:07:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 19:08:08.687023068 +0000 UTC m=+50.734932219" watchObservedRunningTime="2024-06-25 19:08:09.319870383 +0000 UTC m=+51.367779514" Jun 25 19:08:09.393549 containerd[1458]: 2024-06-25 19:08:09.318 [INFO][4434] k8s.go 608: Cleaning up netns ContainerID="6c05d4f7f01b27f0430f187198eb9e1fdf67b785fdd709209337cdfcdc79de47" Jun 25 19:08:09.393549 containerd[1458]: 2024-06-25 19:08:09.319 [INFO][4434] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="6c05d4f7f01b27f0430f187198eb9e1fdf67b785fdd709209337cdfcdc79de47" iface="eth0" netns="/var/run/netns/cni-d06e30db-9594-4d38-a371-1fc529eb7183" Jun 25 19:08:09.393549 containerd[1458]: 2024-06-25 19:08:09.319 [INFO][4434] dataplane_linux.go 541: Entered netns, deleting veth. 
ContainerID="6c05d4f7f01b27f0430f187198eb9e1fdf67b785fdd709209337cdfcdc79de47" iface="eth0" netns="/var/run/netns/cni-d06e30db-9594-4d38-a371-1fc529eb7183" Jun 25 19:08:09.393549 containerd[1458]: 2024-06-25 19:08:09.320 [INFO][4434] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="6c05d4f7f01b27f0430f187198eb9e1fdf67b785fdd709209337cdfcdc79de47" iface="eth0" netns="/var/run/netns/cni-d06e30db-9594-4d38-a371-1fc529eb7183" Jun 25 19:08:09.393549 containerd[1458]: 2024-06-25 19:08:09.320 [INFO][4434] k8s.go 615: Releasing IP address(es) ContainerID="6c05d4f7f01b27f0430f187198eb9e1fdf67b785fdd709209337cdfcdc79de47" Jun 25 19:08:09.393549 containerd[1458]: 2024-06-25 19:08:09.320 [INFO][4434] utils.go 188: Calico CNI releasing IP address ContainerID="6c05d4f7f01b27f0430f187198eb9e1fdf67b785fdd709209337cdfcdc79de47" Jun 25 19:08:09.393549 containerd[1458]: 2024-06-25 19:08:09.370 [INFO][4443] ipam_plugin.go 411: Releasing address using handleID ContainerID="6c05d4f7f01b27f0430f187198eb9e1fdf67b785fdd709209337cdfcdc79de47" HandleID="k8s-pod-network.6c05d4f7f01b27f0430f187198eb9e1fdf67b785fdd709209337cdfcdc79de47" Workload="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-coredns--5dd5756b68--kxsw8-eth0" Jun 25 19:08:09.393549 containerd[1458]: 2024-06-25 19:08:09.371 [INFO][4443] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 19:08:09.393549 containerd[1458]: 2024-06-25 19:08:09.371 [INFO][4443] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 19:08:09.393549 containerd[1458]: 2024-06-25 19:08:09.382 [WARNING][4443] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6c05d4f7f01b27f0430f187198eb9e1fdf67b785fdd709209337cdfcdc79de47" HandleID="k8s-pod-network.6c05d4f7f01b27f0430f187198eb9e1fdf67b785fdd709209337cdfcdc79de47" Workload="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-coredns--5dd5756b68--kxsw8-eth0" Jun 25 19:08:09.393549 containerd[1458]: 2024-06-25 19:08:09.382 [INFO][4443] ipam_plugin.go 439: Releasing address using workloadID ContainerID="6c05d4f7f01b27f0430f187198eb9e1fdf67b785fdd709209337cdfcdc79de47" HandleID="k8s-pod-network.6c05d4f7f01b27f0430f187198eb9e1fdf67b785fdd709209337cdfcdc79de47" Workload="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-coredns--5dd5756b68--kxsw8-eth0" Jun 25 19:08:09.393549 containerd[1458]: 2024-06-25 19:08:09.387 [INFO][4443] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 19:08:09.393549 containerd[1458]: 2024-06-25 19:08:09.392 [INFO][4434] k8s.go 621: Teardown processing complete. ContainerID="6c05d4f7f01b27f0430f187198eb9e1fdf67b785fdd709209337cdfcdc79de47" Jun 25 19:08:09.395957 containerd[1458]: time="2024-06-25T19:08:09.395154442Z" level=info msg="TearDown network for sandbox \"6c05d4f7f01b27f0430f187198eb9e1fdf67b785fdd709209337cdfcdc79de47\" successfully" Jun 25 19:08:09.395957 containerd[1458]: time="2024-06-25T19:08:09.395260781Z" level=info msg="StopPodSandbox for \"6c05d4f7f01b27f0430f187198eb9e1fdf67b785fdd709209337cdfcdc79de47\" returns successfully" Jun 25 19:08:09.399289 systemd[1]: run-netns-cni\x2dd06e30db\x2d9594\x2d4d38\x2da371\x2d1fc529eb7183.mount: Deactivated successfully. 
Jun 25 19:08:09.403164 containerd[1458]: time="2024-06-25T19:08:09.402161943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-kxsw8,Uid:2a923228-560b-43c0-8f94-d47d8f47139d,Namespace:kube-system,Attempt:1,}" Jun 25 19:08:09.556864 systemd-networkd[1361]: cali547a8d41047: Gained IPv6LL Jun 25 19:08:09.663910 systemd-networkd[1361]: calic0b49e8d29d: Link UP Jun 25 19:08:09.664102 systemd-networkd[1361]: calic0b49e8d29d: Gained carrier Jun 25 19:08:09.690267 containerd[1458]: 2024-06-25 19:08:09.500 [INFO][4450] utils.go 100: File /var/lib/calico/mtu does not exist Jun 25 19:08:09.690267 containerd[1458]: 2024-06-25 19:08:09.533 [INFO][4450] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4012--0--0--8--d63f105dc7.novalocal-k8s-coredns--5dd5756b68--kxsw8-eth0 coredns-5dd5756b68- kube-system 2a923228-560b-43c0-8f94-d47d8f47139d 797 0 2024-06-25 19:07:31 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4012-0-0-8-d63f105dc7.novalocal coredns-5dd5756b68-kxsw8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic0b49e8d29d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="75062d478628a3e3fbea57d1c8b36054db38d48a4a3405b60f5650e271bd86fe" Namespace="kube-system" Pod="coredns-5dd5756b68-kxsw8" WorkloadEndpoint="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-coredns--5dd5756b68--kxsw8-" Jun 25 19:08:09.690267 containerd[1458]: 2024-06-25 19:08:09.533 [INFO][4450] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="75062d478628a3e3fbea57d1c8b36054db38d48a4a3405b60f5650e271bd86fe" Namespace="kube-system" Pod="coredns-5dd5756b68-kxsw8" WorkloadEndpoint="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-coredns--5dd5756b68--kxsw8-eth0" Jun 25 19:08:09.690267 containerd[1458]: 2024-06-25 19:08:09.609 [INFO][4486] 
ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="75062d478628a3e3fbea57d1c8b36054db38d48a4a3405b60f5650e271bd86fe" HandleID="k8s-pod-network.75062d478628a3e3fbea57d1c8b36054db38d48a4a3405b60f5650e271bd86fe" Workload="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-coredns--5dd5756b68--kxsw8-eth0" Jun 25 19:08:09.690267 containerd[1458]: 2024-06-25 19:08:09.622 [INFO][4486] ipam_plugin.go 264: Auto assigning IP ContainerID="75062d478628a3e3fbea57d1c8b36054db38d48a4a3405b60f5650e271bd86fe" HandleID="k8s-pod-network.75062d478628a3e3fbea57d1c8b36054db38d48a4a3405b60f5650e271bd86fe" Workload="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-coredns--5dd5756b68--kxsw8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000378740), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4012-0-0-8-d63f105dc7.novalocal", "pod":"coredns-5dd5756b68-kxsw8", "timestamp":"2024-06-25 19:08:09.609679037 +0000 UTC"}, Hostname:"ci-4012-0-0-8-d63f105dc7.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 19:08:09.690267 containerd[1458]: 2024-06-25 19:08:09.623 [INFO][4486] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 19:08:09.690267 containerd[1458]: 2024-06-25 19:08:09.623 [INFO][4486] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 19:08:09.690267 containerd[1458]: 2024-06-25 19:08:09.623 [INFO][4486] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4012-0-0-8-d63f105dc7.novalocal' Jun 25 19:08:09.690267 containerd[1458]: 2024-06-25 19:08:09.625 [INFO][4486] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.75062d478628a3e3fbea57d1c8b36054db38d48a4a3405b60f5650e271bd86fe" host="ci-4012-0-0-8-d63f105dc7.novalocal" Jun 25 19:08:09.690267 containerd[1458]: 2024-06-25 19:08:09.630 [INFO][4486] ipam.go 372: Looking up existing affinities for host host="ci-4012-0-0-8-d63f105dc7.novalocal" Jun 25 19:08:09.690267 containerd[1458]: 2024-06-25 19:08:09.636 [INFO][4486] ipam.go 489: Trying affinity for 192.168.85.64/26 host="ci-4012-0-0-8-d63f105dc7.novalocal" Jun 25 19:08:09.690267 containerd[1458]: 2024-06-25 19:08:09.638 [INFO][4486] ipam.go 155: Attempting to load block cidr=192.168.85.64/26 host="ci-4012-0-0-8-d63f105dc7.novalocal" Jun 25 19:08:09.690267 containerd[1458]: 2024-06-25 19:08:09.642 [INFO][4486] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.85.64/26 host="ci-4012-0-0-8-d63f105dc7.novalocal" Jun 25 19:08:09.690267 containerd[1458]: 2024-06-25 19:08:09.642 [INFO][4486] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.85.64/26 handle="k8s-pod-network.75062d478628a3e3fbea57d1c8b36054db38d48a4a3405b60f5650e271bd86fe" host="ci-4012-0-0-8-d63f105dc7.novalocal" Jun 25 19:08:09.690267 containerd[1458]: 2024-06-25 19:08:09.644 [INFO][4486] ipam.go 1685: Creating new handle: k8s-pod-network.75062d478628a3e3fbea57d1c8b36054db38d48a4a3405b60f5650e271bd86fe Jun 25 19:08:09.690267 containerd[1458]: 2024-06-25 19:08:09.648 [INFO][4486] ipam.go 1203: Writing block in order to claim IPs block=192.168.85.64/26 handle="k8s-pod-network.75062d478628a3e3fbea57d1c8b36054db38d48a4a3405b60f5650e271bd86fe" host="ci-4012-0-0-8-d63f105dc7.novalocal" Jun 25 19:08:09.690267 containerd[1458]: 2024-06-25 19:08:09.655 
[INFO][4486] ipam.go 1216: Successfully claimed IPs: [192.168.85.68/26] block=192.168.85.64/26 handle="k8s-pod-network.75062d478628a3e3fbea57d1c8b36054db38d48a4a3405b60f5650e271bd86fe" host="ci-4012-0-0-8-d63f105dc7.novalocal" Jun 25 19:08:09.690267 containerd[1458]: 2024-06-25 19:08:09.656 [INFO][4486] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.85.68/26] handle="k8s-pod-network.75062d478628a3e3fbea57d1c8b36054db38d48a4a3405b60f5650e271bd86fe" host="ci-4012-0-0-8-d63f105dc7.novalocal" Jun 25 19:08:09.690267 containerd[1458]: 2024-06-25 19:08:09.656 [INFO][4486] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 19:08:09.690267 containerd[1458]: 2024-06-25 19:08:09.656 [INFO][4486] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.85.68/26] IPv6=[] ContainerID="75062d478628a3e3fbea57d1c8b36054db38d48a4a3405b60f5650e271bd86fe" HandleID="k8s-pod-network.75062d478628a3e3fbea57d1c8b36054db38d48a4a3405b60f5650e271bd86fe" Workload="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-coredns--5dd5756b68--kxsw8-eth0" Jun 25 19:08:09.697153 containerd[1458]: 2024-06-25 19:08:09.659 [INFO][4450] k8s.go 386: Populated endpoint ContainerID="75062d478628a3e3fbea57d1c8b36054db38d48a4a3405b60f5650e271bd86fe" Namespace="kube-system" Pod="coredns-5dd5756b68-kxsw8" WorkloadEndpoint="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-coredns--5dd5756b68--kxsw8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012--0--0--8--d63f105dc7.novalocal-k8s-coredns--5dd5756b68--kxsw8-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"2a923228-560b-43c0-8f94-d47d8f47139d", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 19, 7, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", 
"projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012-0-0-8-d63f105dc7.novalocal", ContainerID:"", Pod:"coredns-5dd5756b68-kxsw8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.85.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic0b49e8d29d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 19:08:09.697153 containerd[1458]: 2024-06-25 19:08:09.659 [INFO][4450] k8s.go 387: Calico CNI using IPs: [192.168.85.68/32] ContainerID="75062d478628a3e3fbea57d1c8b36054db38d48a4a3405b60f5650e271bd86fe" Namespace="kube-system" Pod="coredns-5dd5756b68-kxsw8" WorkloadEndpoint="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-coredns--5dd5756b68--kxsw8-eth0" Jun 25 19:08:09.697153 containerd[1458]: 2024-06-25 19:08:09.659 [INFO][4450] dataplane_linux.go 68: Setting the host side veth name to calic0b49e8d29d ContainerID="75062d478628a3e3fbea57d1c8b36054db38d48a4a3405b60f5650e271bd86fe" Namespace="kube-system" Pod="coredns-5dd5756b68-kxsw8" WorkloadEndpoint="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-coredns--5dd5756b68--kxsw8-eth0" Jun 25 19:08:09.697153 containerd[1458]: 2024-06-25 19:08:09.664 
[INFO][4450] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="75062d478628a3e3fbea57d1c8b36054db38d48a4a3405b60f5650e271bd86fe" Namespace="kube-system" Pod="coredns-5dd5756b68-kxsw8" WorkloadEndpoint="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-coredns--5dd5756b68--kxsw8-eth0" Jun 25 19:08:09.697153 containerd[1458]: 2024-06-25 19:08:09.666 [INFO][4450] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="75062d478628a3e3fbea57d1c8b36054db38d48a4a3405b60f5650e271bd86fe" Namespace="kube-system" Pod="coredns-5dd5756b68-kxsw8" WorkloadEndpoint="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-coredns--5dd5756b68--kxsw8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012--0--0--8--d63f105dc7.novalocal-k8s-coredns--5dd5756b68--kxsw8-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"2a923228-560b-43c0-8f94-d47d8f47139d", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 19, 7, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012-0-0-8-d63f105dc7.novalocal", ContainerID:"75062d478628a3e3fbea57d1c8b36054db38d48a4a3405b60f5650e271bd86fe", Pod:"coredns-5dd5756b68-kxsw8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.85.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"calic0b49e8d29d", MAC:"4e:85:80:8c:44:f2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 19:08:09.697153 containerd[1458]: 2024-06-25 19:08:09.681 [INFO][4450] k8s.go 500: Wrote updated endpoint to datastore ContainerID="75062d478628a3e3fbea57d1c8b36054db38d48a4a3405b60f5650e271bd86fe" Namespace="kube-system" Pod="coredns-5dd5756b68-kxsw8" WorkloadEndpoint="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-coredns--5dd5756b68--kxsw8-eth0" Jun 25 19:08:09.750591 containerd[1458]: time="2024-06-25T19:08:09.750230751Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 19:08:09.750983 containerd[1458]: time="2024-06-25T19:08:09.750673682Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 19:08:09.751332 containerd[1458]: time="2024-06-25T19:08:09.750797795Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 19:08:09.751332 containerd[1458]: time="2024-06-25T19:08:09.751182006Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 19:08:09.802104 systemd[1]: Started cri-containerd-75062d478628a3e3fbea57d1c8b36054db38d48a4a3405b60f5650e271bd86fe.scope - libcontainer container 75062d478628a3e3fbea57d1c8b36054db38d48a4a3405b60f5650e271bd86fe. 
Jun 25 19:08:09.890763 containerd[1458]: time="2024-06-25T19:08:09.890628207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-kxsw8,Uid:2a923228-560b-43c0-8f94-d47d8f47139d,Namespace:kube-system,Attempt:1,} returns sandbox id \"75062d478628a3e3fbea57d1c8b36054db38d48a4a3405b60f5650e271bd86fe\"" Jun 25 19:08:09.898533 containerd[1458]: time="2024-06-25T19:08:09.898256732Z" level=info msg="CreateContainer within sandbox \"75062d478628a3e3fbea57d1c8b36054db38d48a4a3405b60f5650e271bd86fe\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 19:08:09.930468 containerd[1458]: time="2024-06-25T19:08:09.930268414Z" level=info msg="CreateContainer within sandbox \"75062d478628a3e3fbea57d1c8b36054db38d48a4a3405b60f5650e271bd86fe\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5d9e3d8720c2f07cd9f8da2bfd158b68bf0e032ba9582085a9b6c8e05655860b\"" Jun 25 19:08:09.932777 containerd[1458]: time="2024-06-25T19:08:09.931401799Z" level=info msg="StartContainer for \"5d9e3d8720c2f07cd9f8da2bfd158b68bf0e032ba9582085a9b6c8e05655860b\"" Jun 25 19:08:09.994145 systemd[1]: Started cri-containerd-5d9e3d8720c2f07cd9f8da2bfd158b68bf0e032ba9582085a9b6c8e05655860b.scope - libcontainer container 5d9e3d8720c2f07cd9f8da2bfd158b68bf0e032ba9582085a9b6c8e05655860b. 
Jun 25 19:08:10.004891 systemd-networkd[1361]: calicb7ca39abc9: Gained IPv6LL Jun 25 19:08:10.032471 containerd[1458]: time="2024-06-25T19:08:10.032025858Z" level=info msg="StartContainer for \"5d9e3d8720c2f07cd9f8da2bfd158b68bf0e032ba9582085a9b6c8e05655860b\" returns successfully" Jun 25 19:08:11.478204 systemd-networkd[1361]: calic0b49e8d29d: Gained IPv6LL Jun 25 19:08:11.703845 kubelet[2569]: I0625 19:08:11.703598 2569 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-kxsw8" podStartSLOduration=40.703556487 podCreationTimestamp="2024-06-25 19:07:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 19:08:10.690741788 +0000 UTC m=+52.738650909" watchObservedRunningTime="2024-06-25 19:08:11.703556487 +0000 UTC m=+53.751465608" Jun 25 19:08:11.958690 kubelet[2569]: I0625 19:08:11.958365 2569 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 19:08:12.271448 containerd[1458]: time="2024-06-25T19:08:12.271291881Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 19:08:12.273389 containerd[1458]: time="2024-06-25T19:08:12.273326056Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=33505793" Jun 25 19:08:12.273856 containerd[1458]: time="2024-06-25T19:08:12.273805235Z" level=info msg="ImageCreate event name:\"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 19:08:12.277819 containerd[1458]: time="2024-06-25T19:08:12.277781421Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Jun 25 19:08:12.279377 containerd[1458]: time="2024-06-25T19:08:12.279337550Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"34953521\" in 3.939740019s" Jun 25 19:08:12.279439 containerd[1458]: time="2024-06-25T19:08:12.279375751Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\"" Jun 25 19:08:12.281310 containerd[1458]: time="2024-06-25T19:08:12.281280302Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Jun 25 19:08:12.322090 containerd[1458]: time="2024-06-25T19:08:12.322044949Z" level=info msg="CreateContainer within sandbox \"50dea36c846fadf2031446699ec3963cb85f66bd56ec7f129a950f77a25d444c\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jun 25 19:08:12.345842 containerd[1458]: time="2024-06-25T19:08:12.345792925Z" level=info msg="CreateContainer within sandbox \"50dea36c846fadf2031446699ec3963cb85f66bd56ec7f129a950f77a25d444c\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"df452c8056b14f7bd291647469ab992ecd3885e562d075d8bf54af791f3d4185\"" Jun 25 19:08:12.348148 containerd[1458]: time="2024-06-25T19:08:12.347142556Z" level=info msg="StartContainer for \"df452c8056b14f7bd291647469ab992ecd3885e562d075d8bf54af791f3d4185\"" Jun 25 19:08:12.413266 systemd[1]: Started cri-containerd-df452c8056b14f7bd291647469ab992ecd3885e562d075d8bf54af791f3d4185.scope - libcontainer container df452c8056b14f7bd291647469ab992ecd3885e562d075d8bf54af791f3d4185. 
Jun 25 19:08:12.491136 containerd[1458]: time="2024-06-25T19:08:12.491084904Z" level=info msg="StartContainer for \"df452c8056b14f7bd291647469ab992ecd3885e562d075d8bf54af791f3d4185\" returns successfully" Jun 25 19:08:12.865444 kubelet[2569]: I0625 19:08:12.863287 2569 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-948b949f9-lffnp" podStartSLOduration=30.903072349 podCreationTimestamp="2024-06-25 19:07:38 +0000 UTC" firstStartedPulling="2024-06-25 19:08:08.319491016 +0000 UTC m=+50.367400147" lastFinishedPulling="2024-06-25 19:08:12.279636851 +0000 UTC m=+54.327545982" observedRunningTime="2024-06-25 19:08:12.862043792 +0000 UTC m=+54.909952913" watchObservedRunningTime="2024-06-25 19:08:12.863218184 +0000 UTC m=+54.911127305" Jun 25 19:08:13.303505 systemd[1]: run-containerd-runc-k8s.io-df452c8056b14f7bd291647469ab992ecd3885e562d075d8bf54af791f3d4185-runc.sCOERX.mount: Deactivated successfully. Jun 25 19:08:14.050822 systemd-networkd[1361]: vxlan.calico: Link UP Jun 25 19:08:14.050830 systemd-networkd[1361]: vxlan.calico: Gained carrier Jun 25 19:08:14.615706 containerd[1458]: time="2024-06-25T19:08:14.615647901Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 19:08:14.616918 containerd[1458]: time="2024-06-25T19:08:14.616684695Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7641062" Jun 25 19:08:14.618138 containerd[1458]: time="2024-06-25T19:08:14.618091363Z" level=info msg="ImageCreate event name:\"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 19:08:14.620882 containerd[1458]: time="2024-06-25T19:08:14.620837232Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 19:08:14.622084 containerd[1458]: time="2024-06-25T19:08:14.622017847Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"9088822\" in 2.340700785s" Jun 25 19:08:14.622084 containerd[1458]: time="2024-06-25T19:08:14.622057661Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\"" Jun 25 19:08:14.625215 containerd[1458]: time="2024-06-25T19:08:14.624977497Z" level=info msg="CreateContainer within sandbox \"33dddd0be8dc1f52683b792fb295acbd261309ec15ed07830594ad5c7d73340c\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jun 25 19:08:14.671918 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3225702137.mount: Deactivated successfully. Jun 25 19:08:14.680223 containerd[1458]: time="2024-06-25T19:08:14.680187264Z" level=info msg="CreateContainer within sandbox \"33dddd0be8dc1f52683b792fb295acbd261309ec15ed07830594ad5c7d73340c\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"84cbec1a3ce10ff38a42503c9d2d174f0c0fd507806015386a9bea43633c1464\"" Jun 25 19:08:14.681536 containerd[1458]: time="2024-06-25T19:08:14.681512329Z" level=info msg="StartContainer for \"84cbec1a3ce10ff38a42503c9d2d174f0c0fd507806015386a9bea43633c1464\"" Jun 25 19:08:14.715898 systemd[1]: Started cri-containerd-84cbec1a3ce10ff38a42503c9d2d174f0c0fd507806015386a9bea43633c1464.scope - libcontainer container 84cbec1a3ce10ff38a42503c9d2d174f0c0fd507806015386a9bea43633c1464. 
Jun 25 19:08:14.756898 containerd[1458]: time="2024-06-25T19:08:14.756789830Z" level=info msg="StartContainer for \"84cbec1a3ce10ff38a42503c9d2d174f0c0fd507806015386a9bea43633c1464\" returns successfully" Jun 25 19:08:14.759362 containerd[1458]: time="2024-06-25T19:08:14.759322801Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Jun 25 19:08:15.957390 systemd-networkd[1361]: vxlan.calico: Gained IPv6LL Jun 25 19:08:16.760365 containerd[1458]: time="2024-06-25T19:08:16.760325605Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 19:08:16.761601 containerd[1458]: time="2024-06-25T19:08:16.761565120Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=10147655" Jun 25 19:08:16.762561 containerd[1458]: time="2024-06-25T19:08:16.762515241Z" level=info msg="ImageCreate event name:\"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 19:08:16.766030 containerd[1458]: time="2024-06-25T19:08:16.765268634Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 19:08:16.766030 containerd[1458]: time="2024-06-25T19:08:16.765906300Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"11595367\" in 2.006548844s" Jun 25 19:08:16.766030 containerd[1458]: 
time="2024-06-25T19:08:16.765935485Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\"" Jun 25 19:08:16.768256 containerd[1458]: time="2024-06-25T19:08:16.768230770Z" level=info msg="CreateContainer within sandbox \"33dddd0be8dc1f52683b792fb295acbd261309ec15ed07830594ad5c7d73340c\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jun 25 19:08:16.801392 containerd[1458]: time="2024-06-25T19:08:16.801356162Z" level=info msg="CreateContainer within sandbox \"33dddd0be8dc1f52683b792fb295acbd261309ec15ed07830594ad5c7d73340c\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"0fd0149d1bf4bf297740ad17cf36527a65d06218cbd7c09fe2e226368efe2b52\"" Jun 25 19:08:16.802513 containerd[1458]: time="2024-06-25T19:08:16.802492373Z" level=info msg="StartContainer for \"0fd0149d1bf4bf297740ad17cf36527a65d06218cbd7c09fe2e226368efe2b52\"" Jun 25 19:08:16.843374 systemd[1]: Started cri-containerd-0fd0149d1bf4bf297740ad17cf36527a65d06218cbd7c09fe2e226368efe2b52.scope - libcontainer container 0fd0149d1bf4bf297740ad17cf36527a65d06218cbd7c09fe2e226368efe2b52. 
Jun 25 19:08:16.878841 containerd[1458]: time="2024-06-25T19:08:16.878801130Z" level=info msg="StartContainer for \"0fd0149d1bf4bf297740ad17cf36527a65d06218cbd7c09fe2e226368efe2b52\" returns successfully" Jun 25 19:08:17.634392 kubelet[2569]: I0625 19:08:17.634347 2569 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jun 25 19:08:17.652033 kubelet[2569]: I0625 19:08:17.651915 2569 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jun 25 19:08:18.344348 containerd[1458]: time="2024-06-25T19:08:18.344277485Z" level=info msg="StopPodSandbox for \"4e5c4a18bda1829fd9a90bce9a6cada7714a2e297fa9879b3f0bcddfa4e3996c\"" Jun 25 19:08:18.445492 containerd[1458]: 2024-06-25 19:08:18.403 [WARNING][4946] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4e5c4a18bda1829fd9a90bce9a6cada7714a2e297fa9879b3f0bcddfa4e3996c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012--0--0--8--d63f105dc7.novalocal-k8s-calico--kube--controllers--948b949f9--lffnp-eth0", GenerateName:"calico-kube-controllers-948b949f9-", Namespace:"calico-system", SelfLink:"", UID:"64e26d3e-5506-4e69-921b-3b06d3154cdc", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 19, 7, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"948b949f9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012-0-0-8-d63f105dc7.novalocal", ContainerID:"50dea36c846fadf2031446699ec3963cb85f66bd56ec7f129a950f77a25d444c", Pod:"calico-kube-controllers-948b949f9-lffnp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.85.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali547a8d41047", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 19:08:18.445492 containerd[1458]: 2024-06-25 19:08:18.403 [INFO][4946] k8s.go 608: Cleaning up netns ContainerID="4e5c4a18bda1829fd9a90bce9a6cada7714a2e297fa9879b3f0bcddfa4e3996c" Jun 25 19:08:18.445492 containerd[1458]: 2024-06-25 19:08:18.403 [INFO][4946] dataplane_linux.go 526: CleanUpNamespace called with 
no netns name, ignoring. ContainerID="4e5c4a18bda1829fd9a90bce9a6cada7714a2e297fa9879b3f0bcddfa4e3996c" iface="eth0" netns="" Jun 25 19:08:18.445492 containerd[1458]: 2024-06-25 19:08:18.403 [INFO][4946] k8s.go 615: Releasing IP address(es) ContainerID="4e5c4a18bda1829fd9a90bce9a6cada7714a2e297fa9879b3f0bcddfa4e3996c" Jun 25 19:08:18.445492 containerd[1458]: 2024-06-25 19:08:18.403 [INFO][4946] utils.go 188: Calico CNI releasing IP address ContainerID="4e5c4a18bda1829fd9a90bce9a6cada7714a2e297fa9879b3f0bcddfa4e3996c" Jun 25 19:08:18.445492 containerd[1458]: 2024-06-25 19:08:18.432 [INFO][4952] ipam_plugin.go 411: Releasing address using handleID ContainerID="4e5c4a18bda1829fd9a90bce9a6cada7714a2e297fa9879b3f0bcddfa4e3996c" HandleID="k8s-pod-network.4e5c4a18bda1829fd9a90bce9a6cada7714a2e297fa9879b3f0bcddfa4e3996c" Workload="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-calico--kube--controllers--948b949f9--lffnp-eth0" Jun 25 19:08:18.445492 containerd[1458]: 2024-06-25 19:08:18.432 [INFO][4952] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 19:08:18.445492 containerd[1458]: 2024-06-25 19:08:18.432 [INFO][4952] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 19:08:18.445492 containerd[1458]: 2024-06-25 19:08:18.440 [WARNING][4952] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4e5c4a18bda1829fd9a90bce9a6cada7714a2e297fa9879b3f0bcddfa4e3996c" HandleID="k8s-pod-network.4e5c4a18bda1829fd9a90bce9a6cada7714a2e297fa9879b3f0bcddfa4e3996c" Workload="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-calico--kube--controllers--948b949f9--lffnp-eth0" Jun 25 19:08:18.445492 containerd[1458]: 2024-06-25 19:08:18.440 [INFO][4952] ipam_plugin.go 439: Releasing address using workloadID ContainerID="4e5c4a18bda1829fd9a90bce9a6cada7714a2e297fa9879b3f0bcddfa4e3996c" HandleID="k8s-pod-network.4e5c4a18bda1829fd9a90bce9a6cada7714a2e297fa9879b3f0bcddfa4e3996c" Workload="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-calico--kube--controllers--948b949f9--lffnp-eth0" Jun 25 19:08:18.445492 containerd[1458]: 2024-06-25 19:08:18.442 [INFO][4952] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 19:08:18.445492 containerd[1458]: 2024-06-25 19:08:18.443 [INFO][4946] k8s.go 621: Teardown processing complete. ContainerID="4e5c4a18bda1829fd9a90bce9a6cada7714a2e297fa9879b3f0bcddfa4e3996c" Jun 25 19:08:18.445492 containerd[1458]: time="2024-06-25T19:08:18.445477653Z" level=info msg="TearDown network for sandbox \"4e5c4a18bda1829fd9a90bce9a6cada7714a2e297fa9879b3f0bcddfa4e3996c\" successfully" Jun 25 19:08:18.446041 containerd[1458]: time="2024-06-25T19:08:18.445504414Z" level=info msg="StopPodSandbox for \"4e5c4a18bda1829fd9a90bce9a6cada7714a2e297fa9879b3f0bcddfa4e3996c\" returns successfully" Jun 25 19:08:18.451699 containerd[1458]: time="2024-06-25T19:08:18.451197069Z" level=info msg="RemovePodSandbox for \"4e5c4a18bda1829fd9a90bce9a6cada7714a2e297fa9879b3f0bcddfa4e3996c\"" Jun 25 19:08:18.455285 containerd[1458]: time="2024-06-25T19:08:18.455259769Z" level=info msg="Forcibly stopping sandbox \"4e5c4a18bda1829fd9a90bce9a6cada7714a2e297fa9879b3f0bcddfa4e3996c\"" Jun 25 19:08:18.568712 containerd[1458]: 2024-06-25 19:08:18.533 [WARNING][4970] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4e5c4a18bda1829fd9a90bce9a6cada7714a2e297fa9879b3f0bcddfa4e3996c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012--0--0--8--d63f105dc7.novalocal-k8s-calico--kube--controllers--948b949f9--lffnp-eth0", GenerateName:"calico-kube-controllers-948b949f9-", Namespace:"calico-system", SelfLink:"", UID:"64e26d3e-5506-4e69-921b-3b06d3154cdc", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 19, 7, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"948b949f9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012-0-0-8-d63f105dc7.novalocal", ContainerID:"50dea36c846fadf2031446699ec3963cb85f66bd56ec7f129a950f77a25d444c", Pod:"calico-kube-controllers-948b949f9-lffnp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.85.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali547a8d41047", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 19:08:18.568712 containerd[1458]: 2024-06-25 19:08:18.533 [INFO][4970] k8s.go 608: Cleaning up netns ContainerID="4e5c4a18bda1829fd9a90bce9a6cada7714a2e297fa9879b3f0bcddfa4e3996c" Jun 25 19:08:18.568712 containerd[1458]: 2024-06-25 19:08:18.533 [INFO][4970] dataplane_linux.go 526: CleanUpNamespace called with 
no netns name, ignoring. ContainerID="4e5c4a18bda1829fd9a90bce9a6cada7714a2e297fa9879b3f0bcddfa4e3996c" iface="eth0" netns="" Jun 25 19:08:18.568712 containerd[1458]: 2024-06-25 19:08:18.533 [INFO][4970] k8s.go 615: Releasing IP address(es) ContainerID="4e5c4a18bda1829fd9a90bce9a6cada7714a2e297fa9879b3f0bcddfa4e3996c" Jun 25 19:08:18.568712 containerd[1458]: 2024-06-25 19:08:18.533 [INFO][4970] utils.go 188: Calico CNI releasing IP address ContainerID="4e5c4a18bda1829fd9a90bce9a6cada7714a2e297fa9879b3f0bcddfa4e3996c" Jun 25 19:08:18.568712 containerd[1458]: 2024-06-25 19:08:18.555 [INFO][4976] ipam_plugin.go 411: Releasing address using handleID ContainerID="4e5c4a18bda1829fd9a90bce9a6cada7714a2e297fa9879b3f0bcddfa4e3996c" HandleID="k8s-pod-network.4e5c4a18bda1829fd9a90bce9a6cada7714a2e297fa9879b3f0bcddfa4e3996c" Workload="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-calico--kube--controllers--948b949f9--lffnp-eth0" Jun 25 19:08:18.568712 containerd[1458]: 2024-06-25 19:08:18.555 [INFO][4976] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 19:08:18.568712 containerd[1458]: 2024-06-25 19:08:18.555 [INFO][4976] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 19:08:18.568712 containerd[1458]: 2024-06-25 19:08:18.564 [WARNING][4976] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4e5c4a18bda1829fd9a90bce9a6cada7714a2e297fa9879b3f0bcddfa4e3996c" HandleID="k8s-pod-network.4e5c4a18bda1829fd9a90bce9a6cada7714a2e297fa9879b3f0bcddfa4e3996c" Workload="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-calico--kube--controllers--948b949f9--lffnp-eth0" Jun 25 19:08:18.568712 containerd[1458]: 2024-06-25 19:08:18.564 [INFO][4976] ipam_plugin.go 439: Releasing address using workloadID ContainerID="4e5c4a18bda1829fd9a90bce9a6cada7714a2e297fa9879b3f0bcddfa4e3996c" HandleID="k8s-pod-network.4e5c4a18bda1829fd9a90bce9a6cada7714a2e297fa9879b3f0bcddfa4e3996c" Workload="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-calico--kube--controllers--948b949f9--lffnp-eth0" Jun 25 19:08:18.568712 containerd[1458]: 2024-06-25 19:08:18.565 [INFO][4976] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 19:08:18.568712 containerd[1458]: 2024-06-25 19:08:18.567 [INFO][4970] k8s.go 621: Teardown processing complete. ContainerID="4e5c4a18bda1829fd9a90bce9a6cada7714a2e297fa9879b3f0bcddfa4e3996c" Jun 25 19:08:18.569728 containerd[1458]: time="2024-06-25T19:08:18.568684523Z" level=info msg="TearDown network for sandbox \"4e5c4a18bda1829fd9a90bce9a6cada7714a2e297fa9879b3f0bcddfa4e3996c\" successfully" Jun 25 19:08:18.588579 containerd[1458]: time="2024-06-25T19:08:18.588376102Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4e5c4a18bda1829fd9a90bce9a6cada7714a2e297fa9879b3f0bcddfa4e3996c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jun 25 19:08:18.588579 containerd[1458]: time="2024-06-25T19:08:18.588467053Z" level=info msg="RemovePodSandbox \"4e5c4a18bda1829fd9a90bce9a6cada7714a2e297fa9879b3f0bcddfa4e3996c\" returns successfully" Jun 25 19:08:18.589385 containerd[1458]: time="2024-06-25T19:08:18.589019269Z" level=info msg="StopPodSandbox for \"8e7bd251aa5fd8f9fbf8e355e4a1dc2e66c2015d26fae3df75679aff56de64ba\"" Jun 25 19:08:18.589385 containerd[1458]: time="2024-06-25T19:08:18.589118825Z" level=info msg="TearDown network for sandbox \"8e7bd251aa5fd8f9fbf8e355e4a1dc2e66c2015d26fae3df75679aff56de64ba\" successfully" Jun 25 19:08:18.589385 containerd[1458]: time="2024-06-25T19:08:18.589131930Z" level=info msg="StopPodSandbox for \"8e7bd251aa5fd8f9fbf8e355e4a1dc2e66c2015d26fae3df75679aff56de64ba\" returns successfully" Jun 25 19:08:18.589677 containerd[1458]: time="2024-06-25T19:08:18.589640193Z" level=info msg="RemovePodSandbox for \"8e7bd251aa5fd8f9fbf8e355e4a1dc2e66c2015d26fae3df75679aff56de64ba\"" Jun 25 19:08:18.600787 containerd[1458]: time="2024-06-25T19:08:18.589667515Z" level=info msg="Forcibly stopping sandbox \"8e7bd251aa5fd8f9fbf8e355e4a1dc2e66c2015d26fae3df75679aff56de64ba\"" Jun 25 19:08:18.600787 containerd[1458]: time="2024-06-25T19:08:18.599917987Z" level=info msg="TearDown network for sandbox \"8e7bd251aa5fd8f9fbf8e355e4a1dc2e66c2015d26fae3df75679aff56de64ba\" successfully" Jun 25 19:08:18.608438 containerd[1458]: time="2024-06-25T19:08:18.608275942Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8e7bd251aa5fd8f9fbf8e355e4a1dc2e66c2015d26fae3df75679aff56de64ba\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jun 25 19:08:18.608438 containerd[1458]: time="2024-06-25T19:08:18.608348668Z" level=info msg="RemovePodSandbox \"8e7bd251aa5fd8f9fbf8e355e4a1dc2e66c2015d26fae3df75679aff56de64ba\" returns successfully" Jun 25 19:08:18.609207 containerd[1458]: time="2024-06-25T19:08:18.608969202Z" level=info msg="StopPodSandbox for \"e47b070cb38fa78d406e523889a4b2713f6ccbe63fd4df04b2ffa250c8dd8a2b\"" Jun 25 19:08:18.698813 containerd[1458]: 2024-06-25 19:08:18.656 [WARNING][4995] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e47b070cb38fa78d406e523889a4b2713f6ccbe63fd4df04b2ffa250c8dd8a2b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012--0--0--8--d63f105dc7.novalocal-k8s-coredns--5dd5756b68--wkwjk-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"b8d477a9-5ba0-42a2-8679-d220a4893fb5", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 19, 7, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012-0-0-8-d63f105dc7.novalocal", ContainerID:"ca9b02efa149b18bc47b51f01a615be880e2ac25d76d65658201ec3d88d475a3", Pod:"coredns-5dd5756b68-wkwjk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.85.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicb7ca39abc9", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 19:08:18.698813 containerd[1458]: 2024-06-25 19:08:18.656 [INFO][4995] k8s.go 608: Cleaning up netns ContainerID="e47b070cb38fa78d406e523889a4b2713f6ccbe63fd4df04b2ffa250c8dd8a2b" Jun 25 19:08:18.698813 containerd[1458]: 2024-06-25 19:08:18.656 [INFO][4995] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="e47b070cb38fa78d406e523889a4b2713f6ccbe63fd4df04b2ffa250c8dd8a2b" iface="eth0" netns="" Jun 25 19:08:18.698813 containerd[1458]: 2024-06-25 19:08:18.656 [INFO][4995] k8s.go 615: Releasing IP address(es) ContainerID="e47b070cb38fa78d406e523889a4b2713f6ccbe63fd4df04b2ffa250c8dd8a2b" Jun 25 19:08:18.698813 containerd[1458]: 2024-06-25 19:08:18.656 [INFO][4995] utils.go 188: Calico CNI releasing IP address ContainerID="e47b070cb38fa78d406e523889a4b2713f6ccbe63fd4df04b2ffa250c8dd8a2b" Jun 25 19:08:18.698813 containerd[1458]: 2024-06-25 19:08:18.681 [INFO][5003] ipam_plugin.go 411: Releasing address using handleID ContainerID="e47b070cb38fa78d406e523889a4b2713f6ccbe63fd4df04b2ffa250c8dd8a2b" HandleID="k8s-pod-network.e47b070cb38fa78d406e523889a4b2713f6ccbe63fd4df04b2ffa250c8dd8a2b" Workload="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-coredns--5dd5756b68--wkwjk-eth0" Jun 25 19:08:18.698813 containerd[1458]: 2024-06-25 19:08:18.681 [INFO][5003] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 19:08:18.698813 containerd[1458]: 2024-06-25 19:08:18.681 [INFO][5003] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 19:08:18.698813 containerd[1458]: 2024-06-25 19:08:18.693 [WARNING][5003] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="e47b070cb38fa78d406e523889a4b2713f6ccbe63fd4df04b2ffa250c8dd8a2b" HandleID="k8s-pod-network.e47b070cb38fa78d406e523889a4b2713f6ccbe63fd4df04b2ffa250c8dd8a2b" Workload="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-coredns--5dd5756b68--wkwjk-eth0" Jun 25 19:08:18.698813 containerd[1458]: 2024-06-25 19:08:18.693 [INFO][5003] ipam_plugin.go 439: Releasing address using workloadID ContainerID="e47b070cb38fa78d406e523889a4b2713f6ccbe63fd4df04b2ffa250c8dd8a2b" HandleID="k8s-pod-network.e47b070cb38fa78d406e523889a4b2713f6ccbe63fd4df04b2ffa250c8dd8a2b" Workload="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-coredns--5dd5756b68--wkwjk-eth0" Jun 25 19:08:18.698813 containerd[1458]: 2024-06-25 19:08:18.695 [INFO][5003] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 19:08:18.698813 containerd[1458]: 2024-06-25 19:08:18.697 [INFO][4995] k8s.go 621: Teardown processing complete. 
ContainerID="e47b070cb38fa78d406e523889a4b2713f6ccbe63fd4df04b2ffa250c8dd8a2b" Jun 25 19:08:18.698813 containerd[1458]: time="2024-06-25T19:08:18.698552205Z" level=info msg="TearDown network for sandbox \"e47b070cb38fa78d406e523889a4b2713f6ccbe63fd4df04b2ffa250c8dd8a2b\" successfully" Jun 25 19:08:18.698813 containerd[1458]: time="2024-06-25T19:08:18.698575639Z" level=info msg="StopPodSandbox for \"e47b070cb38fa78d406e523889a4b2713f6ccbe63fd4df04b2ffa250c8dd8a2b\" returns successfully" Jun 25 19:08:18.700793 containerd[1458]: time="2024-06-25T19:08:18.699994108Z" level=info msg="RemovePodSandbox for \"e47b070cb38fa78d406e523889a4b2713f6ccbe63fd4df04b2ffa250c8dd8a2b\"" Jun 25 19:08:18.700793 containerd[1458]: time="2024-06-25T19:08:18.700026308Z" level=info msg="Forcibly stopping sandbox \"e47b070cb38fa78d406e523889a4b2713f6ccbe63fd4df04b2ffa250c8dd8a2b\"" Jun 25 19:08:18.783677 containerd[1458]: 2024-06-25 19:08:18.748 [WARNING][5021] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e47b070cb38fa78d406e523889a4b2713f6ccbe63fd4df04b2ffa250c8dd8a2b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012--0--0--8--d63f105dc7.novalocal-k8s-coredns--5dd5756b68--wkwjk-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"b8d477a9-5ba0-42a2-8679-d220a4893fb5", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 19, 7, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012-0-0-8-d63f105dc7.novalocal", ContainerID:"ca9b02efa149b18bc47b51f01a615be880e2ac25d76d65658201ec3d88d475a3", Pod:"coredns-5dd5756b68-wkwjk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.85.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicb7ca39abc9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 19:08:18.783677 containerd[1458]: 2024-06-25 19:08:18.748 
[INFO][5021] k8s.go 608: Cleaning up netns ContainerID="e47b070cb38fa78d406e523889a4b2713f6ccbe63fd4df04b2ffa250c8dd8a2b" Jun 25 19:08:18.783677 containerd[1458]: 2024-06-25 19:08:18.748 [INFO][5021] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="e47b070cb38fa78d406e523889a4b2713f6ccbe63fd4df04b2ffa250c8dd8a2b" iface="eth0" netns="" Jun 25 19:08:18.783677 containerd[1458]: 2024-06-25 19:08:18.748 [INFO][5021] k8s.go 615: Releasing IP address(es) ContainerID="e47b070cb38fa78d406e523889a4b2713f6ccbe63fd4df04b2ffa250c8dd8a2b" Jun 25 19:08:18.783677 containerd[1458]: 2024-06-25 19:08:18.748 [INFO][5021] utils.go 188: Calico CNI releasing IP address ContainerID="e47b070cb38fa78d406e523889a4b2713f6ccbe63fd4df04b2ffa250c8dd8a2b" Jun 25 19:08:18.783677 containerd[1458]: 2024-06-25 19:08:18.772 [INFO][5028] ipam_plugin.go 411: Releasing address using handleID ContainerID="e47b070cb38fa78d406e523889a4b2713f6ccbe63fd4df04b2ffa250c8dd8a2b" HandleID="k8s-pod-network.e47b070cb38fa78d406e523889a4b2713f6ccbe63fd4df04b2ffa250c8dd8a2b" Workload="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-coredns--5dd5756b68--wkwjk-eth0" Jun 25 19:08:18.783677 containerd[1458]: 2024-06-25 19:08:18.772 [INFO][5028] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 19:08:18.783677 containerd[1458]: 2024-06-25 19:08:18.772 [INFO][5028] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 19:08:18.783677 containerd[1458]: 2024-06-25 19:08:18.778 [WARNING][5028] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e47b070cb38fa78d406e523889a4b2713f6ccbe63fd4df04b2ffa250c8dd8a2b" HandleID="k8s-pod-network.e47b070cb38fa78d406e523889a4b2713f6ccbe63fd4df04b2ffa250c8dd8a2b" Workload="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-coredns--5dd5756b68--wkwjk-eth0" Jun 25 19:08:18.783677 containerd[1458]: 2024-06-25 19:08:18.778 [INFO][5028] ipam_plugin.go 439: Releasing address using workloadID ContainerID="e47b070cb38fa78d406e523889a4b2713f6ccbe63fd4df04b2ffa250c8dd8a2b" HandleID="k8s-pod-network.e47b070cb38fa78d406e523889a4b2713f6ccbe63fd4df04b2ffa250c8dd8a2b" Workload="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-coredns--5dd5756b68--wkwjk-eth0" Jun 25 19:08:18.783677 containerd[1458]: 2024-06-25 19:08:18.780 [INFO][5028] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 19:08:18.783677 containerd[1458]: 2024-06-25 19:08:18.782 [INFO][5021] k8s.go 621: Teardown processing complete. ContainerID="e47b070cb38fa78d406e523889a4b2713f6ccbe63fd4df04b2ffa250c8dd8a2b" Jun 25 19:08:18.784208 containerd[1458]: time="2024-06-25T19:08:18.783796960Z" level=info msg="TearDown network for sandbox \"e47b070cb38fa78d406e523889a4b2713f6ccbe63fd4df04b2ffa250c8dd8a2b\" successfully" Jun 25 19:08:18.787884 containerd[1458]: time="2024-06-25T19:08:18.787854781Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e47b070cb38fa78d406e523889a4b2713f6ccbe63fd4df04b2ffa250c8dd8a2b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jun 25 19:08:18.788017 containerd[1458]: time="2024-06-25T19:08:18.787915735Z" level=info msg="RemovePodSandbox \"e47b070cb38fa78d406e523889a4b2713f6ccbe63fd4df04b2ffa250c8dd8a2b\" returns successfully" Jun 25 19:08:18.788400 containerd[1458]: time="2024-06-25T19:08:18.788325393Z" level=info msg="StopPodSandbox for \"c8fd43baa4dcadd4fc2ab4620969d8fdf98e7e6c4d31ea84d768fa2587290eb4\"" Jun 25 19:08:18.788464 containerd[1458]: time="2024-06-25T19:08:18.788430270Z" level=info msg="TearDown network for sandbox \"c8fd43baa4dcadd4fc2ab4620969d8fdf98e7e6c4d31ea84d768fa2587290eb4\" successfully" Jun 25 19:08:18.788464 containerd[1458]: time="2024-06-25T19:08:18.788445749Z" level=info msg="StopPodSandbox for \"c8fd43baa4dcadd4fc2ab4620969d8fdf98e7e6c4d31ea84d768fa2587290eb4\" returns successfully" Jun 25 19:08:18.789774 containerd[1458]: time="2024-06-25T19:08:18.788996252Z" level=info msg="RemovePodSandbox for \"c8fd43baa4dcadd4fc2ab4620969d8fdf98e7e6c4d31ea84d768fa2587290eb4\"" Jun 25 19:08:18.789774 containerd[1458]: time="2024-06-25T19:08:18.789029865Z" level=info msg="Forcibly stopping sandbox \"c8fd43baa4dcadd4fc2ab4620969d8fdf98e7e6c4d31ea84d768fa2587290eb4\"" Jun 25 19:08:18.789774 containerd[1458]: time="2024-06-25T19:08:18.789110406Z" level=info msg="TearDown network for sandbox \"c8fd43baa4dcadd4fc2ab4620969d8fdf98e7e6c4d31ea84d768fa2587290eb4\" successfully" Jun 25 19:08:18.796824 containerd[1458]: time="2024-06-25T19:08:18.796527215Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c8fd43baa4dcadd4fc2ab4620969d8fdf98e7e6c4d31ea84d768fa2587290eb4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jun 25 19:08:18.796824 containerd[1458]: time="2024-06-25T19:08:18.796590353Z" level=info msg="RemovePodSandbox \"c8fd43baa4dcadd4fc2ab4620969d8fdf98e7e6c4d31ea84d768fa2587290eb4\" returns successfully" Jun 25 19:08:18.796929 containerd[1458]: time="2024-06-25T19:08:18.796900375Z" level=info msg="StopPodSandbox for \"6c05d4f7f01b27f0430f187198eb9e1fdf67b785fdd709209337cdfcdc79de47\"" Jun 25 19:08:18.872172 containerd[1458]: 2024-06-25 19:08:18.833 [WARNING][5046] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6c05d4f7f01b27f0430f187198eb9e1fdf67b785fdd709209337cdfcdc79de47" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012--0--0--8--d63f105dc7.novalocal-k8s-coredns--5dd5756b68--kxsw8-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"2a923228-560b-43c0-8f94-d47d8f47139d", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 19, 7, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012-0-0-8-d63f105dc7.novalocal", ContainerID:"75062d478628a3e3fbea57d1c8b36054db38d48a4a3405b60f5650e271bd86fe", Pod:"coredns-5dd5756b68-kxsw8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.85.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic0b49e8d29d", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 19:08:18.872172 containerd[1458]: 2024-06-25 19:08:18.834 [INFO][5046] k8s.go 608: Cleaning up netns ContainerID="6c05d4f7f01b27f0430f187198eb9e1fdf67b785fdd709209337cdfcdc79de47" Jun 25 19:08:18.872172 containerd[1458]: 2024-06-25 19:08:18.834 [INFO][5046] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="6c05d4f7f01b27f0430f187198eb9e1fdf67b785fdd709209337cdfcdc79de47" iface="eth0" netns="" Jun 25 19:08:18.872172 containerd[1458]: 2024-06-25 19:08:18.834 [INFO][5046] k8s.go 615: Releasing IP address(es) ContainerID="6c05d4f7f01b27f0430f187198eb9e1fdf67b785fdd709209337cdfcdc79de47" Jun 25 19:08:18.872172 containerd[1458]: 2024-06-25 19:08:18.834 [INFO][5046] utils.go 188: Calico CNI releasing IP address ContainerID="6c05d4f7f01b27f0430f187198eb9e1fdf67b785fdd709209337cdfcdc79de47" Jun 25 19:08:18.872172 containerd[1458]: 2024-06-25 19:08:18.860 [INFO][5052] ipam_plugin.go 411: Releasing address using handleID ContainerID="6c05d4f7f01b27f0430f187198eb9e1fdf67b785fdd709209337cdfcdc79de47" HandleID="k8s-pod-network.6c05d4f7f01b27f0430f187198eb9e1fdf67b785fdd709209337cdfcdc79de47" Workload="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-coredns--5dd5756b68--kxsw8-eth0" Jun 25 19:08:18.872172 containerd[1458]: 2024-06-25 19:08:18.860 [INFO][5052] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 19:08:18.872172 containerd[1458]: 2024-06-25 19:08:18.860 [INFO][5052] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 19:08:18.872172 containerd[1458]: 2024-06-25 19:08:18.867 [WARNING][5052] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="6c05d4f7f01b27f0430f187198eb9e1fdf67b785fdd709209337cdfcdc79de47" HandleID="k8s-pod-network.6c05d4f7f01b27f0430f187198eb9e1fdf67b785fdd709209337cdfcdc79de47" Workload="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-coredns--5dd5756b68--kxsw8-eth0"
Jun 25 19:08:18.872172 containerd[1458]: 2024-06-25 19:08:18.867 [INFO][5052] ipam_plugin.go 439: Releasing address using workloadID ContainerID="6c05d4f7f01b27f0430f187198eb9e1fdf67b785fdd709209337cdfcdc79de47" HandleID="k8s-pod-network.6c05d4f7f01b27f0430f187198eb9e1fdf67b785fdd709209337cdfcdc79de47" Workload="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-coredns--5dd5756b68--kxsw8-eth0"
Jun 25 19:08:18.872172 containerd[1458]: 2024-06-25 19:08:18.869 [INFO][5052] ipam_plugin.go 373: Released host-wide IPAM lock.
Jun 25 19:08:18.872172 containerd[1458]: 2024-06-25 19:08:18.870 [INFO][5046] k8s.go 621: Teardown processing complete. ContainerID="6c05d4f7f01b27f0430f187198eb9e1fdf67b785fdd709209337cdfcdc79de47"
Jun 25 19:08:18.872817 containerd[1458]: time="2024-06-25T19:08:18.872775209Z" level=info msg="TearDown network for sandbox \"6c05d4f7f01b27f0430f187198eb9e1fdf67b785fdd709209337cdfcdc79de47\" successfully"
Jun 25 19:08:18.873422 containerd[1458]: time="2024-06-25T19:08:18.872812208Z" level=info msg="StopPodSandbox for \"6c05d4f7f01b27f0430f187198eb9e1fdf67b785fdd709209337cdfcdc79de47\" returns successfully"
Jun 25 19:08:18.873557 containerd[1458]: time="2024-06-25T19:08:18.873514464Z" level=info msg="RemovePodSandbox for \"6c05d4f7f01b27f0430f187198eb9e1fdf67b785fdd709209337cdfcdc79de47\""
Jun 25 19:08:18.873557 containerd[1458]: time="2024-06-25T19:08:18.873547096Z" level=info msg="Forcibly stopping sandbox \"6c05d4f7f01b27f0430f187198eb9e1fdf67b785fdd709209337cdfcdc79de47\""
Jun 25 19:08:18.956449 containerd[1458]: 2024-06-25 19:08:18.919 [WARNING][5070] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6c05d4f7f01b27f0430f187198eb9e1fdf67b785fdd709209337cdfcdc79de47" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012--0--0--8--d63f105dc7.novalocal-k8s-coredns--5dd5756b68--kxsw8-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"2a923228-560b-43c0-8f94-d47d8f47139d", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 19, 7, 31, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012-0-0-8-d63f105dc7.novalocal", ContainerID:"75062d478628a3e3fbea57d1c8b36054db38d48a4a3405b60f5650e271bd86fe", Pod:"coredns-5dd5756b68-kxsw8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.85.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic0b49e8d29d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jun 25 19:08:18.956449 containerd[1458]: 2024-06-25 19:08:18.919 [INFO][5070] k8s.go 608: Cleaning up netns ContainerID="6c05d4f7f01b27f0430f187198eb9e1fdf67b785fdd709209337cdfcdc79de47"
Jun 25 19:08:18.956449 containerd[1458]: 2024-06-25 19:08:18.919 [INFO][5070] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="6c05d4f7f01b27f0430f187198eb9e1fdf67b785fdd709209337cdfcdc79de47" iface="eth0" netns=""
Jun 25 19:08:18.956449 containerd[1458]: 2024-06-25 19:08:18.919 [INFO][5070] k8s.go 615: Releasing IP address(es) ContainerID="6c05d4f7f01b27f0430f187198eb9e1fdf67b785fdd709209337cdfcdc79de47"
Jun 25 19:08:18.956449 containerd[1458]: 2024-06-25 19:08:18.919 [INFO][5070] utils.go 188: Calico CNI releasing IP address ContainerID="6c05d4f7f01b27f0430f187198eb9e1fdf67b785fdd709209337cdfcdc79de47"
Jun 25 19:08:18.956449 containerd[1458]: 2024-06-25 19:08:18.942 [INFO][5077] ipam_plugin.go 411: Releasing address using handleID ContainerID="6c05d4f7f01b27f0430f187198eb9e1fdf67b785fdd709209337cdfcdc79de47" HandleID="k8s-pod-network.6c05d4f7f01b27f0430f187198eb9e1fdf67b785fdd709209337cdfcdc79de47" Workload="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-coredns--5dd5756b68--kxsw8-eth0"
Jun 25 19:08:18.956449 containerd[1458]: 2024-06-25 19:08:18.942 [INFO][5077] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Jun 25 19:08:18.956449 containerd[1458]: 2024-06-25 19:08:18.943 [INFO][5077] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Jun 25 19:08:18.956449 containerd[1458]: 2024-06-25 19:08:18.951 [WARNING][5077] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="6c05d4f7f01b27f0430f187198eb9e1fdf67b785fdd709209337cdfcdc79de47" HandleID="k8s-pod-network.6c05d4f7f01b27f0430f187198eb9e1fdf67b785fdd709209337cdfcdc79de47" Workload="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-coredns--5dd5756b68--kxsw8-eth0"
Jun 25 19:08:18.956449 containerd[1458]: 2024-06-25 19:08:18.951 [INFO][5077] ipam_plugin.go 439: Releasing address using workloadID ContainerID="6c05d4f7f01b27f0430f187198eb9e1fdf67b785fdd709209337cdfcdc79de47" HandleID="k8s-pod-network.6c05d4f7f01b27f0430f187198eb9e1fdf67b785fdd709209337cdfcdc79de47" Workload="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-coredns--5dd5756b68--kxsw8-eth0"
Jun 25 19:08:18.956449 containerd[1458]: 2024-06-25 19:08:18.953 [INFO][5077] ipam_plugin.go 373: Released host-wide IPAM lock.
Jun 25 19:08:18.956449 containerd[1458]: 2024-06-25 19:08:18.954 [INFO][5070] k8s.go 621: Teardown processing complete. ContainerID="6c05d4f7f01b27f0430f187198eb9e1fdf67b785fdd709209337cdfcdc79de47"
Jun 25 19:08:18.956967 containerd[1458]: time="2024-06-25T19:08:18.956466061Z" level=info msg="TearDown network for sandbox \"6c05d4f7f01b27f0430f187198eb9e1fdf67b785fdd709209337cdfcdc79de47\" successfully"
Jun 25 19:08:18.960932 containerd[1458]: time="2024-06-25T19:08:18.960886340Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6c05d4f7f01b27f0430f187198eb9e1fdf67b785fdd709209337cdfcdc79de47\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jun 25 19:08:18.960991 containerd[1458]: time="2024-06-25T19:08:18.960944299Z" level=info msg="RemovePodSandbox \"6c05d4f7f01b27f0430f187198eb9e1fdf67b785fdd709209337cdfcdc79de47\" returns successfully"
Jun 25 19:08:18.961726 containerd[1458]: time="2024-06-25T19:08:18.961387711Z" level=info msg="StopPodSandbox for \"20e15757b4029b2c9f852fcc0c3c9c79c8e7530ee1bb6f567209fe8c2e6cbe9f\""
Jun 25 19:08:19.032345 containerd[1458]: 2024-06-25 19:08:18.997 [WARNING][5095] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="20e15757b4029b2c9f852fcc0c3c9c79c8e7530ee1bb6f567209fe8c2e6cbe9f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012--0--0--8--d63f105dc7.novalocal-k8s-csi--node--driver--r7zkp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1276367b-2bea-4184-b5ed-849c23171592", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 19, 7, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012-0-0-8-d63f105dc7.novalocal", ContainerID:"33dddd0be8dc1f52683b792fb295acbd261309ec15ed07830594ad5c7d73340c", Pod:"csi-node-driver-r7zkp", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.85.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali84283737640", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jun 25 19:08:19.032345 containerd[1458]: 2024-06-25 19:08:18.997 [INFO][5095] k8s.go 608: Cleaning up netns ContainerID="20e15757b4029b2c9f852fcc0c3c9c79c8e7530ee1bb6f567209fe8c2e6cbe9f"
Jun 25 19:08:19.032345 containerd[1458]: 2024-06-25 19:08:18.997 [INFO][5095] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="20e15757b4029b2c9f852fcc0c3c9c79c8e7530ee1bb6f567209fe8c2e6cbe9f" iface="eth0" netns=""
Jun 25 19:08:19.032345 containerd[1458]: 2024-06-25 19:08:18.997 [INFO][5095] k8s.go 615: Releasing IP address(es) ContainerID="20e15757b4029b2c9f852fcc0c3c9c79c8e7530ee1bb6f567209fe8c2e6cbe9f"
Jun 25 19:08:19.032345 containerd[1458]: 2024-06-25 19:08:18.997 [INFO][5095] utils.go 188: Calico CNI releasing IP address ContainerID="20e15757b4029b2c9f852fcc0c3c9c79c8e7530ee1bb6f567209fe8c2e6cbe9f"
Jun 25 19:08:19.032345 containerd[1458]: 2024-06-25 19:08:19.019 [INFO][5102] ipam_plugin.go 411: Releasing address using handleID ContainerID="20e15757b4029b2c9f852fcc0c3c9c79c8e7530ee1bb6f567209fe8c2e6cbe9f" HandleID="k8s-pod-network.20e15757b4029b2c9f852fcc0c3c9c79c8e7530ee1bb6f567209fe8c2e6cbe9f" Workload="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-csi--node--driver--r7zkp-eth0"
Jun 25 19:08:19.032345 containerd[1458]: 2024-06-25 19:08:19.019 [INFO][5102] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Jun 25 19:08:19.032345 containerd[1458]: 2024-06-25 19:08:19.019 [INFO][5102] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Jun 25 19:08:19.032345 containerd[1458]: 2024-06-25 19:08:19.027 [WARNING][5102] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="20e15757b4029b2c9f852fcc0c3c9c79c8e7530ee1bb6f567209fe8c2e6cbe9f" HandleID="k8s-pod-network.20e15757b4029b2c9f852fcc0c3c9c79c8e7530ee1bb6f567209fe8c2e6cbe9f" Workload="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-csi--node--driver--r7zkp-eth0"
Jun 25 19:08:19.032345 containerd[1458]: 2024-06-25 19:08:19.027 [INFO][5102] ipam_plugin.go 439: Releasing address using workloadID ContainerID="20e15757b4029b2c9f852fcc0c3c9c79c8e7530ee1bb6f567209fe8c2e6cbe9f" HandleID="k8s-pod-network.20e15757b4029b2c9f852fcc0c3c9c79c8e7530ee1bb6f567209fe8c2e6cbe9f" Workload="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-csi--node--driver--r7zkp-eth0"
Jun 25 19:08:19.032345 containerd[1458]: 2024-06-25 19:08:19.029 [INFO][5102] ipam_plugin.go 373: Released host-wide IPAM lock.
Jun 25 19:08:19.032345 containerd[1458]: 2024-06-25 19:08:19.030 [INFO][5095] k8s.go 621: Teardown processing complete. ContainerID="20e15757b4029b2c9f852fcc0c3c9c79c8e7530ee1bb6f567209fe8c2e6cbe9f"
Jun 25 19:08:19.034297 containerd[1458]: time="2024-06-25T19:08:19.032385699Z" level=info msg="TearDown network for sandbox \"20e15757b4029b2c9f852fcc0c3c9c79c8e7530ee1bb6f567209fe8c2e6cbe9f\" successfully"
Jun 25 19:08:19.034297 containerd[1458]: time="2024-06-25T19:08:19.032424182Z" level=info msg="StopPodSandbox for \"20e15757b4029b2c9f852fcc0c3c9c79c8e7530ee1bb6f567209fe8c2e6cbe9f\" returns successfully"
Jun 25 19:08:19.034297 containerd[1458]: time="2024-06-25T19:08:19.033337725Z" level=info msg="RemovePodSandbox for \"20e15757b4029b2c9f852fcc0c3c9c79c8e7530ee1bb6f567209fe8c2e6cbe9f\""
Jun 25 19:08:19.034297 containerd[1458]: time="2024-06-25T19:08:19.033369284Z" level=info msg="Forcibly stopping sandbox \"20e15757b4029b2c9f852fcc0c3c9c79c8e7530ee1bb6f567209fe8c2e6cbe9f\""
Jun 25 19:08:19.111129 containerd[1458]: 2024-06-25 19:08:19.075 [WARNING][5120] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="20e15757b4029b2c9f852fcc0c3c9c79c8e7530ee1bb6f567209fe8c2e6cbe9f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012--0--0--8--d63f105dc7.novalocal-k8s-csi--node--driver--r7zkp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1276367b-2bea-4184-b5ed-849c23171592", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 19, 7, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012-0-0-8-d63f105dc7.novalocal", ContainerID:"33dddd0be8dc1f52683b792fb295acbd261309ec15ed07830594ad5c7d73340c", Pod:"csi-node-driver-r7zkp", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.85.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali84283737640", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jun 25 19:08:19.111129 containerd[1458]: 2024-06-25 19:08:19.075 [INFO][5120] k8s.go 608: Cleaning up netns ContainerID="20e15757b4029b2c9f852fcc0c3c9c79c8e7530ee1bb6f567209fe8c2e6cbe9f"
Jun 25 19:08:19.111129 containerd[1458]: 2024-06-25 19:08:19.076 [INFO][5120] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="20e15757b4029b2c9f852fcc0c3c9c79c8e7530ee1bb6f567209fe8c2e6cbe9f" iface="eth0" netns=""
Jun 25 19:08:19.111129 containerd[1458]: 2024-06-25 19:08:19.076 [INFO][5120] k8s.go 615: Releasing IP address(es) ContainerID="20e15757b4029b2c9f852fcc0c3c9c79c8e7530ee1bb6f567209fe8c2e6cbe9f"
Jun 25 19:08:19.111129 containerd[1458]: 2024-06-25 19:08:19.076 [INFO][5120] utils.go 188: Calico CNI releasing IP address ContainerID="20e15757b4029b2c9f852fcc0c3c9c79c8e7530ee1bb6f567209fe8c2e6cbe9f"
Jun 25 19:08:19.111129 containerd[1458]: 2024-06-25 19:08:19.099 [INFO][5126] ipam_plugin.go 411: Releasing address using handleID ContainerID="20e15757b4029b2c9f852fcc0c3c9c79c8e7530ee1bb6f567209fe8c2e6cbe9f" HandleID="k8s-pod-network.20e15757b4029b2c9f852fcc0c3c9c79c8e7530ee1bb6f567209fe8c2e6cbe9f" Workload="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-csi--node--driver--r7zkp-eth0"
Jun 25 19:08:19.111129 containerd[1458]: 2024-06-25 19:08:19.099 [INFO][5126] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Jun 25 19:08:19.111129 containerd[1458]: 2024-06-25 19:08:19.099 [INFO][5126] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Jun 25 19:08:19.111129 containerd[1458]: 2024-06-25 19:08:19.106 [WARNING][5126] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="20e15757b4029b2c9f852fcc0c3c9c79c8e7530ee1bb6f567209fe8c2e6cbe9f" HandleID="k8s-pod-network.20e15757b4029b2c9f852fcc0c3c9c79c8e7530ee1bb6f567209fe8c2e6cbe9f" Workload="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-csi--node--driver--r7zkp-eth0"
Jun 25 19:08:19.111129 containerd[1458]: 2024-06-25 19:08:19.106 [INFO][5126] ipam_plugin.go 439: Releasing address using workloadID ContainerID="20e15757b4029b2c9f852fcc0c3c9c79c8e7530ee1bb6f567209fe8c2e6cbe9f" HandleID="k8s-pod-network.20e15757b4029b2c9f852fcc0c3c9c79c8e7530ee1bb6f567209fe8c2e6cbe9f" Workload="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-csi--node--driver--r7zkp-eth0"
Jun 25 19:08:19.111129 containerd[1458]: 2024-06-25 19:08:19.108 [INFO][5126] ipam_plugin.go 373: Released host-wide IPAM lock.
Jun 25 19:08:19.111129 containerd[1458]: 2024-06-25 19:08:19.109 [INFO][5120] k8s.go 621: Teardown processing complete. ContainerID="20e15757b4029b2c9f852fcc0c3c9c79c8e7530ee1bb6f567209fe8c2e6cbe9f"
Jun 25 19:08:19.111812 containerd[1458]: time="2024-06-25T19:08:19.111178146Z" level=info msg="TearDown network for sandbox \"20e15757b4029b2c9f852fcc0c3c9c79c8e7530ee1bb6f567209fe8c2e6cbe9f\" successfully"
Jun 25 19:08:19.115044 containerd[1458]: time="2024-06-25T19:08:19.115013910Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"20e15757b4029b2c9f852fcc0c3c9c79c8e7530ee1bb6f567209fe8c2e6cbe9f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jun 25 19:08:19.115122 containerd[1458]: time="2024-06-25T19:08:19.115070545Z" level=info msg="RemovePodSandbox \"20e15757b4029b2c9f852fcc0c3c9c79c8e7530ee1bb6f567209fe8c2e6cbe9f\" returns successfully"
Jun 25 19:08:22.325898 systemd[1]: run-containerd-runc-k8s.io-df452c8056b14f7bd291647469ab992ecd3885e562d075d8bf54af791f3d4185-runc.f6UMMy.mount: Deactivated successfully.
Jun 25 19:08:33.285060 systemd[1]: Started sshd@7-172.24.4.61:22-172.24.4.1:56860.service - OpenSSH per-connection server daemon (172.24.4.1:56860).
Jun 25 19:08:34.877165 sshd[5202]: Accepted publickey for core from 172.24.4.1 port 56860 ssh2: RSA SHA256:otxWgi1QNrVHlA+DL2lID1btX/FnfujF3xA/xUdUjyI
Jun 25 19:08:34.885193 sshd[5202]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 19:08:34.897803 systemd-logind[1434]: New session 10 of user core.
Jun 25 19:08:34.904067 systemd[1]: Started session-10.scope - Session 10 of User core.
Jun 25 19:08:36.289499 sshd[5202]: pam_unix(sshd:session): session closed for user core
Jun 25 19:08:36.302713 systemd[1]: sshd@7-172.24.4.61:22-172.24.4.1:56860.service: Deactivated successfully.
Jun 25 19:08:36.308178 systemd[1]: session-10.scope: Deactivated successfully.
Jun 25 19:08:36.311693 systemd-logind[1434]: Session 10 logged out. Waiting for processes to exit.
Jun 25 19:08:36.313884 systemd-logind[1434]: Removed session 10.
Jun 25 19:08:39.371213 kubelet[2569]: I0625 19:08:39.371174    2569 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-r7zkp" podStartSLOduration=53.930132849 podCreationTimestamp="2024-06-25 19:07:37 +0000 UTC" firstStartedPulling="2024-06-25 19:08:08.325288717 +0000 UTC m=+50.373197839" lastFinishedPulling="2024-06-25 19:08:16.766287325 +0000 UTC m=+58.814196446" observedRunningTime="2024-06-25 19:08:17.918360549 +0000 UTC m=+59.966269720" watchObservedRunningTime="2024-06-25 19:08:39.371131456 +0000 UTC m=+81.419040587"
Jun 25 19:08:41.305198 systemd[1]: Started sshd@8-172.24.4.61:22-172.24.4.1:39452.service - OpenSSH per-connection server daemon (172.24.4.1:39452).
Jun 25 19:08:42.745694 sshd[5252]: Accepted publickey for core from 172.24.4.1 port 39452 ssh2: RSA SHA256:otxWgi1QNrVHlA+DL2lID1btX/FnfujF3xA/xUdUjyI
Jun 25 19:08:42.749180 sshd[5252]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 19:08:42.767523 systemd-logind[1434]: New session 11 of user core.
Jun 25 19:08:42.777337 systemd[1]: Started session-11.scope - Session 11 of User core.
Jun 25 19:08:43.707103 sshd[5252]: pam_unix(sshd:session): session closed for user core
Jun 25 19:08:43.713096 systemd[1]: sshd@8-172.24.4.61:22-172.24.4.1:39452.service: Deactivated successfully.
Jun 25 19:08:43.719241 systemd[1]: session-11.scope: Deactivated successfully.
Jun 25 19:08:43.720185 systemd-logind[1434]: Session 11 logged out. Waiting for processes to exit.
Jun 25 19:08:43.721414 systemd-logind[1434]: Removed session 11.
Jun 25 19:08:48.728215 systemd[1]: Started sshd@9-172.24.4.61:22-172.24.4.1:49380.service - OpenSSH per-connection server daemon (172.24.4.1:49380).
Jun 25 19:08:50.036715 sshd[5271]: Accepted publickey for core from 172.24.4.1 port 49380 ssh2: RSA SHA256:otxWgi1QNrVHlA+DL2lID1btX/FnfujF3xA/xUdUjyI
Jun 25 19:08:50.039257 sshd[5271]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 19:08:50.048886 systemd-logind[1434]: New session 12 of user core.
Jun 25 19:08:50.057020 systemd[1]: Started session-12.scope - Session 12 of User core.
Jun 25 19:08:50.935298 systemd[1]: Started sshd@10-172.24.4.61:22-172.24.4.1:49388.service - OpenSSH per-connection server daemon (172.24.4.1:49388).
Jun 25 19:08:51.079959 sshd[5271]: pam_unix(sshd:session): session closed for user core
Jun 25 19:08:51.178942 systemd[1]: sshd@9-172.24.4.61:22-172.24.4.1:49380.service: Deactivated successfully.
Jun 25 19:08:51.183932 systemd[1]: session-12.scope: Deactivated successfully.
Jun 25 19:08:51.187658 systemd-logind[1434]: Session 12 logged out. Waiting for processes to exit.
Jun 25 19:08:51.191544 systemd-logind[1434]: Removed session 12.
Jun 25 19:08:52.349435 sshd[5283]: Accepted publickey for core from 172.24.4.1 port 49388 ssh2: RSA SHA256:otxWgi1QNrVHlA+DL2lID1btX/FnfujF3xA/xUdUjyI
Jun 25 19:08:52.353852 sshd[5283]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 19:08:52.365255 systemd-logind[1434]: New session 13 of user core.
Jun 25 19:08:52.378411 systemd[1]: Started session-13.scope - Session 13 of User core.
Jun 25 19:08:53.843192 sshd[5283]: pam_unix(sshd:session): session closed for user core
Jun 25 19:08:53.845623 systemd[1]: Started sshd@11-172.24.4.61:22-172.24.4.1:49402.service - OpenSSH per-connection server daemon (172.24.4.1:49402).
Jun 25 19:08:53.930644 systemd[1]: run-containerd-runc-k8s.io-df452c8056b14f7bd291647469ab992ecd3885e562d075d8bf54af791f3d4185-runc.b5BcMT.mount: Deactivated successfully.
Jun 25 19:08:53.933557 systemd[1]: sshd@10-172.24.4.61:22-172.24.4.1:49388.service: Deactivated successfully.
Jun 25 19:08:53.936085 systemd[1]: session-13.scope: Deactivated successfully.
Jun 25 19:08:53.939318 systemd-logind[1434]: Session 13 logged out. Waiting for processes to exit.
Jun 25 19:08:53.945227 systemd-logind[1434]: Removed session 13.
Jun 25 19:08:54.999909 sshd[5294]: Accepted publickey for core from 172.24.4.1 port 49402 ssh2: RSA SHA256:otxWgi1QNrVHlA+DL2lID1btX/FnfujF3xA/xUdUjyI
Jun 25 19:08:55.069360 sshd[5294]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 19:08:55.086560 systemd-logind[1434]: New session 14 of user core.
Jun 25 19:08:55.096898 systemd[1]: Started session-14.scope - Session 14 of User core.
Jun 25 19:08:55.720187 sshd[5294]: pam_unix(sshd:session): session closed for user core
Jun 25 19:08:55.725579 systemd[1]: sshd@11-172.24.4.61:22-172.24.4.1:49402.service: Deactivated successfully.
Jun 25 19:08:55.730637 systemd[1]: session-14.scope: Deactivated successfully.
Jun 25 19:08:55.733140 systemd-logind[1434]: Session 14 logged out. Waiting for processes to exit.
Jun 25 19:08:55.734438 systemd-logind[1434]: Removed session 14.
Jun 25 19:09:00.744247 systemd[1]: Started sshd@12-172.24.4.61:22-172.24.4.1:34304.service - OpenSSH per-connection server daemon (172.24.4.1:34304).
Jun 25 19:09:01.869708 sshd[5346]: Accepted publickey for core from 172.24.4.1 port 34304 ssh2: RSA SHA256:otxWgi1QNrVHlA+DL2lID1btX/FnfujF3xA/xUdUjyI
Jun 25 19:09:01.873066 sshd[5346]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 19:09:01.886979 systemd-logind[1434]: New session 15 of user core.
Jun 25 19:09:01.893036 systemd[1]: Started session-15.scope - Session 15 of User core.
Jun 25 19:09:02.663127 sshd[5346]: pam_unix(sshd:session): session closed for user core
Jun 25 19:09:02.674565 systemd[1]: sshd@12-172.24.4.61:22-172.24.4.1:34304.service: Deactivated successfully.
Jun 25 19:09:02.681484 systemd[1]: session-15.scope: Deactivated successfully.
Jun 25 19:09:02.683698 systemd-logind[1434]: Session 15 logged out. Waiting for processes to exit.
Jun 25 19:09:02.695409 systemd[1]: Started sshd@13-172.24.4.61:22-172.24.4.1:34320.service - OpenSSH per-connection server daemon (172.24.4.1:34320).
Jun 25 19:09:02.698914 systemd-logind[1434]: Removed session 15.
Jun 25 19:09:03.822068 sshd[5361]: Accepted publickey for core from 172.24.4.1 port 34320 ssh2: RSA SHA256:otxWgi1QNrVHlA+DL2lID1btX/FnfujF3xA/xUdUjyI
Jun 25 19:09:03.824680 sshd[5361]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 19:09:03.835038 systemd-logind[1434]: New session 16 of user core.
Jun 25 19:09:03.839999 systemd[1]: Started session-16.scope - Session 16 of User core.
Jun 25 19:09:05.220201 sshd[5361]: pam_unix(sshd:session): session closed for user core
Jun 25 19:09:05.230005 systemd[1]: Started sshd@14-172.24.4.61:22-172.24.4.1:39798.service - OpenSSH per-connection server daemon (172.24.4.1:39798).
Jun 25 19:09:05.236674 systemd[1]: sshd@13-172.24.4.61:22-172.24.4.1:34320.service: Deactivated successfully.
Jun 25 19:09:05.240169 systemd[1]: session-16.scope: Deactivated successfully.
Jun 25 19:09:05.243002 systemd-logind[1434]: Session 16 logged out. Waiting for processes to exit.
Jun 25 19:09:05.247904 systemd-logind[1434]: Removed session 16.
Jun 25 19:09:06.673176 sshd[5370]: Accepted publickey for core from 172.24.4.1 port 39798 ssh2: RSA SHA256:otxWgi1QNrVHlA+DL2lID1btX/FnfujF3xA/xUdUjyI
Jun 25 19:09:06.677559 sshd[5370]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 19:09:06.691455 systemd-logind[1434]: New session 17 of user core.
Jun 25 19:09:06.697667 systemd[1]: Started session-17.scope - Session 17 of User core.
Jun 25 19:09:07.333650 kubelet[2569]: I0625 19:09:07.333140    2569 topology_manager.go:215] "Topology Admit Handler" podUID="323dfadb-f332-4bd6-bbd3-619d392e5916" podNamespace="calico-apiserver" podName="calico-apiserver-6dd65484c7-74dt5"
Jun 25 19:09:07.369263 systemd[1]: Created slice kubepods-besteffort-pod323dfadb_f332_4bd6_bbd3_619d392e5916.slice - libcontainer container kubepods-besteffort-pod323dfadb_f332_4bd6_bbd3_619d392e5916.slice.
Jun 25 19:09:07.493998 kubelet[2569]: I0625 19:09:07.493929    2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkcrk\" (UniqueName: \"kubernetes.io/projected/323dfadb-f332-4bd6-bbd3-619d392e5916-kube-api-access-fkcrk\") pod \"calico-apiserver-6dd65484c7-74dt5\" (UID: \"323dfadb-f332-4bd6-bbd3-619d392e5916\") " pod="calico-apiserver/calico-apiserver-6dd65484c7-74dt5"
Jun 25 19:09:07.505673 kubelet[2569]: I0625 19:09:07.505631    2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/323dfadb-f332-4bd6-bbd3-619d392e5916-calico-apiserver-certs\") pod \"calico-apiserver-6dd65484c7-74dt5\" (UID: \"323dfadb-f332-4bd6-bbd3-619d392e5916\") " pod="calico-apiserver/calico-apiserver-6dd65484c7-74dt5"
Jun 25 19:09:07.678779 containerd[1458]: time="2024-06-25T19:09:07.678621591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dd65484c7-74dt5,Uid:323dfadb-f332-4bd6-bbd3-619d392e5916,Namespace:calico-apiserver,Attempt:0,}"
Jun 25 19:09:07.852355 systemd-networkd[1361]: calibe01f9f359f: Link UP
Jun 25 19:09:07.854990 systemd-networkd[1361]: calibe01f9f359f: Gained carrier
Jun 25 19:09:07.876189 containerd[1458]: 2024-06-25 19:09:07.762 [INFO][5392] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4012--0--0--8--d63f105dc7.novalocal-k8s-calico--apiserver--6dd65484c7--74dt5-eth0 calico-apiserver-6dd65484c7- calico-apiserver 323dfadb-f332-4bd6-bbd3-619d392e5916 1130 0 2024-06-25 19:09:07 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6dd65484c7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4012-0-0-8-d63f105dc7.novalocal calico-apiserver-6dd65484c7-74dt5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calibe01f9f359f [] []}} ContainerID="c76c8bb4aaa202699adbf1a98a55e910e03de70209cef755e0329c7500bdfe73" Namespace="calico-apiserver" Pod="calico-apiserver-6dd65484c7-74dt5" WorkloadEndpoint="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-calico--apiserver--6dd65484c7--74dt5-"
Jun 25 19:09:07.876189 containerd[1458]: 2024-06-25 19:09:07.762 [INFO][5392] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c76c8bb4aaa202699adbf1a98a55e910e03de70209cef755e0329c7500bdfe73" Namespace="calico-apiserver" Pod="calico-apiserver-6dd65484c7-74dt5" WorkloadEndpoint="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-calico--apiserver--6dd65484c7--74dt5-eth0"
Jun 25 19:09:07.876189 containerd[1458]: 2024-06-25 19:09:07.796 [INFO][5402] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c76c8bb4aaa202699adbf1a98a55e910e03de70209cef755e0329c7500bdfe73" HandleID="k8s-pod-network.c76c8bb4aaa202699adbf1a98a55e910e03de70209cef755e0329c7500bdfe73" Workload="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-calico--apiserver--6dd65484c7--74dt5-eth0"
Jun 25 19:09:07.876189 containerd[1458]: 2024-06-25 19:09:07.809 [INFO][5402] ipam_plugin.go 264: Auto assigning IP ContainerID="c76c8bb4aaa202699adbf1a98a55e910e03de70209cef755e0329c7500bdfe73" HandleID="k8s-pod-network.c76c8bb4aaa202699adbf1a98a55e910e03de70209cef755e0329c7500bdfe73" Workload="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-calico--apiserver--6dd65484c7--74dt5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031a2e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4012-0-0-8-d63f105dc7.novalocal", "pod":"calico-apiserver-6dd65484c7-74dt5", "timestamp":"2024-06-25 19:09:07.796689455 +0000 UTC"}, Hostname:"ci-4012-0-0-8-d63f105dc7.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jun 25 19:09:07.876189 containerd[1458]: 2024-06-25 19:09:07.809 [INFO][5402] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Jun 25 19:09:07.876189 containerd[1458]: 2024-06-25 19:09:07.809 [INFO][5402] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Jun 25 19:09:07.876189 containerd[1458]: 2024-06-25 19:09:07.809 [INFO][5402] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4012-0-0-8-d63f105dc7.novalocal'
Jun 25 19:09:07.876189 containerd[1458]: 2024-06-25 19:09:07.811 [INFO][5402] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c76c8bb4aaa202699adbf1a98a55e910e03de70209cef755e0329c7500bdfe73" host="ci-4012-0-0-8-d63f105dc7.novalocal"
Jun 25 19:09:07.876189 containerd[1458]: 2024-06-25 19:09:07.816 [INFO][5402] ipam.go 372: Looking up existing affinities for host host="ci-4012-0-0-8-d63f105dc7.novalocal"
Jun 25 19:09:07.876189 containerd[1458]: 2024-06-25 19:09:07.821 [INFO][5402] ipam.go 489: Trying affinity for 192.168.85.64/26 host="ci-4012-0-0-8-d63f105dc7.novalocal"
Jun 25 19:09:07.876189 containerd[1458]: 2024-06-25 19:09:07.823 [INFO][5402] ipam.go 155: Attempting to load block cidr=192.168.85.64/26 host="ci-4012-0-0-8-d63f105dc7.novalocal"
Jun 25 19:09:07.876189 containerd[1458]: 2024-06-25 19:09:07.826 [INFO][5402] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.85.64/26 host="ci-4012-0-0-8-d63f105dc7.novalocal"
Jun 25 19:09:07.876189 containerd[1458]: 2024-06-25 19:09:07.826 [INFO][5402] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.85.64/26 handle="k8s-pod-network.c76c8bb4aaa202699adbf1a98a55e910e03de70209cef755e0329c7500bdfe73" host="ci-4012-0-0-8-d63f105dc7.novalocal"
Jun 25 19:09:07.876189 containerd[1458]: 2024-06-25 19:09:07.829 [INFO][5402] ipam.go 1685: Creating new handle: k8s-pod-network.c76c8bb4aaa202699adbf1a98a55e910e03de70209cef755e0329c7500bdfe73
Jun 25 19:09:07.876189 containerd[1458]: 2024-06-25 19:09:07.833 [INFO][5402] ipam.go 1203: Writing block in order to claim IPs block=192.168.85.64/26 handle="k8s-pod-network.c76c8bb4aaa202699adbf1a98a55e910e03de70209cef755e0329c7500bdfe73" host="ci-4012-0-0-8-d63f105dc7.novalocal"
Jun 25 19:09:07.876189 containerd[1458]: 2024-06-25 19:09:07.839 [INFO][5402] ipam.go 1216: Successfully claimed IPs: [192.168.85.69/26] block=192.168.85.64/26 handle="k8s-pod-network.c76c8bb4aaa202699adbf1a98a55e910e03de70209cef755e0329c7500bdfe73" host="ci-4012-0-0-8-d63f105dc7.novalocal"
Jun 25 19:09:07.876189 containerd[1458]: 2024-06-25 19:09:07.839 [INFO][5402] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.85.69/26] handle="k8s-pod-network.c76c8bb4aaa202699adbf1a98a55e910e03de70209cef755e0329c7500bdfe73" host="ci-4012-0-0-8-d63f105dc7.novalocal"
Jun 25 19:09:07.876189 containerd[1458]: 2024-06-25 19:09:07.839 [INFO][5402] ipam_plugin.go 373: Released host-wide IPAM lock.
Jun 25 19:09:07.876189 containerd[1458]: 2024-06-25 19:09:07.840 [INFO][5402] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.85.69/26] IPv6=[] ContainerID="c76c8bb4aaa202699adbf1a98a55e910e03de70209cef755e0329c7500bdfe73" HandleID="k8s-pod-network.c76c8bb4aaa202699adbf1a98a55e910e03de70209cef755e0329c7500bdfe73" Workload="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-calico--apiserver--6dd65484c7--74dt5-eth0"
Jun 25 19:09:07.880433 containerd[1458]: 2024-06-25 19:09:07.843 [INFO][5392] k8s.go 386: Populated endpoint ContainerID="c76c8bb4aaa202699adbf1a98a55e910e03de70209cef755e0329c7500bdfe73" Namespace="calico-apiserver" Pod="calico-apiserver-6dd65484c7-74dt5" WorkloadEndpoint="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-calico--apiserver--6dd65484c7--74dt5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012--0--0--8--d63f105dc7.novalocal-k8s-calico--apiserver--6dd65484c7--74dt5-eth0", GenerateName:"calico-apiserver-6dd65484c7-", Namespace:"calico-apiserver", SelfLink:"", UID:"323dfadb-f332-4bd6-bbd3-619d392e5916", ResourceVersion:"1130", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 19, 9, 7, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dd65484c7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012-0-0-8-d63f105dc7.novalocal", ContainerID:"", Pod:"calico-apiserver-6dd65484c7-74dt5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.85.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibe01f9f359f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jun 25 19:09:07.880433 containerd[1458]: 2024-06-25 19:09:07.843 [INFO][5392] k8s.go 387: Calico CNI using IPs: [192.168.85.69/32] ContainerID="c76c8bb4aaa202699adbf1a98a55e910e03de70209cef755e0329c7500bdfe73" Namespace="calico-apiserver" Pod="calico-apiserver-6dd65484c7-74dt5" WorkloadEndpoint="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-calico--apiserver--6dd65484c7--74dt5-eth0"
Jun 25 19:09:07.880433 containerd[1458]: 2024-06-25 19:09:07.843 [INFO][5392] dataplane_linux.go 68: Setting the host side veth name to calibe01f9f359f ContainerID="c76c8bb4aaa202699adbf1a98a55e910e03de70209cef755e0329c7500bdfe73" Namespace="calico-apiserver" Pod="calico-apiserver-6dd65484c7-74dt5"
WorkloadEndpoint="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-calico--apiserver--6dd65484c7--74dt5-eth0" Jun 25 19:09:07.880433 containerd[1458]: 2024-06-25 19:09:07.854 [INFO][5392] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="c76c8bb4aaa202699adbf1a98a55e910e03de70209cef755e0329c7500bdfe73" Namespace="calico-apiserver" Pod="calico-apiserver-6dd65484c7-74dt5" WorkloadEndpoint="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-calico--apiserver--6dd65484c7--74dt5-eth0" Jun 25 19:09:07.880433 containerd[1458]: 2024-06-25 19:09:07.855 [INFO][5392] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c76c8bb4aaa202699adbf1a98a55e910e03de70209cef755e0329c7500bdfe73" Namespace="calico-apiserver" Pod="calico-apiserver-6dd65484c7-74dt5" WorkloadEndpoint="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-calico--apiserver--6dd65484c7--74dt5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012--0--0--8--d63f105dc7.novalocal-k8s-calico--apiserver--6dd65484c7--74dt5-eth0", GenerateName:"calico-apiserver-6dd65484c7-", Namespace:"calico-apiserver", SelfLink:"", UID:"323dfadb-f332-4bd6-bbd3-619d392e5916", ResourceVersion:"1130", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 19, 9, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dd65484c7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012-0-0-8-d63f105dc7.novalocal", 
ContainerID:"c76c8bb4aaa202699adbf1a98a55e910e03de70209cef755e0329c7500bdfe73", Pod:"calico-apiserver-6dd65484c7-74dt5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.85.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibe01f9f359f", MAC:"52:e5:92:2b:38:b5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 19:09:07.880433 containerd[1458]: 2024-06-25 19:09:07.872 [INFO][5392] k8s.go 500: Wrote updated endpoint to datastore ContainerID="c76c8bb4aaa202699adbf1a98a55e910e03de70209cef755e0329c7500bdfe73" Namespace="calico-apiserver" Pod="calico-apiserver-6dd65484c7-74dt5" WorkloadEndpoint="ci--4012--0--0--8--d63f105dc7.novalocal-k8s-calico--apiserver--6dd65484c7--74dt5-eth0" Jun 25 19:09:07.979782 containerd[1458]: time="2024-06-25T19:09:07.977948024Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 19:09:07.979782 containerd[1458]: time="2024-06-25T19:09:07.978013968Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 19:09:07.979782 containerd[1458]: time="2024-06-25T19:09:07.978038374Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 19:09:07.979782 containerd[1458]: time="2024-06-25T19:09:07.978057289Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 19:09:08.021869 systemd[1]: Started cri-containerd-c76c8bb4aaa202699adbf1a98a55e910e03de70209cef755e0329c7500bdfe73.scope - libcontainer container c76c8bb4aaa202699adbf1a98a55e910e03de70209cef755e0329c7500bdfe73. 
Jun 25 19:09:08.132229 containerd[1458]: time="2024-06-25T19:09:08.132186632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dd65484c7-74dt5,Uid:323dfadb-f332-4bd6-bbd3-619d392e5916,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"c76c8bb4aaa202699adbf1a98a55e910e03de70209cef755e0329c7500bdfe73\"" Jun 25 19:09:08.134879 containerd[1458]: time="2024-06-25T19:09:08.134829246Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Jun 25 19:09:08.762012 sshd[5370]: pam_unix(sshd:session): session closed for user core Jun 25 19:09:08.782666 systemd[1]: Started sshd@15-172.24.4.61:22-172.24.4.1:39804.service - OpenSSH per-connection server daemon (172.24.4.1:39804). Jun 25 19:09:08.787393 systemd[1]: sshd@14-172.24.4.61:22-172.24.4.1:39798.service: Deactivated successfully. Jun 25 19:09:08.797479 systemd[1]: session-17.scope: Deactivated successfully. Jun 25 19:09:08.799836 systemd-logind[1434]: Session 17 logged out. Waiting for processes to exit. Jun 25 19:09:08.802628 systemd-logind[1434]: Removed session 17. Jun 25 19:09:09.078149 systemd-networkd[1361]: calibe01f9f359f: Gained IPv6LL Jun 25 19:09:10.395171 sshd[5471]: Accepted publickey for core from 172.24.4.1 port 39804 ssh2: RSA SHA256:otxWgi1QNrVHlA+DL2lID1btX/FnfujF3xA/xUdUjyI Jun 25 19:09:10.398557 sshd[5471]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 19:09:10.405000 systemd-logind[1434]: New session 18 of user core. Jun 25 19:09:10.407883 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jun 25 19:09:12.211751 containerd[1458]: time="2024-06-25T19:09:12.211670723Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 19:09:12.213642 containerd[1458]: time="2024-06-25T19:09:12.212832886Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=40421260" Jun 25 19:09:12.218774 containerd[1458]: time="2024-06-25T19:09:12.217392480Z" level=info msg="ImageCreate event name:\"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 19:09:12.223799 containerd[1458]: time="2024-06-25T19:09:12.222688977Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"41869036\" in 4.08782161s" Jun 25 19:09:12.223799 containerd[1458]: time="2024-06-25T19:09:12.222767444Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\"" Jun 25 19:09:12.226141 containerd[1458]: time="2024-06-25T19:09:12.225829545Z" level=info msg="CreateContainer within sandbox \"c76c8bb4aaa202699adbf1a98a55e910e03de70209cef755e0329c7500bdfe73\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 25 19:09:12.232461 containerd[1458]: time="2024-06-25T19:09:12.221574243Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 19:09:12.258173 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2980545595.mount: Deactivated successfully. Jun 25 19:09:12.265680 containerd[1458]: time="2024-06-25T19:09:12.265631917Z" level=info msg="CreateContainer within sandbox \"c76c8bb4aaa202699adbf1a98a55e910e03de70209cef755e0329c7500bdfe73\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6c3633e14f4322813a0a1457ab32cc7328aadbe974af199f1eeaec5392468462\"" Jun 25 19:09:12.267247 containerd[1458]: time="2024-06-25T19:09:12.266386134Z" level=info msg="StartContainer for \"6c3633e14f4322813a0a1457ab32cc7328aadbe974af199f1eeaec5392468462\"" Jun 25 19:09:12.342865 systemd[1]: Started cri-containerd-6c3633e14f4322813a0a1457ab32cc7328aadbe974af199f1eeaec5392468462.scope - libcontainer container 6c3633e14f4322813a0a1457ab32cc7328aadbe974af199f1eeaec5392468462. Jun 25 19:09:12.422180 containerd[1458]: time="2024-06-25T19:09:12.422146625Z" level=info msg="StartContainer for \"6c3633e14f4322813a0a1457ab32cc7328aadbe974af199f1eeaec5392468462\" returns successfully" Jun 25 19:09:13.194238 sshd[5471]: pam_unix(sshd:session): session closed for user core Jun 25 19:09:13.205367 systemd[1]: sshd@15-172.24.4.61:22-172.24.4.1:39804.service: Deactivated successfully. Jun 25 19:09:13.208708 systemd[1]: session-18.scope: Deactivated successfully. Jun 25 19:09:13.211529 systemd-logind[1434]: Session 18 logged out. Waiting for processes to exit. Jun 25 19:09:13.219434 systemd[1]: Started sshd@16-172.24.4.61:22-172.24.4.1:39808.service - OpenSSH per-connection server daemon (172.24.4.1:39808). Jun 25 19:09:13.224207 systemd-logind[1434]: Removed session 18. 
Jun 25 19:09:13.664830 kubelet[2569]: I0625 19:09:13.664661 2569 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6dd65484c7-74dt5" podStartSLOduration=2.574421639 podCreationTimestamp="2024-06-25 19:09:07 +0000 UTC" firstStartedPulling="2024-06-25 19:09:08.133911913 +0000 UTC m=+110.181821044" lastFinishedPulling="2024-06-25 19:09:12.224076723 +0000 UTC m=+114.271985854" observedRunningTime="2024-06-25 19:09:13.122182175 +0000 UTC m=+115.170091296" watchObservedRunningTime="2024-06-25 19:09:13.664586449 +0000 UTC m=+115.712495570" Jun 25 19:09:14.630054 sshd[5557]: Accepted publickey for core from 172.24.4.1 port 39808 ssh2: RSA SHA256:otxWgi1QNrVHlA+DL2lID1btX/FnfujF3xA/xUdUjyI Jun 25 19:09:14.645947 sshd[5557]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 19:09:14.665825 systemd-logind[1434]: New session 19 of user core. Jun 25 19:09:14.671212 systemd[1]: Started session-19.scope - Session 19 of User core. Jun 25 19:09:15.578188 sshd[5557]: pam_unix(sshd:session): session closed for user core Jun 25 19:09:15.584722 systemd[1]: sshd@16-172.24.4.61:22-172.24.4.1:39808.service: Deactivated successfully. Jun 25 19:09:15.587103 systemd[1]: session-19.scope: Deactivated successfully. Jun 25 19:09:15.588258 systemd-logind[1434]: Session 19 logged out. Waiting for processes to exit. Jun 25 19:09:15.589669 systemd-logind[1434]: Removed session 19. Jun 25 19:09:20.599360 systemd[1]: Started sshd@17-172.24.4.61:22-172.24.4.1:50610.service - OpenSSH per-connection server daemon (172.24.4.1:50610). Jun 25 19:09:21.846100 sshd[5586]: Accepted publickey for core from 172.24.4.1 port 50610 ssh2: RSA SHA256:otxWgi1QNrVHlA+DL2lID1btX/FnfujF3xA/xUdUjyI Jun 25 19:09:21.847583 sshd[5586]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 19:09:21.852420 systemd-logind[1434]: New session 20 of user core. 
Jun 25 19:09:21.855869 systemd[1]: Started session-20.scope - Session 20 of User core. Jun 25 19:09:23.037554 sshd[5586]: pam_unix(sshd:session): session closed for user core Jun 25 19:09:23.046497 systemd[1]: sshd@17-172.24.4.61:22-172.24.4.1:50610.service: Deactivated successfully. Jun 25 19:09:23.050642 systemd[1]: session-20.scope: Deactivated successfully. Jun 25 19:09:23.053125 systemd-logind[1434]: Session 20 logged out. Waiting for processes to exit. Jun 25 19:09:23.055432 systemd-logind[1434]: Removed session 20. Jun 25 19:09:28.066304 systemd[1]: Started sshd@18-172.24.4.61:22-172.24.4.1:43956.service - OpenSSH per-connection server daemon (172.24.4.1:43956). Jun 25 19:09:29.148847 sshd[5647]: Accepted publickey for core from 172.24.4.1 port 43956 ssh2: RSA SHA256:otxWgi1QNrVHlA+DL2lID1btX/FnfujF3xA/xUdUjyI Jun 25 19:09:29.151695 sshd[5647]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 19:09:29.164302 systemd-logind[1434]: New session 21 of user core. Jun 25 19:09:29.170072 systemd[1]: Started session-21.scope - Session 21 of User core. Jun 25 19:09:30.133714 sshd[5647]: pam_unix(sshd:session): session closed for user core Jun 25 19:09:30.142191 systemd[1]: sshd@18-172.24.4.61:22-172.24.4.1:43956.service: Deactivated successfully. Jun 25 19:09:30.146431 systemd[1]: session-21.scope: Deactivated successfully. Jun 25 19:09:30.148718 systemd-logind[1434]: Session 21 logged out. Waiting for processes to exit. Jun 25 19:09:30.150537 systemd-logind[1434]: Removed session 21. Jun 25 19:09:35.154313 systemd[1]: Started sshd@19-172.24.4.61:22-172.24.4.1:43400.service - OpenSSH per-connection server daemon (172.24.4.1:43400). 
Jun 25 19:09:36.295775 sshd[5669]: Accepted publickey for core from 172.24.4.1 port 43400 ssh2: RSA SHA256:otxWgi1QNrVHlA+DL2lID1btX/FnfujF3xA/xUdUjyI Jun 25 19:09:36.303000 sshd[5669]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 19:09:36.317097 systemd-logind[1434]: New session 22 of user core. Jun 25 19:09:36.326085 systemd[1]: Started session-22.scope - Session 22 of User core. Jun 25 19:09:37.181020 sshd[5669]: pam_unix(sshd:session): session closed for user core Jun 25 19:09:37.186854 systemd[1]: sshd@19-172.24.4.61:22-172.24.4.1:43400.service: Deactivated successfully. Jun 25 19:09:37.190292 systemd[1]: session-22.scope: Deactivated successfully. Jun 25 19:09:37.191867 systemd-logind[1434]: Session 22 logged out. Waiting for processes to exit. Jun 25 19:09:37.193633 systemd-logind[1434]: Removed session 22. Jun 25 19:09:42.207641 systemd[1]: Started sshd@20-172.24.4.61:22-172.24.4.1:43406.service - OpenSSH per-connection server daemon (172.24.4.1:43406). Jun 25 19:09:43.418116 sshd[5709]: Accepted publickey for core from 172.24.4.1 port 43406 ssh2: RSA SHA256:otxWgi1QNrVHlA+DL2lID1btX/FnfujF3xA/xUdUjyI Jun 25 19:09:43.423235 sshd[5709]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 19:09:43.435804 systemd-logind[1434]: New session 23 of user core. Jun 25 19:09:43.440089 systemd[1]: Started session-23.scope - Session 23 of User core. Jun 25 19:09:44.390610 sshd[5709]: pam_unix(sshd:session): session closed for user core Jun 25 19:09:44.394671 systemd[1]: sshd@20-172.24.4.61:22-172.24.4.1:43406.service: Deactivated successfully. Jun 25 19:09:44.398871 systemd[1]: session-23.scope: Deactivated successfully. Jun 25 19:09:44.402173 systemd-logind[1434]: Session 23 logged out. Waiting for processes to exit. Jun 25 19:09:44.403612 systemd-logind[1434]: Removed session 23.