Oct 9 07:14:48.014308 kernel: Linux version 6.6.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Tue Oct 8 18:19:34 -00 2024
Oct 9 07:14:48.014333 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=1839da262570fb938be558d95db7fc3d986a0d71e1b77d40d35a3e2a1bac7dcd
Oct 9 07:14:48.014346 kernel: BIOS-provided physical RAM map:
Oct 9 07:14:48.014354 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Oct 9 07:14:48.014361 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Oct 9 07:14:48.014369 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Oct 9 07:14:48.014378 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Oct 9 07:14:48.014386 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Oct 9 07:14:48.014393 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct 9 07:14:48.014403 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Oct 9 07:14:48.014411 kernel: NX (Execute Disable) protection: active
Oct 9 07:14:48.014418 kernel: APIC: Static calls initialized
Oct 9 07:14:48.014426 kernel: SMBIOS 2.8 present.
Oct 9 07:14:48.014434 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Oct 9 07:14:48.014443 kernel: Hypervisor detected: KVM
Oct 9 07:14:48.014453 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct 9 07:14:48.014462 kernel: kvm-clock: using sched offset of 4763301779 cycles
Oct 9 07:14:48.014471 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct 9 07:14:48.014479 kernel: tsc: Detected 1996.249 MHz processor
Oct 9 07:14:48.014488 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 9 07:14:48.014497 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 9 07:14:48.014505 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Oct 9 07:14:48.014514 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Oct 9 07:14:48.014523 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Oct 9 07:14:48.014533 kernel: ACPI: Early table checksum verification disabled
Oct 9 07:14:48.014541 kernel: ACPI: RSDP 0x00000000000F5930 000014 (v00 BOCHS )
Oct 9 07:14:48.014549 kernel: ACPI: RSDT 0x000000007FFE1848 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 07:14:48.014558 kernel: ACPI: FACP 0x000000007FFE172C 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 07:14:48.014566 kernel: ACPI: DSDT 0x000000007FFE0040 0016EC (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 07:14:48.014574 kernel: ACPI: FACS 0x000000007FFE0000 000040
Oct 9 07:14:48.014583 kernel: ACPI: APIC 0x000000007FFE17A0 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 07:14:48.014591 kernel: ACPI: WAET 0x000000007FFE1820 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 07:14:48.014599 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe172c-0x7ffe179f]
Oct 9 07:14:48.014611 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe172b]
Oct 9 07:14:48.014619 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Oct 9 07:14:48.014627 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17a0-0x7ffe181f]
Oct 9 07:14:48.014635 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe1820-0x7ffe1847]
Oct 9 07:14:48.014643 kernel: No NUMA configuration found
Oct 9 07:14:48.014651 kernel: Faking a node at [mem 0x0000000000000000-0x000000007ffdcfff]
Oct 9 07:14:48.014660 kernel: NODE_DATA(0) allocated [mem 0x7ffd7000-0x7ffdcfff]
Oct 9 07:14:48.014672 kernel: Zone ranges:
Oct 9 07:14:48.014682 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Oct 9 07:14:48.014691 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdcfff]
Oct 9 07:14:48.014699 kernel: Normal empty
Oct 9 07:14:48.014708 kernel: Movable zone start for each node
Oct 9 07:14:48.014716 kernel: Early memory node ranges
Oct 9 07:14:48.014725 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Oct 9 07:14:48.014735 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Oct 9 07:14:48.014744 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdcfff]
Oct 9 07:14:48.014753 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 9 07:14:48.014763 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct 9 07:14:48.014771 kernel: On node 0, zone DMA32: 35 pages in unavailable ranges
Oct 9 07:14:48.014779 kernel: ACPI: PM-Timer IO Port: 0x608
Oct 9 07:14:48.014787 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct 9 07:14:48.014795 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct 9 07:14:48.014803 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct 9 07:14:48.014811 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct 9 07:14:48.014821 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 9 07:14:48.014829 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct 9 07:14:48.014837 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct 9 07:14:48.014845 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 9 07:14:48.014853 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Oct 9 07:14:48.014861 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct 9 07:14:48.014869 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Oct 9 07:14:48.014877 kernel: Booting paravirtualized kernel on KVM
Oct 9 07:14:48.014885 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 9 07:14:48.014896 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Oct 9 07:14:48.014904 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576
Oct 9 07:14:48.014912 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152
Oct 9 07:14:48.014920 kernel: pcpu-alloc: [0] 0 1
Oct 9 07:14:48.014928 kernel: kvm-guest: PV spinlocks disabled, no host support
Oct 9 07:14:48.014937 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=1839da262570fb938be558d95db7fc3d986a0d71e1b77d40d35a3e2a1bac7dcd
Oct 9 07:14:48.014946 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Oct 9 07:14:48.014954 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 9 07:14:48.014964 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Oct 9 07:14:48.014972 kernel: Fallback order for Node 0: 0
Oct 9 07:14:48.014980 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515805
Oct 9 07:14:48.014988 kernel: Policy zone: DMA32
Oct 9 07:14:48.014996 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 9 07:14:48.015005 kernel: Memory: 1965068K/2096620K available (12288K kernel code, 2304K rwdata, 22648K rodata, 49452K init, 1888K bss, 131292K reserved, 0K cma-reserved)
Oct 9 07:14:48.015030 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Oct 9 07:14:48.015040 kernel: ftrace: allocating 37706 entries in 148 pages
Oct 9 07:14:48.015068 kernel: ftrace: allocated 148 pages with 3 groups
Oct 9 07:14:48.015076 kernel: Dynamic Preempt: voluntary
Oct 9 07:14:48.015084 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 9 07:14:48.015093 kernel: rcu: RCU event tracing is enabled.
Oct 9 07:14:48.015101 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Oct 9 07:14:48.015109 kernel: Trampoline variant of Tasks RCU enabled.
Oct 9 07:14:48.015118 kernel: Rude variant of Tasks RCU enabled.
Oct 9 07:14:48.015126 kernel: Tracing variant of Tasks RCU enabled.
Oct 9 07:14:48.015133 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 9 07:14:48.015142 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Oct 9 07:14:48.015152 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Oct 9 07:14:48.015160 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 9 07:14:48.015168 kernel: Console: colour VGA+ 80x25
Oct 9 07:14:48.015176 kernel: printk: console [tty0] enabled
Oct 9 07:14:48.015184 kernel: printk: console [ttyS0] enabled
Oct 9 07:14:48.015192 kernel: ACPI: Core revision 20230628
Oct 9 07:14:48.015200 kernel: APIC: Switch to symmetric I/O mode setup
Oct 9 07:14:48.015208 kernel: x2apic enabled
Oct 9 07:14:48.015216 kernel: APIC: Switched APIC routing to: physical x2apic
Oct 9 07:14:48.015226 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Oct 9 07:14:48.015234 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Oct 9 07:14:48.015242 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249)
Oct 9 07:14:48.015251 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Oct 9 07:14:48.015259 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Oct 9 07:14:48.015267 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 9 07:14:48.015275 kernel: Spectre V2 : Mitigation: Retpolines
Oct 9 07:14:48.015283 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Oct 9 07:14:48.015291 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Oct 9 07:14:48.015301 kernel: Speculative Store Bypass: Vulnerable
Oct 9 07:14:48.015309 kernel: x86/fpu: x87 FPU will use FXSAVE
Oct 9 07:14:48.015317 kernel: Freeing SMP alternatives memory: 32K
Oct 9 07:14:48.015325 kernel: pid_max: default: 32768 minimum: 301
Oct 9 07:14:48.015333 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Oct 9 07:14:48.015341 kernel: SELinux: Initializing.
Oct 9 07:14:48.015349 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Oct 9 07:14:48.015358 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Oct 9 07:14:48.015373 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3)
Oct 9 07:14:48.015382 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Oct 9 07:14:48.015391 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Oct 9 07:14:48.015399 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Oct 9 07:14:48.015410 kernel: Performance Events: AMD PMU driver.
Oct 9 07:14:48.015418 kernel: ... version: 0
Oct 9 07:14:48.015426 kernel: ... bit width: 48
Oct 9 07:14:48.015435 kernel: ... generic registers: 4
Oct 9 07:14:48.015444 kernel: ... value mask: 0000ffffffffffff
Oct 9 07:14:48.015454 kernel: ... max period: 00007fffffffffff
Oct 9 07:14:48.015463 kernel: ... fixed-purpose events: 0
Oct 9 07:14:48.015471 kernel: ... event mask: 000000000000000f
Oct 9 07:14:48.015480 kernel: signal: max sigframe size: 1440
Oct 9 07:14:48.015488 kernel: rcu: Hierarchical SRCU implementation.
Oct 9 07:14:48.015497 kernel: rcu: Max phase no-delay instances is 400.
Oct 9 07:14:48.015505 kernel: smp: Bringing up secondary CPUs ...
Oct 9 07:14:48.015513 kernel: smpboot: x86: Booting SMP configuration:
Oct 9 07:14:48.015522 kernel: .... node #0, CPUs: #1
Oct 9 07:14:48.015533 kernel: smp: Brought up 1 node, 2 CPUs
Oct 9 07:14:48.015541 kernel: smpboot: Max logical packages: 2
Oct 9 07:14:48.015550 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS)
Oct 9 07:14:48.015558 kernel: devtmpfs: initialized
Oct 9 07:14:48.015566 kernel: x86/mm: Memory block size: 128MB
Oct 9 07:14:48.015575 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 9 07:14:48.015584 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Oct 9 07:14:48.015592 kernel: pinctrl core: initialized pinctrl subsystem
Oct 9 07:14:48.015601 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 9 07:14:48.015611 kernel: audit: initializing netlink subsys (disabled)
Oct 9 07:14:48.015619 kernel: audit: type=2000 audit(1728458086.720:1): state=initialized audit_enabled=0 res=1
Oct 9 07:14:48.015628 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 9 07:14:48.015636 kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 9 07:14:48.015644 kernel: cpuidle: using governor menu
Oct 9 07:14:48.015653 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 9 07:14:48.015661 kernel: dca service started, version 1.12.1
Oct 9 07:14:48.015670 kernel: PCI: Using configuration type 1 for base access
Oct 9 07:14:48.015679 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 9 07:14:48.015689 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 9 07:14:48.015697 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct 9 07:14:48.015706 kernel: ACPI: Added _OSI(Module Device)
Oct 9 07:14:48.015714 kernel: ACPI: Added _OSI(Processor Device)
Oct 9 07:14:48.015723 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct 9 07:14:48.015731 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 9 07:14:48.015739 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 9 07:14:48.015748 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Oct 9 07:14:48.015756 kernel: ACPI: Interpreter enabled
Oct 9 07:14:48.015766 kernel: ACPI: PM: (supports S0 S3 S5)
Oct 9 07:14:48.015775 kernel: ACPI: Using IOAPIC for interrupt routing
Oct 9 07:14:48.015784 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 9 07:14:48.015792 kernel: PCI: Using E820 reservations for host bridge windows
Oct 9 07:14:48.015800 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Oct 9 07:14:48.015809 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 9 07:14:48.015934 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Oct 9 07:14:48.016056 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Oct 9 07:14:48.016153 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Oct 9 07:14:48.016166 kernel: acpiphp: Slot [3] registered
Oct 9 07:14:48.016175 kernel: acpiphp: Slot [4] registered
Oct 9 07:14:48.016184 kernel: acpiphp: Slot [5] registered
Oct 9 07:14:48.016192 kernel: acpiphp: Slot [6] registered
Oct 9 07:14:48.016200 kernel: acpiphp: Slot [7] registered
Oct 9 07:14:48.016209 kernel: acpiphp: Slot [8] registered
Oct 9 07:14:48.016217 kernel: acpiphp: Slot [9] registered
Oct 9 07:14:48.016228 kernel: acpiphp: Slot [10] registered
Oct 9 07:14:48.016237 kernel: acpiphp: Slot [11] registered
Oct 9 07:14:48.016245 kernel: acpiphp: Slot [12] registered
Oct 9 07:14:48.016254 kernel: acpiphp: Slot [13] registered
Oct 9 07:14:48.016262 kernel: acpiphp: Slot [14] registered
Oct 9 07:14:48.016270 kernel: acpiphp: Slot [15] registered
Oct 9 07:14:48.016279 kernel: acpiphp: Slot [16] registered
Oct 9 07:14:48.016287 kernel: acpiphp: Slot [17] registered
Oct 9 07:14:48.016296 kernel: acpiphp: Slot [18] registered
Oct 9 07:14:48.016304 kernel: acpiphp: Slot [19] registered
Oct 9 07:14:48.016314 kernel: acpiphp: Slot [20] registered
Oct 9 07:14:48.016322 kernel: acpiphp: Slot [21] registered
Oct 9 07:14:48.016331 kernel: acpiphp: Slot [22] registered
Oct 9 07:14:48.016339 kernel: acpiphp: Slot [23] registered
Oct 9 07:14:48.016347 kernel: acpiphp: Slot [24] registered
Oct 9 07:14:48.016355 kernel: acpiphp: Slot [25] registered
Oct 9 07:14:48.016364 kernel: acpiphp: Slot [26] registered
Oct 9 07:14:48.016372 kernel: acpiphp: Slot [27] registered
Oct 9 07:14:48.016380 kernel: acpiphp: Slot [28] registered
Oct 9 07:14:48.016389 kernel: acpiphp: Slot [29] registered
Oct 9 07:14:48.016399 kernel: acpiphp: Slot [30] registered
Oct 9 07:14:48.016408 kernel: acpiphp: Slot [31] registered
Oct 9 07:14:48.016416 kernel: PCI host bridge to bus 0000:00
Oct 9 07:14:48.016517 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Oct 9 07:14:48.016605 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Oct 9 07:14:48.016691 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 9 07:14:48.016776 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Oct 9 07:14:48.016864 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Oct 9 07:14:48.016948 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 9 07:14:48.017082 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Oct 9 07:14:48.017187 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Oct 9 07:14:48.017292 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Oct 9 07:14:48.017388 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f]
Oct 9 07:14:48.017490 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Oct 9 07:14:48.017596 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Oct 9 07:14:48.017693 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Oct 9 07:14:48.017788 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Oct 9 07:14:48.017892 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Oct 9 07:14:48.017988 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Oct 9 07:14:48.018487 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Oct 9 07:14:48.018603 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Oct 9 07:14:48.018701 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Oct 9 07:14:48.018797 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Oct 9 07:14:48.018891 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
Oct 9 07:14:48.018986 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
Oct 9 07:14:48.019131 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct 9 07:14:48.019237 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Oct 9 07:14:48.019336 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf]
Oct 9 07:14:48.019428 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
Oct 9 07:14:48.019522 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Oct 9 07:14:48.019615 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
Oct 9 07:14:48.019721 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Oct 9 07:14:48.019817 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Oct 9 07:14:48.019940 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
Oct 9 07:14:48.020103 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Oct 9 07:14:48.020209 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
Oct 9 07:14:48.020303 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff]
Oct 9 07:14:48.020395 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Oct 9 07:14:48.020496 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
Oct 9 07:14:48.020591 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f]
Oct 9 07:14:48.020693 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Oct 9 07:14:48.020713 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct 9 07:14:48.020722 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct 9 07:14:48.020732 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct 9 07:14:48.020741 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct 9 07:14:48.020751 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Oct 9 07:14:48.020760 kernel: iommu: Default domain type: Translated
Oct 9 07:14:48.020769 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 9 07:14:48.020778 kernel: PCI: Using ACPI for IRQ routing
Oct 9 07:14:48.020790 kernel: PCI: pci_cache_line_size set to 64 bytes
Oct 9 07:14:48.020800 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Oct 9 07:14:48.020809 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Oct 9 07:14:48.020903 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Oct 9 07:14:48.020996 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Oct 9 07:14:48.021134 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct 9 07:14:48.021149 kernel: vgaarb: loaded
Oct 9 07:14:48.021159 kernel: clocksource: Switched to clocksource kvm-clock
Oct 9 07:14:48.021168 kernel: VFS: Disk quotas dquot_6.6.0
Oct 9 07:14:48.021182 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 9 07:14:48.021192 kernel: pnp: PnP ACPI init
Oct 9 07:14:48.021301 kernel: pnp 00:03: [dma 2]
Oct 9 07:14:48.021318 kernel: pnp: PnP ACPI: found 5 devices
Oct 9 07:14:48.021327 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 9 07:14:48.021336 kernel: NET: Registered PF_INET protocol family
Oct 9 07:14:48.021347 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 9 07:14:48.021356 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Oct 9 07:14:48.021365 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 9 07:14:48.021378 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 9 07:14:48.021387 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Oct 9 07:14:48.021397 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Oct 9 07:14:48.021406 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Oct 9 07:14:48.021415 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Oct 9 07:14:48.021424 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 9 07:14:48.021434 kernel: NET: Registered PF_XDP protocol family
Oct 9 07:14:48.021519 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Oct 9 07:14:48.021622 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Oct 9 07:14:48.021705 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct 9 07:14:48.021786 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Oct 9 07:14:48.021871 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Oct 9 07:14:48.021985 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Oct 9 07:14:48.022128 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Oct 9 07:14:48.022144 kernel: PCI: CLS 0 bytes, default 64
Oct 9 07:14:48.022154 kernel: Initialise system trusted keyrings
Oct 9 07:14:48.022168 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Oct 9 07:14:48.022177 kernel: Key type asymmetric registered
Oct 9 07:14:48.022186 kernel: Asymmetric key parser 'x509' registered
Oct 9 07:14:48.022208 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Oct 9 07:14:48.022217 kernel: io scheduler mq-deadline registered
Oct 9 07:14:48.022226 kernel: io scheduler kyber registered
Oct 9 07:14:48.022236 kernel: io scheduler bfq registered
Oct 9 07:14:48.022245 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Oct 9 07:14:48.022254 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Oct 9 07:14:48.022267 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Oct 9 07:14:48.022276 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Oct 9 07:14:48.022286 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Oct 9 07:14:48.022295 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 9 07:14:48.022305 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 9 07:14:48.022314 kernel: random: crng init done
Oct 9 07:14:48.022323 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct 9 07:14:48.022334 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct 9 07:14:48.022343 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct 9 07:14:48.022452 kernel: rtc_cmos 00:04: RTC can wake from S4
Oct 9 07:14:48.022468 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Oct 9 07:14:48.022551 kernel: rtc_cmos 00:04: registered as rtc0
Oct 9 07:14:48.022636 kernel: rtc_cmos 00:04: setting system clock to 2024-10-09T07:14:47 UTC (1728458087)
Oct 9 07:14:48.022719 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Oct 9 07:14:48.022733 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Oct 9 07:14:48.022742 kernel: NET: Registered PF_INET6 protocol family
Oct 9 07:14:48.022752 kernel: Segment Routing with IPv6
Oct 9 07:14:48.022765 kernel: In-situ OAM (IOAM) with IPv6
Oct 9 07:14:48.022774 kernel: NET: Registered PF_PACKET protocol family
Oct 9 07:14:48.022783 kernel: Key type dns_resolver registered
Oct 9 07:14:48.022792 kernel: IPI shorthand broadcast: enabled
Oct 9 07:14:48.022801 kernel: sched_clock: Marking stable (941008995, 125328672)->(1070848756, -4511089)
Oct 9 07:14:48.022812 kernel: registered taskstats version 1
Oct 9 07:14:48.022821 kernel: Loading compiled-in X.509 certificates
Oct 9 07:14:48.022830 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.54-flatcar: 0b7ba59a46acf969bcd97270f441857501641c76'
Oct 9 07:14:48.022839 kernel: Key type .fscrypt registered
Oct 9 07:14:48.022850 kernel: Key type fscrypt-provisioning registered
Oct 9 07:14:48.022859 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 9 07:14:48.022868 kernel: ima: Allocated hash algorithm: sha1
Oct 9 07:14:48.022878 kernel: ima: No architecture policies found
Oct 9 07:14:48.022887 kernel: clk: Disabling unused clocks
Oct 9 07:14:48.022896 kernel: Freeing unused kernel image (initmem) memory: 49452K
Oct 9 07:14:48.022905 kernel: Write protecting the kernel read-only data: 36864k
Oct 9 07:14:48.022915 kernel: Freeing unused kernel image (rodata/data gap) memory: 1928K
Oct 9 07:14:48.022924 kernel: Run /init as init process
Oct 9 07:14:48.022935 kernel: with arguments:
Oct 9 07:14:48.022944 kernel: /init
Oct 9 07:14:48.022953 kernel: with environment:
Oct 9 07:14:48.022962 kernel: HOME=/
Oct 9 07:14:48.022971 kernel: TERM=linux
Oct 9 07:14:48.022980 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Oct 9 07:14:48.022992 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 9 07:14:48.023006 systemd[1]: Detected virtualization kvm.
Oct 9 07:14:48.023033 systemd[1]: Detected architecture x86-64.
Oct 9 07:14:48.023043 systemd[1]: Running in initrd.
Oct 9 07:14:48.023053 systemd[1]: No hostname configured, using default hostname.
Oct 9 07:14:48.023063 systemd[1]: Hostname set to .
Oct 9 07:14:48.023074 systemd[1]: Initializing machine ID from VM UUID.
Oct 9 07:14:48.023083 systemd[1]: Queued start job for default target initrd.target.
Oct 9 07:14:48.023093 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 9 07:14:48.023106 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 9 07:14:48.023117 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Oct 9 07:14:48.023128 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 9 07:14:48.023138 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Oct 9 07:14:48.023148 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Oct 9 07:14:48.023159 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Oct 9 07:14:48.023169 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Oct 9 07:14:48.023181 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 9 07:14:48.023191 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 9 07:14:48.023201 systemd[1]: Reached target paths.target - Path Units.
Oct 9 07:14:48.023211 systemd[1]: Reached target slices.target - Slice Units.
Oct 9 07:14:48.023230 systemd[1]: Reached target swap.target - Swaps.
Oct 9 07:14:48.023242 systemd[1]: Reached target timers.target - Timer Units.
Oct 9 07:14:48.023254 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Oct 9 07:14:48.023265 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 9 07:14:48.023275 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Oct 9 07:14:48.023285 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Oct 9 07:14:48.023295 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 9 07:14:48.023306 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 9 07:14:48.023317 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 9 07:14:48.023327 systemd[1]: Reached target sockets.target - Socket Units.
Oct 9 07:14:48.023337 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Oct 9 07:14:48.023350 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 9 07:14:48.023360 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Oct 9 07:14:48.023371 systemd[1]: Starting systemd-fsck-usr.service...
Oct 9 07:14:48.023381 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 9 07:14:48.023391 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 9 07:14:48.023401 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 9 07:14:48.023430 systemd-journald[185]: Collecting audit messages is disabled.
Oct 9 07:14:48.023456 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Oct 9 07:14:48.023468 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 9 07:14:48.023479 systemd-journald[185]: Journal started
Oct 9 07:14:48.023504 systemd-journald[185]: Runtime Journal (/run/log/journal/6f23bd769364494fa8bfbddef332fbbd) is 4.9M, max 39.3M, 34.4M free.
Oct 9 07:14:48.027064 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 9 07:14:48.027382 systemd[1]: Finished systemd-fsck-usr.service.
Oct 9 07:14:48.034580 systemd-modules-load[186]: Inserted module 'overlay'
Oct 9 07:14:48.037454 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 9 07:14:48.087754 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 9 07:14:48.087780 kernel: Bridge firewalling registered
Oct 9 07:14:48.072248 systemd-modules-load[186]: Inserted module 'br_netfilter'
Oct 9 07:14:48.092169 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Oct 9 07:14:48.093869 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 9 07:14:48.094994 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 07:14:48.098601 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 9 07:14:48.107190 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 9 07:14:48.111159 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 9 07:14:48.114797 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 9 07:14:48.118083 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Oct 9 07:14:48.124235 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 9 07:14:48.126072 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 9 07:14:48.131201 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Oct 9 07:14:48.133704 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 9 07:14:48.134481 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 9 07:14:48.147224 dracut-cmdline[218]: dracut-dracut-053 Oct 9 07:14:48.149689 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=1839da262570fb938be558d95db7fc3d986a0d71e1b77d40d35a3e2a1bac7dcd Oct 9 07:14:48.167268 systemd-resolved[219]: Positive Trust Anchors: Oct 9 07:14:48.167283 systemd-resolved[219]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 9 07:14:48.167326 systemd-resolved[219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Oct 9 07:14:48.174711 systemd-resolved[219]: Defaulting to hostname 'linux'. Oct 9 07:14:48.175702 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 9 07:14:48.176373 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 9 07:14:48.234059 kernel: SCSI subsystem initialized Oct 9 07:14:48.247087 kernel: Loading iSCSI transport class v2.0-870. Oct 9 07:14:48.261063 kernel: iscsi: registered transport (tcp) Oct 9 07:14:48.289193 kernel: iscsi: registered transport (qla4xxx) Oct 9 07:14:48.289287 kernel: QLogic iSCSI HBA Driver Oct 9 07:14:48.324245 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Oct 9 07:14:48.330147 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Oct 9 07:14:48.358417 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Oct 9 07:14:48.358514 kernel: device-mapper: uevent: version 1.0.3 Oct 9 07:14:48.359573 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Oct 9 07:14:48.409104 kernel: raid6: sse2x4 gen() 12457 MB/s Oct 9 07:14:48.428080 kernel: raid6: sse2x2 gen() 5985 MB/s Oct 9 07:14:48.445232 kernel: raid6: sse2x1 gen() 6474 MB/s Oct 9 07:14:48.445293 kernel: raid6: using algorithm sse2x4 gen() 12457 MB/s Oct 9 07:14:48.463420 kernel: raid6: .... xor() 7184 MB/s, rmw enabled Oct 9 07:14:48.463490 kernel: raid6: using ssse3x2 recovery algorithm Oct 9 07:14:48.491421 kernel: xor: measuring software checksum speed Oct 9 07:14:48.491629 kernel: prefetch64-sse : 17125 MB/sec Oct 9 07:14:48.492071 kernel: generic_sse : 14477 MB/sec Oct 9 07:14:48.493288 kernel: xor: using function: prefetch64-sse (17125 MB/sec) Oct 9 07:14:48.705086 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 9 07:14:48.721837 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 9 07:14:48.732389 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 9 07:14:48.747974 systemd-udevd[403]: Using default interface naming scheme 'v255'. Oct 9 07:14:48.752401 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 9 07:14:48.762296 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Oct 9 07:14:48.782332 dracut-pre-trigger[416]: rd.md=0: removing MD RAID activation Oct 9 07:14:48.826195 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Oct 9 07:14:48.835315 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 9 07:14:48.876983 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 9 07:14:48.884331 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 9 07:14:48.914699 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. 
Oct 9 07:14:48.917308 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 9 07:14:48.920335 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 9 07:14:48.922917 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 9 07:14:48.932163 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 9 07:14:48.947946 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 9 07:14:48.964039 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues Oct 9 07:14:48.975034 kernel: virtio_blk virtio2: [vda] 41943040 512-byte logical blocks (21.5 GB/20.0 GiB) Oct 9 07:14:48.984163 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 9 07:14:48.984181 kernel: GPT:17805311 != 41943039 Oct 9 07:14:48.984193 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 9 07:14:48.984205 kernel: GPT:17805311 != 41943039 Oct 9 07:14:48.984216 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 9 07:14:48.984228 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 9 07:14:48.996378 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 9 07:14:48.996585 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 9 07:14:48.997414 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 9 07:14:48.997975 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 9 07:14:48.998131 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 07:14:48.999968 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 9 07:14:49.017067 kernel: libata version 3.00 loaded. Oct 9 07:14:49.017481 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Oct 9 07:14:49.025089 kernel: ata_piix 0000:00:01.1: version 2.13 Oct 9 07:14:49.026719 kernel: scsi host0: ata_piix Oct 9 07:14:49.026907 kernel: scsi host1: ata_piix Oct 9 07:14:49.027855 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 Oct 9 07:14:49.030742 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 Oct 9 07:14:49.053032 kernel: BTRFS: device fsid a442e753-4749-4732-ba27-ea845965fe4a devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (461) Oct 9 07:14:49.057404 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Oct 9 07:14:49.093003 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (450) Oct 9 07:14:49.096849 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 07:14:49.111678 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Oct 9 07:14:49.116295 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Oct 9 07:14:49.116846 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Oct 9 07:14:49.123398 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 9 07:14:49.135157 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 9 07:14:49.137510 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 9 07:14:49.152930 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 9 07:14:49.158291 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 9 07:14:49.158332 disk-uuid[499]: Primary Header is updated. Oct 9 07:14:49.158332 disk-uuid[499]: Secondary Entries is updated. Oct 9 07:14:49.158332 disk-uuid[499]: Secondary Header is updated. 
Oct 9 07:14:50.180432 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 9 07:14:50.182807 disk-uuid[509]: The operation has completed successfully. Oct 9 07:14:50.253601 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 9 07:14:50.253854 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 9 07:14:50.280162 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Oct 9 07:14:50.296617 sh[523]: Success Oct 9 07:14:50.320041 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3" Oct 9 07:14:50.384121 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Oct 9 07:14:50.401198 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Oct 9 07:14:50.403565 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Oct 9 07:14:50.434695 kernel: BTRFS info (device dm-0): first mount of filesystem a442e753-4749-4732-ba27-ea845965fe4a Oct 9 07:14:50.434790 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Oct 9 07:14:50.438263 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Oct 9 07:14:50.450593 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 9 07:14:50.453307 kernel: BTRFS info (device dm-0): using free space tree Oct 9 07:14:50.470686 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Oct 9 07:14:50.472943 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Oct 9 07:14:50.479301 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Oct 9 07:14:50.495655 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Oct 9 07:14:50.523712 kernel: BTRFS info (device vda6): first mount of filesystem aa256cb8-f25c-41d0-8582-dc8cedfde7ce Oct 9 07:14:50.523803 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 9 07:14:50.527923 kernel: BTRFS info (device vda6): using free space tree Oct 9 07:14:50.539105 kernel: BTRFS info (device vda6): auto enabling async discard Oct 9 07:14:50.563769 systemd[1]: mnt-oem.mount: Deactivated successfully. Oct 9 07:14:50.568285 kernel: BTRFS info (device vda6): last unmount of filesystem aa256cb8-f25c-41d0-8582-dc8cedfde7ce Oct 9 07:14:50.583874 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 9 07:14:50.594394 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Oct 9 07:14:50.657216 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 9 07:14:50.666430 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 9 07:14:50.687962 systemd-networkd[705]: lo: Link UP Oct 9 07:14:50.687974 systemd-networkd[705]: lo: Gained carrier Oct 9 07:14:50.689223 systemd-networkd[705]: Enumeration completed Oct 9 07:14:50.689592 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 9 07:14:50.690404 systemd-networkd[705]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 9 07:14:50.690408 systemd-networkd[705]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 9 07:14:50.691310 systemd[1]: Reached target network.target - Network. Oct 9 07:14:50.692233 systemd-networkd[705]: eth0: Link UP Oct 9 07:14:50.692237 systemd-networkd[705]: eth0: Gained carrier Oct 9 07:14:50.692244 systemd-networkd[705]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Oct 9 07:14:50.710077 systemd-networkd[705]: eth0: DHCPv4 address 172.24.4.220/24, gateway 172.24.4.1 acquired from 172.24.4.1 Oct 9 07:14:50.753351 ignition[632]: Ignition 2.18.0 Oct 9 07:14:50.753366 ignition[632]: Stage: fetch-offline Oct 9 07:14:50.753413 ignition[632]: no configs at "/usr/lib/ignition/base.d" Oct 9 07:14:50.755138 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 9 07:14:50.753424 ignition[632]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Oct 9 07:14:50.753603 ignition[632]: parsed url from cmdline: "" Oct 9 07:14:50.753607 ignition[632]: no config URL provided Oct 9 07:14:50.753613 ignition[632]: reading system config file "/usr/lib/ignition/user.ign" Oct 9 07:14:50.753625 ignition[632]: no config at "/usr/lib/ignition/user.ign" Oct 9 07:14:50.753630 ignition[632]: failed to fetch config: resource requires networking Oct 9 07:14:50.753838 ignition[632]: Ignition finished successfully Oct 9 07:14:50.766219 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Oct 9 07:14:50.780008 ignition[714]: Ignition 2.18.0 Oct 9 07:14:50.780043 ignition[714]: Stage: fetch Oct 9 07:14:50.780235 ignition[714]: no configs at "/usr/lib/ignition/base.d" Oct 9 07:14:50.780248 ignition[714]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Oct 9 07:14:50.780353 ignition[714]: parsed url from cmdline: "" Oct 9 07:14:50.780357 ignition[714]: no config URL provided Oct 9 07:14:50.780362 ignition[714]: reading system config file "/usr/lib/ignition/user.ign" Oct 9 07:14:50.780371 ignition[714]: no config at "/usr/lib/ignition/user.ign" Oct 9 07:14:50.780493 ignition[714]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Oct 9 07:14:50.780602 ignition[714]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Oct 9 07:14:50.780640 ignition[714]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... 
Oct 9 07:14:50.989419 ignition[714]: GET result: OK Oct 9 07:14:50.990360 ignition[714]: parsing config with SHA512: 018f902b502c04f57d82f1e276593da493d89d7392ccc0ee4d16c34a0593a044ce0dac70cf885c991467bae1fd1e4247be0042bd787a1f12abf5c89de0ddcbe4 Oct 9 07:14:51.000588 unknown[714]: fetched base config from "system" Oct 9 07:14:51.000615 unknown[714]: fetched base config from "system" Oct 9 07:14:51.001766 ignition[714]: fetch: fetch complete Oct 9 07:14:51.000634 unknown[714]: fetched user config from "openstack" Oct 9 07:14:51.001780 ignition[714]: fetch: fetch passed Oct 9 07:14:51.006258 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Oct 9 07:14:51.001880 ignition[714]: Ignition finished successfully Oct 9 07:14:51.025388 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Oct 9 07:14:51.054459 ignition[721]: Ignition 2.18.0 Oct 9 07:14:51.054489 ignition[721]: Stage: kargs Oct 9 07:14:51.056619 ignition[721]: no configs at "/usr/lib/ignition/base.d" Oct 9 07:14:51.056668 ignition[721]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Oct 9 07:14:51.062173 ignition[721]: kargs: kargs passed Oct 9 07:14:51.062283 ignition[721]: Ignition finished successfully Oct 9 07:14:51.064653 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Oct 9 07:14:51.070329 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Oct 9 07:14:51.107229 ignition[728]: Ignition 2.18.0 Oct 9 07:14:51.107246 ignition[728]: Stage: disks Oct 9 07:14:51.107620 ignition[728]: no configs at "/usr/lib/ignition/base.d" Oct 9 07:14:51.107646 ignition[728]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Oct 9 07:14:51.111773 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 9 07:14:51.109918 ignition[728]: disks: disks passed Oct 9 07:14:51.115236 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
Oct 9 07:14:51.110011 ignition[728]: Ignition finished successfully Oct 9 07:14:51.116956 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 9 07:14:51.119213 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 9 07:14:51.121833 systemd[1]: Reached target sysinit.target - System Initialization. Oct 9 07:14:51.124075 systemd[1]: Reached target basic.target - Basic System. Oct 9 07:14:51.139317 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Oct 9 07:14:51.166140 systemd-resolved[219]: Detected conflict on linux IN A 172.24.4.220 Oct 9 07:14:51.166175 systemd-resolved[219]: Hostname conflict, changing published hostname from 'linux' to 'linux3'. Oct 9 07:14:51.171178 systemd-fsck[738]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Oct 9 07:14:51.180765 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 9 07:14:51.188226 systemd[1]: Mounting sysroot.mount - /sysroot... Oct 9 07:14:51.361050 kernel: EXT4-fs (vda9): mounted filesystem ef891253-2811-499a-a9aa-02f0764c1b95 r/w with ordered data mode. Quota mode: none. Oct 9 07:14:51.362284 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 9 07:14:51.363893 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 9 07:14:51.376130 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 9 07:14:51.378562 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 9 07:14:51.380282 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Oct 9 07:14:51.387187 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Oct 9 07:14:51.387781 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). 
Oct 9 07:14:51.387810 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 9 07:14:51.396738 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 9 07:14:51.406075 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (746) Oct 9 07:14:51.407236 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Oct 9 07:14:51.439635 kernel: BTRFS info (device vda6): first mount of filesystem aa256cb8-f25c-41d0-8582-dc8cedfde7ce Oct 9 07:14:51.439706 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 9 07:14:51.439720 kernel: BTRFS info (device vda6): using free space tree Oct 9 07:14:51.454052 kernel: BTRFS info (device vda6): auto enabling async discard Oct 9 07:14:51.464999 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 9 07:14:51.543737 initrd-setup-root[775]: cut: /sysroot/etc/passwd: No such file or directory Oct 9 07:14:51.553474 initrd-setup-root[782]: cut: /sysroot/etc/group: No such file or directory Oct 9 07:14:51.561095 initrd-setup-root[789]: cut: /sysroot/etc/shadow: No such file or directory Oct 9 07:14:51.575517 initrd-setup-root[796]: cut: /sysroot/etc/gshadow: No such file or directory Oct 9 07:14:51.659696 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 9 07:14:51.666141 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 9 07:14:51.668142 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 9 07:14:51.676797 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Oct 9 07:14:51.679300 kernel: BTRFS info (device vda6): last unmount of filesystem aa256cb8-f25c-41d0-8582-dc8cedfde7ce Oct 9 07:14:51.704707 ignition[863]: INFO : Ignition 2.18.0 Oct 9 07:14:51.705544 ignition[863]: INFO : Stage: mount Oct 9 07:14:51.706426 ignition[863]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 9 07:14:51.707367 ignition[863]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Oct 9 07:14:51.711820 ignition[863]: INFO : mount: mount passed Oct 9 07:14:51.712866 ignition[863]: INFO : Ignition finished successfully Oct 9 07:14:51.711941 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Oct 9 07:14:51.713607 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 9 07:14:52.217892 systemd-networkd[705]: eth0: Gained IPv6LL Oct 9 07:14:58.620666 coreos-metadata[748]: Oct 09 07:14:58.620 WARN failed to locate config-drive, using the metadata service API instead Oct 9 07:14:58.662223 coreos-metadata[748]: Oct 09 07:14:58.662 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Oct 9 07:14:58.679739 coreos-metadata[748]: Oct 09 07:14:58.679 INFO Fetch successful Oct 9 07:14:58.679739 coreos-metadata[748]: Oct 09 07:14:58.679 INFO wrote hostname ci-3975-2-2-4-dcc5873578.novalocal to /sysroot/etc/hostname Oct 9 07:14:58.683650 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Oct 9 07:14:58.683985 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Oct 9 07:14:58.695234 systemd[1]: Starting ignition-files.service - Ignition (files)... Oct 9 07:14:58.734689 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Oct 9 07:14:58.752278 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (881) Oct 9 07:14:58.759737 kernel: BTRFS info (device vda6): first mount of filesystem aa256cb8-f25c-41d0-8582-dc8cedfde7ce Oct 9 07:14:58.759819 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 9 07:14:58.762751 kernel: BTRFS info (device vda6): using free space tree Oct 9 07:14:58.772090 kernel: BTRFS info (device vda6): auto enabling async discard Oct 9 07:14:58.777475 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 9 07:14:58.817711 ignition[899]: INFO : Ignition 2.18.0 Oct 9 07:14:58.817711 ignition[899]: INFO : Stage: files Oct 9 07:14:58.817711 ignition[899]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 9 07:14:58.822446 ignition[899]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Oct 9 07:14:58.822446 ignition[899]: DEBUG : files: compiled without relabeling support, skipping Oct 9 07:14:58.822446 ignition[899]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 9 07:14:58.822446 ignition[899]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 9 07:14:58.831656 ignition[899]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 9 07:14:58.831656 ignition[899]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 9 07:14:58.831656 ignition[899]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 9 07:14:58.831656 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Oct 9 07:14:58.831656 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Oct 9 07:14:58.825110 unknown[899]: wrote ssh authorized keys file for user: core Oct 9 07:14:58.903382 ignition[899]: 
INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 9 07:14:59.213169 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Oct 9 07:14:59.213169 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Oct 9 07:14:59.217879 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Oct 9 07:14:59.217879 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 9 07:14:59.217879 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 9 07:14:59.217879 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 9 07:14:59.217879 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 9 07:14:59.217879 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 9 07:14:59.217879 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 9 07:14:59.217879 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 9 07:14:59.217879 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 9 07:14:59.217879 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Oct 9 
07:14:59.217879 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Oct 9 07:14:59.217879 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Oct 9 07:14:59.217879 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Oct 9 07:14:59.831606 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Oct 9 07:15:01.706508 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Oct 9 07:15:01.706508 ignition[899]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Oct 9 07:15:01.710100 ignition[899]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 9 07:15:01.712463 ignition[899]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 9 07:15:01.712463 ignition[899]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Oct 9 07:15:01.712463 ignition[899]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Oct 9 07:15:01.712463 ignition[899]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Oct 9 07:15:01.712463 ignition[899]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 9 07:15:01.712463 ignition[899]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 9 
07:15:01.712463 ignition[899]: INFO : files: files passed Oct 9 07:15:01.712463 ignition[899]: INFO : Ignition finished successfully Oct 9 07:15:01.711724 systemd[1]: Finished ignition-files.service - Ignition (files). Oct 9 07:15:01.721476 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Oct 9 07:15:01.725156 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Oct 9 07:15:01.726470 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 9 07:15:01.726552 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Oct 9 07:15:01.744072 initrd-setup-root-after-ignition[928]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 9 07:15:01.745279 initrd-setup-root-after-ignition[928]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Oct 9 07:15:01.746762 initrd-setup-root-after-ignition[932]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 9 07:15:01.748647 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 9 07:15:01.749382 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Oct 9 07:15:01.762255 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Oct 9 07:15:01.788948 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 9 07:15:01.789194 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Oct 9 07:15:01.791672 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Oct 9 07:15:01.793418 systemd[1]: Reached target initrd.target - Initrd Default Target. Oct 9 07:15:01.795334 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Oct 9 07:15:01.807320 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... 
Oct 9 07:15:01.822919 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 9 07:15:01.834278 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Oct 9 07:15:01.849091 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Oct 9 07:15:01.850427 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 9 07:15:01.851755 systemd[1]: Stopped target timers.target - Timer Units. Oct 9 07:15:01.852847 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 9 07:15:01.852981 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 9 07:15:01.855067 systemd[1]: Stopped target initrd.target - Initrd Default Target. Oct 9 07:15:01.856871 systemd[1]: Stopped target basic.target - Basic System. Oct 9 07:15:01.858579 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Oct 9 07:15:01.860483 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Oct 9 07:15:01.862466 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Oct 9 07:15:01.864539 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Oct 9 07:15:01.866641 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Oct 9 07:15:01.868670 systemd[1]: Stopped target sysinit.target - System Initialization. Oct 9 07:15:01.870545 systemd[1]: Stopped target local-fs.target - Local File Systems. Oct 9 07:15:01.872285 systemd[1]: Stopped target swap.target - Swaps. Oct 9 07:15:01.873807 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 9 07:15:01.874112 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Oct 9 07:15:01.876073 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Oct 9 07:15:01.878007 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Oct 9 07:15:01.879915 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Oct 9 07:15:01.880259 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 9 07:15:01.881771 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 9 07:15:01.882049 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Oct 9 07:15:01.884194 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 9 07:15:01.884556 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 9 07:15:01.886100 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 9 07:15:01.886343 systemd[1]: Stopped ignition-files.service - Ignition (files).
Oct 9 07:15:01.894211 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Oct 9 07:15:01.895305 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 9 07:15:01.895978 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 9 07:15:01.905287 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Oct 9 07:15:01.905834 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 9 07:15:01.905986 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 9 07:15:01.906618 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 9 07:15:01.906738 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 9 07:15:01.915469 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 9 07:15:01.916415 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Oct 9 07:15:01.917888 ignition[952]: INFO : Ignition 2.18.0
Oct 9 07:15:01.917888 ignition[952]: INFO : Stage: umount
Oct 9 07:15:01.917888 ignition[952]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 9 07:15:01.917888 ignition[952]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Oct 9 07:15:01.923133 ignition[952]: INFO : umount: umount passed
Oct 9 07:15:01.923657 ignition[952]: INFO : Ignition finished successfully
Oct 9 07:15:01.925911 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 9 07:15:01.926645 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Oct 9 07:15:01.927549 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 9 07:15:01.927623 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Oct 9 07:15:01.929237 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 9 07:15:01.929278 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Oct 9 07:15:01.931189 systemd[1]: ignition-fetch.service: Deactivated successfully.
Oct 9 07:15:01.931233 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Oct 9 07:15:01.932644 systemd[1]: Stopped target network.target - Network.
Oct 9 07:15:01.933117 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 9 07:15:01.933163 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 9 07:15:01.934915 systemd[1]: Stopped target paths.target - Path Units.
Oct 9 07:15:01.935391 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 9 07:15:01.935947 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 9 07:15:01.937110 systemd[1]: Stopped target slices.target - Slice Units.
Oct 9 07:15:01.937567 systemd[1]: Stopped target sockets.target - Socket Units.
Oct 9 07:15:01.938154 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 9 07:15:01.938196 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Oct 9 07:15:01.939236 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 9 07:15:01.939268 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 9 07:15:01.940129 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 9 07:15:01.940169 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Oct 9 07:15:01.941177 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Oct 9 07:15:01.941217 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Oct 9 07:15:01.942565 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Oct 9 07:15:01.943605 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Oct 9 07:15:01.945411 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 9 07:15:01.946039 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 9 07:15:01.946119 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Oct 9 07:15:01.946142 systemd-networkd[705]: eth0: DHCPv6 lease lost
Oct 9 07:15:01.948984 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 9 07:15:01.949088 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Oct 9 07:15:01.950295 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 9 07:15:01.950410 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Oct 9 07:15:01.953297 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 9 07:15:01.953724 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Oct 9 07:15:01.954747 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 9 07:15:01.954798 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Oct 9 07:15:01.962158 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Oct 9 07:15:01.963199 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 9 07:15:01.963253 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 9 07:15:01.963905 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 9 07:15:01.963950 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Oct 9 07:15:01.965124 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 9 07:15:01.965170 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Oct 9 07:15:01.966470 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 9 07:15:01.966518 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Oct 9 07:15:01.967636 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 9 07:15:01.981296 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 9 07:15:01.981423 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 9 07:15:01.983547 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 9 07:15:01.983633 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Oct 9 07:15:01.984919 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 9 07:15:01.984970 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Oct 9 07:15:01.986183 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 9 07:15:01.986213 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 9 07:15:01.987262 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 9 07:15:01.987304 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Oct 9 07:15:01.988759 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 9 07:15:01.988806 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Oct 9 07:15:01.989823 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 9 07:15:01.989863 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 9 07:15:02.000386 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Oct 9 07:15:02.000911 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 9 07:15:02.000963 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 9 07:15:02.001529 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 9 07:15:02.001611 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 07:15:02.005534 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 9 07:15:02.005735 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Oct 9 07:15:02.007189 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Oct 9 07:15:02.012137 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Oct 9 07:15:02.018797 systemd[1]: Switching root.
Oct 9 07:15:02.043433 systemd-journald[185]: Journal stopped
Oct 9 07:15:04.008977 systemd-journald[185]: Received SIGTERM from PID 1 (systemd).
Oct 9 07:15:04.014684 kernel: SELinux: policy capability network_peer_controls=1
Oct 9 07:15:04.014712 kernel: SELinux: policy capability open_perms=1
Oct 9 07:15:04.014739 kernel: SELinux: policy capability extended_socket_class=1
Oct 9 07:15:04.014753 kernel: SELinux: policy capability always_check_network=0
Oct 9 07:15:04.014769 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 9 07:15:04.014783 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 9 07:15:04.014797 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 9 07:15:04.014815 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 9 07:15:04.014830 kernel: audit: type=1403 audit(1728458102.926:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 9 07:15:04.014850 systemd[1]: Successfully loaded SELinux policy in 77.039ms.
Oct 9 07:15:04.014871 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 29.153ms.
Oct 9 07:15:04.014908 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 9 07:15:04.014936 systemd[1]: Detected virtualization kvm.
Oct 9 07:15:04.014953 systemd[1]: Detected architecture x86-64.
Oct 9 07:15:04.014967 systemd[1]: Detected first boot.
Oct 9 07:15:04.014981 systemd[1]: Hostname set to <ci-3975-2-2-4-dcc5873578.novalocal>.
Oct 9 07:15:04.014997 systemd[1]: Initializing machine ID from VM UUID.
Oct 9 07:15:04.015064 zram_generator::config[995]: No configuration found.
Oct 9 07:15:04.015085 systemd[1]: Populated /etc with preset unit settings.
Oct 9 07:15:04.015102 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 9 07:15:04.015128 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Oct 9 07:15:04.015149 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 9 07:15:04.015178 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Oct 9 07:15:04.015195 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Oct 9 07:15:04.015210 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Oct 9 07:15:04.015224 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Oct 9 07:15:04.015239 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Oct 9 07:15:04.015254 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Oct 9 07:15:04.015273 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Oct 9 07:15:04.015288 systemd[1]: Created slice user.slice - User and Session Slice.
Oct 9 07:15:04.015302 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 9 07:15:04.015317 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 9 07:15:04.015332 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Oct 9 07:15:04.015346 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Oct 9 07:15:04.015363 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Oct 9 07:15:04.015378 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 9 07:15:04.015392 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Oct 9 07:15:04.015410 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 9 07:15:04.015425 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Oct 9 07:15:04.015441 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Oct 9 07:15:04.015455 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Oct 9 07:15:04.015471 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Oct 9 07:15:04.015485 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 9 07:15:04.015502 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 9 07:15:04.015517 systemd[1]: Reached target slices.target - Slice Units.
Oct 9 07:15:04.015532 systemd[1]: Reached target swap.target - Swaps.
Oct 9 07:15:04.015547 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Oct 9 07:15:04.015561 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Oct 9 07:15:04.015576 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 9 07:15:04.015591 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 9 07:15:04.015605 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 9 07:15:04.015620 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Oct 9 07:15:04.015635 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Oct 9 07:15:04.015652 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Oct 9 07:15:04.015666 systemd[1]: Mounting media.mount - External Media Directory...
Oct 9 07:15:04.015682 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 07:15:04.015697 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Oct 9 07:15:04.015712 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Oct 9 07:15:04.015726 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Oct 9 07:15:04.015743 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 9 07:15:04.015758 systemd[1]: Reached target machines.target - Containers.
Oct 9 07:15:04.015775 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Oct 9 07:15:04.015794 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 9 07:15:04.015809 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 9 07:15:04.015824 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Oct 9 07:15:04.015839 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 9 07:15:04.015853 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 9 07:15:04.015868 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 9 07:15:04.015882 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Oct 9 07:15:04.015897 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 9 07:15:04.015915 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 9 07:15:04.015930 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 9 07:15:04.015944 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Oct 9 07:15:04.015959 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Oct 9 07:15:04.015973 systemd[1]: Stopped systemd-fsck-usr.service.
Oct 9 07:15:04.015988 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 9 07:15:04.016004 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 9 07:15:04.018066 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 9 07:15:04.018106 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Oct 9 07:15:04.018123 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 9 07:15:04.018139 systemd[1]: verity-setup.service: Deactivated successfully.
Oct 9 07:15:04.018154 systemd[1]: Stopped verity-setup.service.
Oct 9 07:15:04.018169 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 07:15:04.018184 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Oct 9 07:15:04.018199 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Oct 9 07:15:04.018214 systemd[1]: Mounted media.mount - External Media Directory.
Oct 9 07:15:04.018229 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Oct 9 07:15:04.018246 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Oct 9 07:15:04.018261 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Oct 9 07:15:04.018276 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 9 07:15:04.018291 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 9 07:15:04.018306 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Oct 9 07:15:04.018323 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 9 07:15:04.018338 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 9 07:15:04.018352 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 9 07:15:04.018368 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 9 07:15:04.018385 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 9 07:15:04.018405 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 9 07:15:04.018420 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Oct 9 07:15:04.018435 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 9 07:15:04.018449 kernel: loop: module loaded
Oct 9 07:15:04.018464 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Oct 9 07:15:04.018509 systemd-journald[1077]: Collecting audit messages is disabled.
Oct 9 07:15:04.018538 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 9 07:15:04.018554 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 9 07:15:04.018572 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Oct 9 07:15:04.018587 systemd-journald[1077]: Journal started
Oct 9 07:15:04.018616 systemd-journald[1077]: Runtime Journal (/run/log/journal/6f23bd769364494fa8bfbddef332fbbd) is 4.9M, max 39.3M, 34.4M free.
Oct 9 07:15:04.022090 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Oct 9 07:15:03.640110 systemd[1]: Queued start job for default target multi-user.target.
Oct 9 07:15:03.664669 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Oct 9 07:15:03.665072 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 9 07:15:04.027207 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Oct 9 07:15:04.027265 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 9 07:15:04.042250 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Oct 9 07:15:04.048744 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 9 07:15:04.055076 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Oct 9 07:15:04.063121 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 9 07:15:04.072053 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Oct 9 07:15:04.078060 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 9 07:15:04.080502 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 9 07:15:04.080677 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 9 07:15:04.081521 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Oct 9 07:15:04.083038 kernel: fuse: init (API version 7.39)
Oct 9 07:15:04.085170 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Oct 9 07:15:04.092976 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 9 07:15:04.093201 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Oct 9 07:15:04.128151 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Oct 9 07:15:04.136657 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Oct 9 07:15:04.139126 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 9 07:15:04.140242 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Oct 9 07:15:04.168058 kernel: loop0: detected capacity change from 0 to 8
Oct 9 07:15:04.158987 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Oct 9 07:15:04.160545 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Oct 9 07:15:04.180901 kernel: block loop0: the capability attribute has been deprecated.
Oct 9 07:15:04.180230 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Oct 9 07:15:04.225074 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Oct 9 07:15:04.225271 systemd-journald[1077]: Time spent on flushing to /var/log/journal/6f23bd769364494fa8bfbddef332fbbd is 53.374ms for 937 entries.
Oct 9 07:15:04.225271 systemd-journald[1077]: System Journal (/var/log/journal/6f23bd769364494fa8bfbddef332fbbd) is 8.0M, max 584.8M, 576.8M free.
Oct 9 07:15:04.326277 systemd-journald[1077]: Received client request to flush runtime journal.
Oct 9 07:15:04.326402 kernel: ACPI: bus type drm_connector registered
Oct 9 07:15:04.326437 kernel: loop1: detected capacity change from 0 to 205544
Oct 9 07:15:04.326464 kernel: loop2: detected capacity change from 0 to 80568
Oct 9 07:15:04.211425 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Oct 9 07:15:04.253361 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Oct 9 07:15:04.256557 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 9 07:15:04.258595 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 9 07:15:04.258747 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 9 07:15:04.261519 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 9 07:15:04.267327 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Oct 9 07:15:04.268361 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Oct 9 07:15:04.280212 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Oct 9 07:15:04.306998 udevadm[1145]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Oct 9 07:15:04.328682 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Oct 9 07:15:04.350797 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Oct 9 07:15:04.359468 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 9 07:15:04.390120 kernel: loop3: detected capacity change from 0 to 139904
Oct 9 07:15:04.404365 systemd-tmpfiles[1151]: ACLs are not supported, ignoring.
Oct 9 07:15:04.404388 systemd-tmpfiles[1151]: ACLs are not supported, ignoring.
Oct 9 07:15:04.422084 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 9 07:15:04.472938 kernel: loop4: detected capacity change from 0 to 8
Oct 9 07:15:04.477158 kernel: loop5: detected capacity change from 0 to 205544
Oct 9 07:15:04.806115 kernel: loop6: detected capacity change from 0 to 80568
Oct 9 07:15:05.504287 kernel: loop7: detected capacity change from 0 to 139904
Oct 9 07:15:05.548281 (sd-merge)[1157]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Oct 9 07:15:05.552282 (sd-merge)[1157]: Merged extensions into '/usr'.
Oct 9 07:15:05.564469 systemd[1]: Reloading requested from client PID 1105 ('systemd-sysext') (unit systemd-sysext.service)...
Oct 9 07:15:05.564705 systemd[1]: Reloading...
Oct 9 07:15:05.656937 zram_generator::config[1181]: No configuration found.
Oct 9 07:15:05.894586 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 9 07:15:05.919056 ldconfig[1097]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Oct 9 07:15:05.959077 systemd[1]: Reloading finished in 393 ms.
Oct 9 07:15:05.988904 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Oct 9 07:15:05.995196 systemd[1]: Starting ensure-sysext.service...
Oct 9 07:15:06.003278 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Oct 9 07:15:06.009097 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Oct 9 07:15:06.028381 systemd[1]: Reloading requested from client PID 1236 ('systemctl') (unit ensure-sysext.service)...
Oct 9 07:15:06.028407 systemd[1]: Reloading...
Oct 9 07:15:06.040805 systemd-tmpfiles[1237]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Oct 9 07:15:06.042286 systemd-tmpfiles[1237]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Oct 9 07:15:06.045416 systemd-tmpfiles[1237]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Oct 9 07:15:06.046164 systemd-tmpfiles[1237]: ACLs are not supported, ignoring.
Oct 9 07:15:06.046276 systemd-tmpfiles[1237]: ACLs are not supported, ignoring.
Oct 9 07:15:06.052307 systemd-tmpfiles[1237]: Detected autofs mount point /boot during canonicalization of boot.
Oct 9 07:15:06.052459 systemd-tmpfiles[1237]: Skipping /boot
Oct 9 07:15:06.065860 systemd-tmpfiles[1237]: Detected autofs mount point /boot during canonicalization of boot.
Oct 9 07:15:06.067811 systemd-tmpfiles[1237]: Skipping /boot
Oct 9 07:15:06.113062 zram_generator::config[1263]: No configuration found.
Oct 9 07:15:06.254582 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 9 07:15:06.318121 systemd[1]: Reloading finished in 289 ms.
Oct 9 07:15:06.333473 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Oct 9 07:15:06.334525 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Oct 9 07:15:06.349254 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Oct 9 07:15:06.351772 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Oct 9 07:15:06.359513 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Oct 9 07:15:06.365199 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 9 07:15:06.376256 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 9 07:15:06.381735 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Oct 9 07:15:06.396344 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Oct 9 07:15:06.400469 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 07:15:06.400653 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 9 07:15:06.404813 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 9 07:15:06.409697 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 9 07:15:06.416870 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 9 07:15:06.418175 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 9 07:15:06.418319 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 07:15:06.420276 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 07:15:06.420460 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 9 07:15:06.420631 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 9 07:15:06.420752 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 07:15:06.426985 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 07:15:06.427574 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 9 07:15:06.431446 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 9 07:15:06.433831 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 9 07:15:06.434069 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 07:15:06.434877 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 9 07:15:06.435078 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 9 07:15:06.438650 systemd[1]: Finished ensure-sysext.service.
Oct 9 07:15:06.448435 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Oct 9 07:15:06.451966 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Oct 9 07:15:06.466426 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 9 07:15:06.466635 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 9 07:15:06.467467 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 9 07:15:06.473179 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 9 07:15:06.473351 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 9 07:15:06.474346 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 9 07:15:06.480546 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 9 07:15:06.480773 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 9 07:15:06.485140 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Oct 9 07:15:06.491506 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Oct 9 07:15:06.513631 systemd-udevd[1326]: Using default interface naming scheme 'v255'.
Oct 9 07:15:06.515933 augenrules[1356]: No rules
Oct 9 07:15:06.516117 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Oct 9 07:15:06.518001 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Oct 9 07:15:06.529111 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Oct 9 07:15:06.538944 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Oct 9 07:15:06.540140 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 9 07:15:06.561155 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 9 07:15:06.570162 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 9 07:15:06.611439 systemd-resolved[1325]: Positive Trust Anchors:
Oct 9 07:15:06.611460 systemd-resolved[1325]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 9 07:15:06.611505 systemd-resolved[1325]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Oct 9 07:15:06.616806 systemd-resolved[1325]: Using system hostname 'ci-3975-2-2-4-dcc5873578.novalocal'.
Oct 9 07:15:06.618283 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 9 07:15:06.626854 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 9 07:15:06.658969 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Oct 9 07:15:06.660178 systemd[1]: Reached target time-set.target - System Time Set.
Oct 9 07:15:06.701060 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1371)
Oct 9 07:15:06.706606 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Oct 9 07:15:06.707636 systemd-networkd[1373]: lo: Link UP
Oct 9 07:15:06.707647 systemd-networkd[1373]: lo: Gained carrier
Oct 9 07:15:06.709457 systemd-networkd[1373]: Enumeration completed
Oct 9 07:15:06.709578 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 9 07:15:06.710255 systemd-networkd[1373]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 9 07:15:06.710259 systemd-networkd[1373]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 9 07:15:06.710602 systemd[1]: Reached target network.target - Network.
Oct 9 07:15:06.711569 systemd-networkd[1373]: eth0: Link UP
Oct 9 07:15:06.711582 systemd-networkd[1373]: eth0: Gained carrier
Oct 9 07:15:06.711601 systemd-networkd[1373]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 9 07:15:06.717053 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1386)
Oct 9 07:15:06.719261 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Oct 9 07:15:06.792059 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Oct 9 07:15:06.793159 systemd-networkd[1373]: eth0: DHCPv4 address 172.24.4.220/24, gateway 172.24.4.1 acquired from 172.24.4.1
Oct 9 07:15:06.795414 systemd-timesyncd[1343]: Network configuration changed, trying to establish connection.
Oct 9 07:15:06.808048 systemd-networkd[1373]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 9 07:15:06.811263 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 9 07:15:06.818573 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Oct 9 07:15:06.832175 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Oct 9 07:15:06.847181 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Oct 9 07:15:06.858410 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Oct 9 07:15:06.868080 kernel: ACPI: button: Power Button [PWRF]
Oct 9 07:15:06.873172 kernel: mousedev: PS/2 mouse device common for all mice
Oct 9 07:15:06.884134 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 9 07:15:06.893384 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Oct 9 07:15:06.893442 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Oct 9 07:15:06.898047 kernel: Console: switching to colour dummy device 80x25
Oct 9 07:15:06.905398 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Oct 9 07:15:06.905460 kernel: [drm] features: -context_init
Oct 9 07:15:06.905211 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 9 07:15:06.905633 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 07:15:06.907496 kernel: [drm] number of scanouts: 1
Oct 9 07:15:06.907534 kernel: [drm] number of cap sets: 0
Oct 9 07:15:06.907550 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Oct 9 07:15:06.915036 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Oct 9 07:15:06.915114 kernel: Console: switching to colour frame buffer device 128x48
Oct 9 07:15:06.917340 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 9 07:15:06.922738 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Oct 9 07:15:06.934345 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 9 07:15:06.934635 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 07:15:06.940184 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 9 07:15:06.940817 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Oct 9 07:15:06.945210 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Oct 9 07:15:06.967676 lvm[1411]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 9 07:15:06.997667 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Oct 9 07:15:06.999555 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 9 07:15:07.004222 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Oct 9 07:15:07.011075 lvm[1415]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 9 07:15:07.021476 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 07:15:07.022616 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 9 07:15:07.022880 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Oct 9 07:15:07.023038 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Oct 9 07:15:07.023589 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Oct 9 07:15:07.023838 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Oct 9 07:15:07.024001 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Oct 9 07:15:07.024234 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Oct 9 07:15:07.024982 systemd[1]: Reached target paths.target - Path Units.
Oct 9 07:15:07.025139 systemd[1]: Reached target timers.target - Timer Units.
Oct 9 07:15:07.026786 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Oct 9 07:15:07.028407 systemd[1]: Starting docker.socket - Docker Socket for the API...
Oct 9 07:15:07.038470 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Oct 9 07:15:07.039346 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Oct 9 07:15:07.042483 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Oct 9 07:15:07.044742 systemd[1]: Reached target sockets.target - Socket Units.
Oct 9 07:15:07.045958 systemd[1]: Reached target basic.target - Basic System.
Oct 9 07:15:07.048288 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Oct 9 07:15:07.049696 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Oct 9 07:15:07.056120 systemd[1]: Starting containerd.service - containerd container runtime...
Oct 9 07:15:07.060118 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Oct 9 07:15:07.068336 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Oct 9 07:15:07.072688 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Oct 9 07:15:07.081312 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Oct 9 07:15:07.081943 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Oct 9 07:15:07.087273 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Oct 9 07:15:07.092438 jq[1425]: false
Oct 9 07:15:07.100192 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Oct 9 07:15:07.107729 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Oct 9 07:15:07.118302 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Oct 9 07:15:07.118511 extend-filesystems[1428]: Found loop4
Oct 9 07:15:07.121936 extend-filesystems[1428]: Found loop5
Oct 9 07:15:07.121936 extend-filesystems[1428]: Found loop6
Oct 9 07:15:07.121936 extend-filesystems[1428]: Found loop7
Oct 9 07:15:07.121936 extend-filesystems[1428]: Found vda
Oct 9 07:15:07.121936 extend-filesystems[1428]: Found vda1
Oct 9 07:15:07.121936 extend-filesystems[1428]: Found vda2
Oct 9 07:15:07.121936 extend-filesystems[1428]: Found vda3
Oct 9 07:15:07.121936 extend-filesystems[1428]: Found usr
Oct 9 07:15:07.121936 extend-filesystems[1428]: Found vda4
Oct 9 07:15:07.121936 extend-filesystems[1428]: Found vda6
Oct 9 07:15:07.121936 extend-filesystems[1428]: Found vda7
Oct 9 07:15:07.121936 extend-filesystems[1428]: Found vda9
Oct 9 07:15:07.121936 extend-filesystems[1428]: Checking size of /dev/vda9
Oct 9 07:15:07.240900 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 4635643 blocks
Oct 9 07:15:07.139056 systemd[1]: Starting systemd-logind.service - User Login Management...
Oct 9 07:15:07.241403 extend-filesystems[1428]: Resized partition /dev/vda9
Oct 9 07:15:07.146151 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Oct 9 07:15:07.246683 extend-filesystems[1454]: resize2fs 1.47.0 (5-Feb-2023)
Oct 9 07:15:07.146763 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Oct 9 07:15:07.272003 dbus-daemon[1424]: [system] SELinux support is enabled
Oct 9 07:15:07.285144 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1381)
Oct 9 07:15:07.148322 systemd[1]: Starting update-engine.service - Update Engine...
Oct 9 07:15:07.166841 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Oct 9 07:15:07.285576 update_engine[1441]: I1009 07:15:07.200299 1441 main.cc:92] Flatcar Update Engine starting
Oct 9 07:15:07.187087 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Oct 9 07:15:07.304493 update_engine[1441]: I1009 07:15:07.297247 1441 update_check_scheduler.cc:74] Next update check in 9m3s
Oct 9 07:15:07.304540 jq[1446]: true
Oct 9 07:15:07.187969 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Oct 9 07:15:07.304792 jq[1451]: true
Oct 9 07:15:07.199855 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Oct 9 07:15:07.202514 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Oct 9 07:15:07.212072 systemd[1]: motdgen.service: Deactivated successfully.
Oct 9 07:15:07.212254 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Oct 9 07:15:07.278097 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Oct 9 07:15:07.279604 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Oct 9 07:15:07.284783 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Oct 9 07:15:07.284816 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Oct 9 07:15:07.290426 (ntainerd)[1457]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Oct 9 07:15:07.294181 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Oct 9 07:15:07.294210 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Oct 9 07:15:07.302862 systemd[1]: Started update-engine.service - Update Engine.
Oct 9 07:15:07.341138 tar[1450]: linux-amd64/helm
Oct 9 07:15:07.339358 systemd-logind[1438]: New seat seat0.
Oct 9 07:15:07.457204 kernel: EXT4-fs (vda9): resized filesystem to 4635643
Oct 9 07:15:07.340985 systemd-logind[1438]: Watching system buttons on /dev/input/event1 (Power Button)
Oct 9 07:15:07.341002 systemd-logind[1438]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Oct 9 07:15:07.347625 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Oct 9 07:15:07.348338 systemd[1]: Started systemd-logind.service - User Login Management.
Oct 9 07:15:07.444164 locksmithd[1468]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Oct 9 07:15:07.469270 extend-filesystems[1454]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Oct 9 07:15:07.469270 extend-filesystems[1454]: old_desc_blocks = 1, new_desc_blocks = 3
Oct 9 07:15:07.469270 extend-filesystems[1454]: The filesystem on /dev/vda9 is now 4635643 (4k) blocks long.
Oct 9 07:15:07.497422 extend-filesystems[1428]: Resized filesystem in /dev/vda9
Oct 9 07:15:07.503218 bash[1481]: Updated "/home/core/.ssh/authorized_keys"
Oct 9 07:15:07.472421 systemd[1]: extend-filesystems.service: Deactivated successfully.
Oct 9 07:15:07.472703 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Oct 9 07:15:07.494190 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Oct 9 07:15:07.507362 systemd[1]: Starting sshkeys.service...
Oct 9 07:15:07.536374 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Oct 9 07:15:07.548383 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Oct 9 07:15:07.897887 containerd[1457]: time="2024-10-09T07:15:07.897776956Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17
Oct 9 07:15:07.974234 containerd[1457]: time="2024-10-09T07:15:07.974150975Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Oct 9 07:15:07.978344 containerd[1457]: time="2024-10-09T07:15:07.978044377Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Oct 9 07:15:07.982267 containerd[1457]: time="2024-10-09T07:15:07.982228134Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.54-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Oct 9 07:15:07.983542 containerd[1457]: time="2024-10-09T07:15:07.982338681Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Oct 9 07:15:07.983542 containerd[1457]: time="2024-10-09T07:15:07.982611473Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct 9 07:15:07.983542 containerd[1457]: time="2024-10-09T07:15:07.982632121Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Oct 9 07:15:07.983542 containerd[1457]: time="2024-10-09T07:15:07.982736768Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Oct 9 07:15:07.983542 containerd[1457]: time="2024-10-09T07:15:07.982812359Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Oct 9 07:15:07.983542 containerd[1457]: time="2024-10-09T07:15:07.982829512Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Oct 9 07:15:07.983542 containerd[1457]: time="2024-10-09T07:15:07.982920532Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Oct 9 07:15:07.983542 containerd[1457]: time="2024-10-09T07:15:07.983179889Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Oct 9 07:15:07.983542 containerd[1457]: time="2024-10-09T07:15:07.983200938Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Oct 9 07:15:07.983542 containerd[1457]: time="2024-10-09T07:15:07.983213121Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Oct 9 07:15:07.983542 containerd[1457]: time="2024-10-09T07:15:07.983309201Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct 9 07:15:07.983805 containerd[1457]: time="2024-10-09T07:15:07.983325993Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Oct 9 07:15:07.983805 containerd[1457]: time="2024-10-09T07:15:07.983381166Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Oct 9 07:15:07.983805 containerd[1457]: time="2024-10-09T07:15:07.983396405Z" level=info msg="metadata content store policy set" policy=shared
Oct 9 07:15:07.991298 containerd[1457]: time="2024-10-09T07:15:07.991188699Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Oct 9 07:15:07.991298 containerd[1457]: time="2024-10-09T07:15:07.991233964Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Oct 9 07:15:07.991298 containerd[1457]: time="2024-10-09T07:15:07.991260354Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Oct 9 07:15:07.991468 containerd[1457]: time="2024-10-09T07:15:07.991451622Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Oct 9 07:15:07.991579 containerd[1457]: time="2024-10-09T07:15:07.991563031Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Oct 9 07:15:07.992220 containerd[1457]: time="2024-10-09T07:15:07.991652469Z" level=info msg="NRI interface is disabled by configuration."
Oct 9 07:15:07.992220 containerd[1457]: time="2024-10-09T07:15:07.991676534Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Oct 9 07:15:07.992220 containerd[1457]: time="2024-10-09T07:15:07.991819893Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Oct 9 07:15:07.992220 containerd[1457]: time="2024-10-09T07:15:07.991839750Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Oct 9 07:15:07.992220 containerd[1457]: time="2024-10-09T07:15:07.991854698Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Oct 9 07:15:07.992220 containerd[1457]: time="2024-10-09T07:15:07.991870007Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Oct 9 07:15:07.992220 containerd[1457]: time="2024-10-09T07:15:07.991887360Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Oct 9 07:15:07.992220 containerd[1457]: time="2024-10-09T07:15:07.991905604Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Oct 9 07:15:07.992220 containerd[1457]: time="2024-10-09T07:15:07.991920712Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Oct 9 07:15:07.992220 containerd[1457]: time="2024-10-09T07:15:07.991935710Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Oct 9 07:15:07.992220 containerd[1457]: time="2024-10-09T07:15:07.991952081Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Oct 9 07:15:07.992220 containerd[1457]: time="2024-10-09T07:15:07.991967380Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Oct 9 07:15:07.992220 containerd[1457]: time="2024-10-09T07:15:07.991982458Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Oct 9 07:15:07.992220 containerd[1457]: time="2024-10-09T07:15:07.991995913Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Oct 9 07:15:07.992514 containerd[1457]: time="2024-10-09T07:15:07.992132339Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Oct 9 07:15:07.995038 containerd[1457]: time="2024-10-09T07:15:07.994129635Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Oct 9 07:15:07.995038 containerd[1457]: time="2024-10-09T07:15:07.994168177Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Oct 9 07:15:07.995038 containerd[1457]: time="2024-10-09T07:15:07.994193395Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Oct 9 07:15:07.995038 containerd[1457]: time="2024-10-09T07:15:07.994220796Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Oct 9 07:15:07.995038 containerd[1457]: time="2024-10-09T07:15:07.994294414Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Oct 9 07:15:07.995038 containerd[1457]: time="2024-10-09T07:15:07.994312167Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Oct 9 07:15:07.995038 containerd[1457]: time="2024-10-09T07:15:07.994326093Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Oct 9 07:15:07.995038 containerd[1457]: time="2024-10-09T07:15:07.994409209Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Oct 9 07:15:07.995038 containerd[1457]: time="2024-10-09T07:15:07.994428596Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Oct 9 07:15:07.995038 containerd[1457]: time="2024-10-09T07:15:07.994443213Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Oct 9 07:15:07.995038 containerd[1457]: time="2024-10-09T07:15:07.994458692Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Oct 9 07:15:07.995038 containerd[1457]: time="2024-10-09T07:15:07.994472368Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Oct 9 07:15:07.995038 containerd[1457]: time="2024-10-09T07:15:07.994487276Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Oct 9 07:15:07.995038 containerd[1457]: time="2024-10-09T07:15:07.994697931Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Oct 9 07:15:07.995375 containerd[1457]: time="2024-10-09T07:15:07.994751291Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Oct 9 07:15:07.995375 containerd[1457]: time="2024-10-09T07:15:07.994767171Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Oct 9 07:15:07.995375 containerd[1457]: time="2024-10-09T07:15:07.994781938Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Oct 9 07:15:07.995375 containerd[1457]: time="2024-10-09T07:15:07.994798950Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Oct 9 07:15:07.995375 containerd[1457]: time="2024-10-09T07:15:07.994815321Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Oct 9 07:15:07.995375 containerd[1457]: time="2024-10-09T07:15:07.994829538Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Oct 9 07:15:07.995375 containerd[1457]: time="2024-10-09T07:15:07.994841971Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Oct 9 07:15:07.996340 containerd[1457]: time="2024-10-09T07:15:07.996267134Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Oct 9 07:15:07.998595 containerd[1457]: time="2024-10-09T07:15:07.998052783Z" level=info msg="Connect containerd service"
Oct 9 07:15:07.998595 containerd[1457]: time="2024-10-09T07:15:07.998102807Z" level=info msg="using legacy CRI server"
Oct 9 07:15:07.998595 containerd[1457]: time="2024-10-09T07:15:07.998111593Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Oct 9 07:15:07.998595 containerd[1457]: time="2024-10-09T07:15:07.998254171Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Oct 9 07:15:07.999160 containerd[1457]: time="2024-10-09T07:15:07.999132668Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Oct 9 07:15:07.999293 containerd[1457]: time="2024-10-09T07:15:07.999274544Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Oct 9 07:15:08.002043 containerd[1457]: time="2024-10-09T07:15:08.001060303Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Oct 9 07:15:08.002043 containerd[1457]: time="2024-10-09T07:15:08.001085391Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Oct 9 07:15:08.002043 containerd[1457]: time="2024-10-09T07:15:07.999375614Z" level=info msg="Start subscribing containerd event"
Oct 9 07:15:08.002043 containerd[1457]: time="2024-10-09T07:15:08.001201458Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Oct 9 07:15:08.002043 containerd[1457]: time="2024-10-09T07:15:08.001693752Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Oct 9 07:15:08.002043 containerd[1457]: time="2024-10-09T07:15:08.001741922Z" level=info msg=serving... address=/run/containerd/containerd.sock
Oct 9 07:15:08.002260 containerd[1457]: time="2024-10-09T07:15:08.002242170Z" level=info msg="Start recovering state"
Oct 9 07:15:08.002382 containerd[1457]: time="2024-10-09T07:15:08.002366273Z" level=info msg="Start event monitor"
Oct 9 07:15:08.002450 containerd[1457]: time="2024-10-09T07:15:08.002436575Z" level=info msg="Start snapshots syncer"
Oct 9 07:15:08.002505 containerd[1457]: time="2024-10-09T07:15:08.002492640Z" level=info msg="Start cni network conf syncer for default"
Oct 9 07:15:08.002557 containerd[1457]: time="2024-10-09T07:15:08.002545259Z" level=info msg="Start streaming server"
Oct 9 07:15:08.002676 containerd[1457]: time="2024-10-09T07:15:08.002660505Z" level=info msg="containerd successfully booted in 0.110294s"
Oct 9 07:15:08.002785 systemd[1]: Started containerd.service - containerd container runtime.
Oct 9 07:15:08.108590 tar[1450]: linux-amd64/LICENSE
Oct 9 07:15:08.108706 tar[1450]: linux-amd64/README.md
Oct 9 07:15:08.125949 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Oct 9 07:15:08.279142 sshd_keygen[1458]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Oct 9 07:15:08.308118 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Oct 9 07:15:08.319609 systemd[1]: Starting issuegen.service - Generate /run/issue...
Oct 9 07:15:08.326159 systemd[1]: Started sshd@0-172.24.4.220:22-172.24.4.1:54200.service - OpenSSH per-connection server daemon (172.24.4.1:54200).
Oct 9 07:15:08.327875 systemd[1]: issuegen.service: Deactivated successfully.
Oct 9 07:15:08.329614 systemd[1]: Finished issuegen.service - Generate /run/issue.
Oct 9 07:15:08.343406 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Oct 9 07:15:08.372148 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Oct 9 07:15:08.384856 systemd[1]: Started getty@tty1.service - Getty on tty1.
Oct 9 07:15:08.395670 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Oct 9 07:15:08.396746 systemd[1]: Reached target getty.target - Login Prompts.
Oct 9 07:15:08.601410 systemd-networkd[1373]: eth0: Gained IPv6LL
Oct 9 07:15:08.602665 systemd-timesyncd[1343]: Network configuration changed, trying to establish connection.
Oct 9 07:15:08.605506 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Oct 9 07:15:08.613914 systemd[1]: Reached target network-online.target - Network is Online.
Oct 9 07:15:08.625627 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 07:15:08.642583 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Oct 9 07:15:08.687563 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Oct 9 07:15:09.286323 sshd[1515]: Accepted publickey for core from 172.24.4.1 port 54200 ssh2: RSA SHA256:iTqmmSA9RcIkWmF7myyFAqWL8kdaKMdVpBUk8UaNQPM
Oct 9 07:15:09.290770 sshd[1515]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 9 07:15:09.323266 systemd-logind[1438]: New session 1 of user core.
Oct 9 07:15:09.327459 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Oct 9 07:15:09.347779 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Oct 9 07:15:09.360626 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Oct 9 07:15:09.374462 systemd[1]: Starting user@500.service - User Manager for UID 500...
Oct 9 07:15:09.379687 (systemd)[1538]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Oct 9 07:15:09.502666 systemd[1538]: Queued start job for default target default.target.
Oct 9 07:15:09.509205 systemd[1538]: Created slice app.slice - User Application Slice.
Oct 9 07:15:09.509328 systemd[1538]: Reached target paths.target - Paths.
Oct 9 07:15:09.509416 systemd[1538]: Reached target timers.target - Timers.
Oct 9 07:15:09.514143 systemd[1538]: Starting dbus.socket - D-Bus User Message Bus Socket...
Oct 9 07:15:09.525351 systemd[1538]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Oct 9 07:15:09.525633 systemd[1538]: Reached target sockets.target - Sockets.
Oct 9 07:15:09.525657 systemd[1538]: Reached target basic.target - Basic System.
Oct 9 07:15:09.525707 systemd[1538]: Reached target default.target - Main User Target.
Oct 9 07:15:09.525742 systemd[1538]: Startup finished in 139ms.
Oct 9 07:15:09.525837 systemd[1]: Started user@500.service - User Manager for UID 500.
Oct 9 07:15:09.541582 systemd[1]: Started session-1.scope - Session 1 of User core.
Oct 9 07:15:10.060175 systemd[1]: Started sshd@1-172.24.4.220:22-172.24.4.1:59706.service - OpenSSH per-connection server daemon (172.24.4.1:59706).
Oct 9 07:15:10.522304 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 07:15:10.540003 (kubelet)[1556]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 9 07:15:11.966817 sshd[1549]: Accepted publickey for core from 172.24.4.1 port 59706 ssh2: RSA SHA256:iTqmmSA9RcIkWmF7myyFAqWL8kdaKMdVpBUk8UaNQPM
Oct 9 07:15:11.971444 sshd[1549]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 9 07:15:11.983341 systemd-logind[1438]: New session 2 of user core.
Oct 9 07:15:11.992496 systemd[1]: Started session-2.scope - Session 2 of User core.
Oct 9 07:15:12.016780 kubelet[1556]: E1009 07:15:12.016740 1556 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 9 07:15:12.020689 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 9 07:15:12.020993 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 9 07:15:12.021583 systemd[1]: kubelet.service: Consumed 2.084s CPU time.
Oct 9 07:15:12.734390 sshd[1549]: pam_unix(sshd:session): session closed for user core
Oct 9 07:15:12.743866 systemd[1]: sshd@1-172.24.4.220:22-172.24.4.1:59706.service: Deactivated successfully.
Oct 9 07:15:12.747321 systemd[1]: session-2.scope: Deactivated successfully.
Oct 9 07:15:12.749212 systemd-logind[1438]: Session 2 logged out. Waiting for processes to exit.
Oct 9 07:15:12.760865 systemd[1]: Started sshd@2-172.24.4.220:22-172.24.4.1:59718.service - OpenSSH per-connection server daemon (172.24.4.1:59718).
Oct 9 07:15:12.772984 systemd-logind[1438]: Removed session 2.
Oct 9 07:15:13.448161 login[1523]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Oct 9 07:15:13.451567 login[1522]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Oct 9 07:15:13.459964 systemd-logind[1438]: New session 3 of user core.
Oct 9 07:15:13.477561 systemd[1]: Started session-3.scope - Session 3 of User core.
Oct 9 07:15:13.484503 systemd-logind[1438]: New session 4 of user core.
Oct 9 07:15:13.493669 systemd[1]: Started session-4.scope - Session 4 of User core.
Oct 9 07:15:14.170518 coreos-metadata[1423]: Oct 09 07:15:14.170 WARN failed to locate config-drive, using the metadata service API instead
Oct 9 07:15:14.194910 sshd[1571]: Accepted publickey for core from 172.24.4.1 port 59718 ssh2: RSA SHA256:iTqmmSA9RcIkWmF7myyFAqWL8kdaKMdVpBUk8UaNQPM
Oct 9 07:15:14.197769 sshd[1571]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 9 07:15:14.208946 systemd-logind[1438]: New session 5 of user core.
Oct 9 07:15:14.218413 systemd[1]: Started session-5.scope - Session 5 of User core.
Oct 9 07:15:14.254616 coreos-metadata[1423]: Oct 09 07:15:14.254 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1
Oct 9 07:15:14.644755 coreos-metadata[1492]: Oct 09 07:15:14.644 WARN failed to locate config-drive, using the metadata service API instead
Oct 9 07:15:14.661099 coreos-metadata[1423]: Oct 09 07:15:14.660 INFO Fetch successful
Oct 9 07:15:14.661099 coreos-metadata[1423]: Oct 09 07:15:14.661 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Oct 9 07:15:14.676097 coreos-metadata[1423]: Oct 09 07:15:14.675 INFO Fetch successful
Oct 9 07:15:14.676340 coreos-metadata[1423]: Oct 09 07:15:14.676 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
Oct 9 07:15:14.687074 coreos-metadata[1492]: Oct 09 07:15:14.686 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Oct 9 07:15:14.695167 coreos-metadata[1423]: Oct 09 07:15:14.695 INFO Fetch successful
Oct 9 07:15:14.695167 coreos-metadata[1423]: Oct 09 07:15:14.695 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
Oct 9 07:15:14.703337 coreos-metadata[1492]: Oct 09 07:15:14.703 INFO Fetch successful
Oct 9 07:15:14.703337 coreos-metadata[1492]: Oct 09 07:15:14.703 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Oct 9 07:15:14.711598 coreos-metadata[1423]: Oct 09 07:15:14.711 INFO Fetch successful
Oct 9 07:15:14.711598 coreos-metadata[1423]: Oct 09 07:15:14.711 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
Oct 9 07:15:14.720829 coreos-metadata[1492]: Oct 09 07:15:14.720 INFO Fetch successful
Oct 9 07:15:14.727516 coreos-metadata[1423]: Oct 09 07:15:14.727 INFO Fetch successful
Oct 9 07:15:14.727516 coreos-metadata[1423]: Oct 09 07:15:14.727 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
Oct 9 07:15:14.730195 unknown[1492]: wrote ssh authorized keys file for user: core
Oct 9 07:15:14.744635 coreos-metadata[1423]: Oct 09 07:15:14.744 INFO Fetch successful
Oct 9 07:15:14.775079 update-ssh-keys[1597]: Updated "/home/core/.ssh/authorized_keys"
Oct 9 07:15:14.774433 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Oct 9 07:15:14.783012 systemd[1]: Finished sshkeys.service.
Oct 9 07:15:14.806534 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Oct 9 07:15:14.807594 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Oct 9 07:15:14.807974 systemd[1]: Reached target multi-user.target - Multi-User System.
Oct 9 07:15:14.812306 systemd[1]: Startup finished in 1.156s (kernel) + 15.109s (initrd) + 11.961s (userspace) = 28.228s.
Oct 9 07:15:14.885462 sshd[1571]: pam_unix(sshd:session): session closed for user core
Oct 9 07:15:14.891056 systemd-logind[1438]: Session 5 logged out. Waiting for processes to exit.
Oct 9 07:15:14.891638 systemd[1]: sshd@2-172.24.4.220:22-172.24.4.1:59718.service: Deactivated successfully.
Oct 9 07:15:14.894658 systemd[1]: session-5.scope: Deactivated successfully.
Oct 9 07:15:14.898524 systemd-logind[1438]: Removed session 5.
Oct 9 07:15:22.107429 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Oct 9 07:15:22.117428 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 07:15:22.500922 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 07:15:22.517966 (kubelet)[1616]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 9 07:15:22.713065 kubelet[1616]: E1009 07:15:22.712887 1616 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 9 07:15:22.720344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 9 07:15:22.720764 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 9 07:15:24.906557 systemd[1]: Started sshd@3-172.24.4.220:22-172.24.4.1:60642.service - OpenSSH per-connection server daemon (172.24.4.1:60642).
Oct 9 07:15:26.191933 sshd[1624]: Accepted publickey for core from 172.24.4.1 port 60642 ssh2: RSA SHA256:iTqmmSA9RcIkWmF7myyFAqWL8kdaKMdVpBUk8UaNQPM
Oct 9 07:15:26.194519 sshd[1624]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 9 07:15:26.204897 systemd-logind[1438]: New session 6 of user core.
Oct 9 07:15:26.211304 systemd[1]: Started session-6.scope - Session 6 of User core.
Oct 9 07:15:26.964155 sshd[1624]: pam_unix(sshd:session): session closed for user core
Oct 9 07:15:26.976367 systemd[1]: sshd@3-172.24.4.220:22-172.24.4.1:60642.service: Deactivated successfully.
Oct 9 07:15:26.981174 systemd[1]: session-6.scope: Deactivated successfully.
Oct 9 07:15:26.984342 systemd-logind[1438]: Session 6 logged out. Waiting for processes to exit.
Oct 9 07:15:26.992610 systemd[1]: Started sshd@4-172.24.4.220:22-172.24.4.1:60646.service - OpenSSH per-connection server daemon (172.24.4.1:60646).
Oct 9 07:15:26.994899 systemd-logind[1438]: Removed session 6.
Oct 9 07:15:28.231157 sshd[1631]: Accepted publickey for core from 172.24.4.1 port 60646 ssh2: RSA SHA256:iTqmmSA9RcIkWmF7myyFAqWL8kdaKMdVpBUk8UaNQPM
Oct 9 07:15:28.234423 sshd[1631]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 9 07:15:28.244139 systemd-logind[1438]: New session 7 of user core.
Oct 9 07:15:28.250274 systemd[1]: Started session-7.scope - Session 7 of User core.
Oct 9 07:15:28.955704 sshd[1631]: pam_unix(sshd:session): session closed for user core
Oct 9 07:15:28.966143 systemd[1]: sshd@4-172.24.4.220:22-172.24.4.1:60646.service: Deactivated successfully.
Oct 9 07:15:28.969372 systemd[1]: session-7.scope: Deactivated successfully.
Oct 9 07:15:28.970931 systemd-logind[1438]: Session 7 logged out. Waiting for processes to exit.
Oct 9 07:15:28.979593 systemd[1]: Started sshd@5-172.24.4.220:22-172.24.4.1:60654.service - OpenSSH per-connection server daemon (172.24.4.1:60654).
Oct 9 07:15:28.982957 systemd-logind[1438]: Removed session 7.
Oct 9 07:15:30.564225 sshd[1638]: Accepted publickey for core from 172.24.4.1 port 60654 ssh2: RSA SHA256:iTqmmSA9RcIkWmF7myyFAqWL8kdaKMdVpBUk8UaNQPM
Oct 9 07:15:30.566912 sshd[1638]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 9 07:15:30.577471 systemd-logind[1438]: New session 8 of user core.
Oct 9 07:15:30.586329 systemd[1]: Started session-8.scope - Session 8 of User core.
Oct 9 07:15:31.369383 sshd[1638]: pam_unix(sshd:session): session closed for user core
Oct 9 07:15:31.381315 systemd[1]: sshd@5-172.24.4.220:22-172.24.4.1:60654.service: Deactivated successfully.
Oct 9 07:15:31.385220 systemd[1]: session-8.scope: Deactivated successfully.
Oct 9 07:15:31.388123 systemd-logind[1438]: Session 8 logged out. Waiting for processes to exit.
Oct 9 07:15:31.393575 systemd[1]: Started sshd@6-172.24.4.220:22-172.24.4.1:60658.service - OpenSSH per-connection server daemon (172.24.4.1:60658).
Oct 9 07:15:31.396826 systemd-logind[1438]: Removed session 8.
Oct 9 07:15:32.770335 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Oct 9 07:15:32.789224 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 07:15:33.108386 sshd[1645]: Accepted publickey for core from 172.24.4.1 port 60658 ssh2: RSA SHA256:iTqmmSA9RcIkWmF7myyFAqWL8kdaKMdVpBUk8UaNQPM
Oct 9 07:15:33.111339 sshd[1645]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 9 07:15:33.126136 systemd-logind[1438]: New session 9 of user core.
Oct 9 07:15:33.130710 systemd[1]: Started session-9.scope - Session 9 of User core.
Oct 9 07:15:33.188354 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 07:15:33.188767 (kubelet)[1656]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 9 07:15:33.391639 kubelet[1656]: E1009 07:15:33.391409 1656 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 9 07:15:33.395450 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 9 07:15:33.395737 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 9 07:15:33.577142 sudo[1663]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Oct 9 07:15:33.577762 sudo[1663]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Oct 9 07:15:33.596318 sudo[1663]: pam_unix(sudo:session): session closed for user root
Oct 9 07:15:33.875525 sshd[1645]: pam_unix(sshd:session): session closed for user core
Oct 9 07:15:33.889928 systemd[1]: sshd@6-172.24.4.220:22-172.24.4.1:60658.service: Deactivated successfully.
Oct 9 07:15:33.893447 systemd[1]: session-9.scope: Deactivated successfully.
Oct 9 07:15:33.897334 systemd-logind[1438]: Session 9 logged out. Waiting for processes to exit.
Oct 9 07:15:33.907556 systemd[1]: Started sshd@7-172.24.4.220:22-172.24.4.1:60668.service - OpenSSH per-connection server daemon (172.24.4.1:60668).
Oct 9 07:15:33.910349 systemd-logind[1438]: Removed session 9.
Oct 9 07:15:35.542983 sshd[1668]: Accepted publickey for core from 172.24.4.1 port 60668 ssh2: RSA SHA256:iTqmmSA9RcIkWmF7myyFAqWL8kdaKMdVpBUk8UaNQPM
Oct 9 07:15:35.546427 sshd[1668]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 9 07:15:35.558605 systemd-logind[1438]: New session 10 of user core.
Oct 9 07:15:35.570497 systemd[1]: Started session-10.scope - Session 10 of User core.
Oct 9 07:15:35.913256 sudo[1672]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Oct 9 07:15:35.913865 sudo[1672]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Oct 9 07:15:35.923140 sudo[1672]: pam_unix(sudo:session): session closed for user root
Oct 9 07:15:35.939259 sudo[1671]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Oct 9 07:15:35.940182 sudo[1671]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Oct 9 07:15:35.980807 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Oct 9 07:15:35.982619 auditctl[1675]: No rules
Oct 9 07:15:35.983227 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 9 07:15:35.983620 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Oct 9 07:15:35.988754 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Oct 9 07:15:36.046665 augenrules[1693]: No rules
Oct 9 07:15:36.048986 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Oct 9 07:15:36.052277 sudo[1671]: pam_unix(sudo:session): session closed for user root
Oct 9 07:15:36.286950 sshd[1668]: pam_unix(sshd:session): session closed for user core
Oct 9 07:15:36.298725 systemd[1]: sshd@7-172.24.4.220:22-172.24.4.1:60668.service: Deactivated successfully.
Oct 9 07:15:36.301653 systemd[1]: session-10.scope: Deactivated successfully.
Oct 9 07:15:36.304918 systemd-logind[1438]: Session 10 logged out. Waiting for processes to exit.
Oct 9 07:15:36.314563 systemd[1]: Started sshd@8-172.24.4.220:22-172.24.4.1:46420.service - OpenSSH per-connection server daemon (172.24.4.1:46420).
Oct 9 07:15:36.317120 systemd-logind[1438]: Removed session 10.
Oct 9 07:15:37.638521 sshd[1701]: Accepted publickey for core from 172.24.4.1 port 46420 ssh2: RSA SHA256:iTqmmSA9RcIkWmF7myyFAqWL8kdaKMdVpBUk8UaNQPM
Oct 9 07:15:37.641333 sshd[1701]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 9 07:15:37.653008 systemd-logind[1438]: New session 11 of user core.
Oct 9 07:15:37.662396 systemd[1]: Started session-11.scope - Session 11 of User core.
Oct 9 07:15:38.230347 sudo[1704]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Oct 9 07:15:38.230946 sudo[1704]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Oct 9 07:15:38.508529 systemd[1]: Starting docker.service - Docker Application Container Engine...
Oct 9 07:15:38.508783 (dockerd)[1713]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Oct 9 07:15:38.639129 systemd-timesyncd[1343]: Contacted time server 45.13.105.44:123 (2.flatcar.pool.ntp.org).
Oct 9 07:15:38.639225 systemd-timesyncd[1343]: Initial clock synchronization to Wed 2024-10-09 07:15:38.688726 UTC.
Oct 9 07:15:39.080746 dockerd[1713]: time="2024-10-09T07:15:39.080685649Z" level=info msg="Starting up"
Oct 9 07:15:39.120797 systemd[1]: var-lib-docker-metacopy\x2dcheck4082479782-merged.mount: Deactivated successfully.
Oct 9 07:15:39.151996 dockerd[1713]: time="2024-10-09T07:15:39.151587659Z" level=info msg="Loading containers: start."
Oct 9 07:15:39.351089 kernel: Initializing XFRM netlink socket
Oct 9 07:15:39.503735 systemd-networkd[1373]: docker0: Link UP
Oct 9 07:15:39.517321 dockerd[1713]: time="2024-10-09T07:15:39.517276675Z" level=info msg="Loading containers: done."
Oct 9 07:15:39.610041 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3453447581-merged.mount: Deactivated successfully.
Oct 9 07:15:39.614667 dockerd[1713]: time="2024-10-09T07:15:39.614146431Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Oct 9 07:15:39.614667 dockerd[1713]: time="2024-10-09T07:15:39.614328358Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9
Oct 9 07:15:39.614667 dockerd[1713]: time="2024-10-09T07:15:39.614438837Z" level=info msg="Daemon has completed initialization"
Oct 9 07:15:39.650088 dockerd[1713]: time="2024-10-09T07:15:39.649698535Z" level=info msg="API listen on /run/docker.sock"
Oct 9 07:15:39.650116 systemd[1]: Started docker.service - Docker Application Container Engine.
Oct 9 07:15:40.672332 containerd[1457]: time="2024-10-09T07:15:40.672211042Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.0\""
Oct 9 07:15:41.443384 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1532634214.mount: Deactivated successfully.
Oct 9 07:15:43.530767 containerd[1457]: time="2024-10-09T07:15:43.530678945Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:15:43.532359 containerd[1457]: time="2024-10-09T07:15:43.532278136Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.0: active requests=0, bytes read=28066629"
Oct 9 07:15:43.533172 containerd[1457]: time="2024-10-09T07:15:43.533091294Z" level=info msg="ImageCreate event name:\"sha256:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:15:43.538059 containerd[1457]: time="2024-10-09T07:15:43.537832385Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:15:43.539261 containerd[1457]: time="2024-10-09T07:15:43.538671113Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.0\" with image id \"sha256:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.0\", repo digest \"registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf\", size \"28063421\" in 2.866371904s"
Oct 9 07:15:43.539261 containerd[1457]: time="2024-10-09T07:15:43.538711004Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.0\" returns image reference \"sha256:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3\""
Oct 9 07:15:43.540801 containerd[1457]: time="2024-10-09T07:15:43.540723512Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.0\""
Oct 9 07:15:43.606872 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Oct 9 07:15:43.613212 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 07:15:43.757409 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 07:15:43.761817 (kubelet)[1903]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 9 07:15:44.273873 kubelet[1903]: E1009 07:15:44.273800 1903 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 9 07:15:44.277476 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 9 07:15:44.277780 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 9 07:15:47.430532 containerd[1457]: time="2024-10-09T07:15:47.430395610Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:15:47.451147 containerd[1457]: time="2024-10-09T07:15:47.450993845Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.0: active requests=0, bytes read=24690930"
Oct 9 07:15:47.477179 containerd[1457]: time="2024-10-09T07:15:47.476974798Z" level=info msg="ImageCreate event name:\"sha256:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:15:47.487299 containerd[1457]: time="2024-10-09T07:15:47.487181611Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:15:47.490396 containerd[1457]: time="2024-10-09T07:15:47.490104249Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.0\" with image id \"sha256:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.0\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d\", size \"26240868\" in 3.949335849s"
Oct 9 07:15:47.490396 containerd[1457]: time="2024-10-09T07:15:47.490193813Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.0\" returns image reference \"sha256:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1\""
Oct 9 07:15:47.492115 containerd[1457]: time="2024-10-09T07:15:47.491684236Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.0\""
Oct 9 07:15:49.232993 containerd[1457]: time="2024-10-09T07:15:49.232809187Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:15:49.234228 containerd[1457]: time="2024-10-09T07:15:49.233661365Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.0: active requests=0, bytes read=18646766"
Oct 9 07:15:49.236236 containerd[1457]: time="2024-10-09T07:15:49.236101926Z" level=info msg="ImageCreate event name:\"sha256:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:15:49.240500 containerd[1457]: time="2024-10-09T07:15:49.240444228Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:15:49.242070 containerd[1457]: time="2024-10-09T07:15:49.241776200Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.0\" with image id \"sha256:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.0\", repo digest \"registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808\", size \"20196722\" in 1.75003104s"
Oct 9 07:15:49.242070 containerd[1457]: time="2024-10-09T07:15:49.241812818Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.0\" returns image reference \"sha256:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94\""
Oct 9 07:15:49.242238 containerd[1457]: time="2024-10-09T07:15:49.242205315Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.0\""
Oct 9 07:15:50.679278 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3725127424.mount: Deactivated successfully.
Oct 9 07:15:51.444316 containerd[1457]: time="2024-10-09T07:15:51.444137594Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:15:51.446519 containerd[1457]: time="2024-10-09T07:15:51.446306163Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.0: active requests=0, bytes read=30208889"
Oct 9 07:15:51.447886 containerd[1457]: time="2024-10-09T07:15:51.447798504Z" level=info msg="ImageCreate event name:\"sha256:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:15:51.457730 containerd[1457]: time="2024-10-09T07:15:51.457525914Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:15:51.460810 containerd[1457]: time="2024-10-09T07:15:51.459416954Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.0\" with image id \"sha256:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494\", repo tag \"registry.k8s.io/kube-proxy:v1.31.0\", repo digest \"registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe\", size \"30207900\" in 2.217170901s"
Oct 9 07:15:51.460810 containerd[1457]: time="2024-10-09T07:15:51.459498554Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.0\" returns image reference \"sha256:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494\""
Oct 9 07:15:51.460810 containerd[1457]: time="2024-10-09T07:15:51.460247051Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Oct 9 07:15:52.102205 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount774472984.mount: Deactivated successfully.
Oct 9 07:15:52.584719 update_engine[1441]: I1009 07:15:52.584590 1441 update_attempter.cc:509] Updating boot flags...
Oct 9 07:15:52.753273 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1954)
Oct 9 07:15:54.323604 containerd[1457]: time="2024-10-09T07:15:54.323420090Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:15:54.326401 containerd[1457]: time="2024-10-09T07:15:54.326228097Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769"
Oct 9 07:15:54.328356 containerd[1457]: time="2024-10-09T07:15:54.328283061Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:15:54.338296 containerd[1457]: time="2024-10-09T07:15:54.338132731Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:15:54.342841 containerd[1457]: time="2024-10-09T07:15:54.341228288Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.880918024s"
Oct 9 07:15:54.342841 containerd[1457]: time="2024-10-09T07:15:54.341320129Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Oct 9 07:15:54.342841 containerd[1457]: time="2024-10-09T07:15:54.342392147Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Oct 9 07:15:54.357392 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Oct 9 07:15:54.370514 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 07:15:54.516692 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 07:15:54.520836 (kubelet)[1991]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 9 07:15:54.571549 kubelet[1991]: E1009 07:15:54.571444 1991 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 9 07:15:54.573865 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 9 07:15:54.574237 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 9 07:15:56.440655 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3285183209.mount: Deactivated successfully.
Oct 9 07:15:56.450193 containerd[1457]: time="2024-10-09T07:15:56.449865468Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:15:56.452199 containerd[1457]: time="2024-10-09T07:15:56.452090535Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Oct 9 07:15:56.453874 containerd[1457]: time="2024-10-09T07:15:56.453766560Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:15:56.459494 containerd[1457]: time="2024-10-09T07:15:56.459391006Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:15:56.461870 containerd[1457]: time="2024-10-09T07:15:56.461622349Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 2.119168005s" Oct 9 07:15:56.461870 containerd[1457]: time="2024-10-09T07:15:56.461706756Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Oct 9 07:15:56.463535 containerd[1457]: time="2024-10-09T07:15:56.462940924Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Oct 9 07:15:57.532600 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount870617736.mount: Deactivated successfully. 
Oct 9 07:16:00.757596 containerd[1457]: time="2024-10-09T07:16:00.757297189Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:16:00.759790 containerd[1457]: time="2024-10-09T07:16:00.759746520Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56241748" Oct 9 07:16:00.763062 containerd[1457]: time="2024-10-09T07:16:00.761546793Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:16:00.765235 containerd[1457]: time="2024-10-09T07:16:00.765193540Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:16:00.766964 containerd[1457]: time="2024-10-09T07:16:00.766840302Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 4.303843889s" Oct 9 07:16:00.767144 containerd[1457]: time="2024-10-09T07:16:00.767102283Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Oct 9 07:16:04.607334 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Oct 9 07:16:04.619981 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 07:16:04.986238 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 9 07:16:04.992162 (kubelet)[2084]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 9 07:16:05.462908 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 07:16:05.467742 systemd[1]: kubelet.service: Deactivated successfully. Oct 9 07:16:05.468050 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 07:16:05.476351 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 07:16:05.518660 systemd[1]: Reloading requested from client PID 2098 ('systemctl') (unit session-11.scope)... Oct 9 07:16:05.518701 systemd[1]: Reloading... Oct 9 07:16:05.639061 zram_generator::config[2135]: No configuration found. Oct 9 07:16:05.847889 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 9 07:16:05.934621 systemd[1]: Reloading finished in 415 ms. Oct 9 07:16:05.983054 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 9 07:16:05.983141 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 9 07:16:05.983529 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 07:16:05.987393 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 07:16:07.211332 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 07:16:07.225802 (kubelet)[2199]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 9 07:16:07.313719 kubelet[2199]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 9 07:16:07.313719 kubelet[2199]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 9 07:16:07.313719 kubelet[2199]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 9 07:16:07.314392 kubelet[2199]: I1009 07:16:07.313766 2199 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 9 07:16:08.148076 kubelet[2199]: I1009 07:16:08.147915 2199 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Oct 9 07:16:08.149070 kubelet[2199]: I1009 07:16:08.148341 2199 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 9 07:16:08.149070 kubelet[2199]: I1009 07:16:08.148949 2199 server.go:929] "Client rotation is on, will bootstrap in background" Oct 9 07:16:08.180665 kubelet[2199]: I1009 07:16:08.180636 2199 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 9 07:16:08.183389 kubelet[2199]: E1009 07:16:08.183123 2199 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.24.4.220:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.24.4.220:6443: connect: connection refused" logger="UnhandledError" Oct 9 07:16:08.191454 kubelet[2199]: E1009 07:16:08.191391 2199 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Oct 9 07:16:08.191454 kubelet[2199]: I1009 07:16:08.191430 2199 server.go:1403] 
"CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Oct 9 07:16:08.195633 kubelet[2199]: I1009 07:16:08.195583 2199 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 9 07:16:08.195821 kubelet[2199]: I1009 07:16:08.195674 2199 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Oct 9 07:16:08.195821 kubelet[2199]: I1009 07:16:08.195786 2199 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 9 07:16:08.196005 kubelet[2199]: I1009 07:16:08.195809 2199 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3975-2-2-4-dcc5873578.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":
{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 9 07:16:08.196005 kubelet[2199]: I1009 07:16:08.195998 2199 topology_manager.go:138] "Creating topology manager with none policy" Oct 9 07:16:08.196005 kubelet[2199]: I1009 07:16:08.196009 2199 container_manager_linux.go:300] "Creating device plugin manager" Oct 9 07:16:08.196375 kubelet[2199]: I1009 07:16:08.196140 2199 state_mem.go:36] "Initialized new in-memory state store" Oct 9 07:16:08.199248 kubelet[2199]: I1009 07:16:08.199058 2199 kubelet.go:408] "Attempting to sync node with API server" Oct 9 07:16:08.199248 kubelet[2199]: I1009 07:16:08.199082 2199 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 9 07:16:08.199248 kubelet[2199]: I1009 07:16:08.199114 2199 kubelet.go:314] "Adding apiserver pod source" Oct 9 07:16:08.199248 kubelet[2199]: I1009 07:16:08.199128 2199 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 9 07:16:08.208588 kubelet[2199]: W1009 07:16:08.208473 2199 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.220:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975-2-2-4-dcc5873578.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.220:6443: connect: connection refused Oct 9 07:16:08.208663 kubelet[2199]: E1009 07:16:08.208581 2199 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.24.4.220:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975-2-2-4-dcc5873578.novalocal&limit=500&resourceVersion=0\": dial tcp 172.24.4.220:6443: connect: 
connection refused" logger="UnhandledError" Oct 9 07:16:08.209384 kubelet[2199]: W1009 07:16:08.209308 2199 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.220:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.24.4.220:6443: connect: connection refused Oct 9 07:16:08.209449 kubelet[2199]: E1009 07:16:08.209403 2199 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.24.4.220:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.220:6443: connect: connection refused" logger="UnhandledError" Oct 9 07:16:08.209585 kubelet[2199]: I1009 07:16:08.209546 2199 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Oct 9 07:16:08.213687 kubelet[2199]: I1009 07:16:08.213270 2199 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 9 07:16:08.214790 kubelet[2199]: W1009 07:16:08.214761 2199 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Oct 9 07:16:08.218980 kubelet[2199]: I1009 07:16:08.217642 2199 server.go:1269] "Started kubelet" Oct 9 07:16:08.219787 kubelet[2199]: I1009 07:16:08.219772 2199 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 9 07:16:08.233306 kubelet[2199]: I1009 07:16:08.233235 2199 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Oct 9 07:16:08.234360 kubelet[2199]: I1009 07:16:08.234283 2199 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 9 07:16:08.237919 kubelet[2199]: I1009 07:16:08.236445 2199 volume_manager.go:289] "Starting Kubelet Volume Manager" Oct 9 07:16:08.237919 kubelet[2199]: E1009 07:16:08.236711 2199 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3975-2-2-4-dcc5873578.novalocal\" not found" Oct 9 07:16:08.237919 kubelet[2199]: I1009 07:16:08.237392 2199 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 9 07:16:08.237919 kubelet[2199]: I1009 07:16:08.237471 2199 reconciler.go:26] "Reconciler: start to sync state" Oct 9 07:16:08.237919 kubelet[2199]: I1009 07:16:08.237491 2199 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 9 07:16:08.238285 kubelet[2199]: I1009 07:16:08.238087 2199 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 9 07:16:08.239083 kubelet[2199]: W1009 07:16:08.238965 2199 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.220:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.220:6443: connect: connection refused Oct 9 07:16:08.239201 kubelet[2199]: E1009 07:16:08.239055 2199 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list 
*v1.CSIDriver: Get \"https://172.24.4.220:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.220:6443: connect: connection refused" logger="UnhandledError" Oct 9 07:16:08.239505 kubelet[2199]: E1009 07:16:08.239429 2199 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.220:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975-2-2-4-dcc5873578.novalocal?timeout=10s\": dial tcp 172.24.4.220:6443: connect: connection refused" interval="200ms" Oct 9 07:16:08.240059 kubelet[2199]: I1009 07:16:08.239993 2199 server.go:460] "Adding debug handlers to kubelet server" Oct 9 07:16:08.245138 kubelet[2199]: E1009 07:16:08.239827 2199 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.220:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.220:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3975-2-2-4-dcc5873578.novalocal.17fcb78bd5bbc824 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3975-2-2-4-dcc5873578.novalocal,UID:ci-3975-2-2-4-dcc5873578.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3975-2-2-4-dcc5873578.novalocal,},FirstTimestamp:2024-10-09 07:16:08.217618468 +0000 UTC m=+0.985066451,LastTimestamp:2024-10-09 07:16:08.217618468 +0000 UTC m=+0.985066451,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3975-2-2-4-dcc5873578.novalocal,}" Oct 9 07:16:08.245658 kubelet[2199]: I1009 07:16:08.245606 2199 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 9 07:16:08.248210 kubelet[2199]: I1009 07:16:08.248141 2199 
factory.go:221] Registration of the containerd container factory successfully Oct 9 07:16:08.248210 kubelet[2199]: I1009 07:16:08.248195 2199 factory.go:221] Registration of the systemd container factory successfully Oct 9 07:16:08.269114 kubelet[2199]: I1009 07:16:08.268961 2199 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 9 07:16:08.271985 kubelet[2199]: I1009 07:16:08.271364 2199 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 9 07:16:08.271985 kubelet[2199]: I1009 07:16:08.271431 2199 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 9 07:16:08.271985 kubelet[2199]: I1009 07:16:08.271500 2199 kubelet.go:2321] "Starting kubelet main sync loop" Oct 9 07:16:08.271985 kubelet[2199]: E1009 07:16:08.271585 2199 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 9 07:16:08.280832 kubelet[2199]: W1009 07:16:08.280792 2199 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.220:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.220:6443: connect: connection refused Oct 9 07:16:08.281403 kubelet[2199]: E1009 07:16:08.281380 2199 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.24.4.220:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.24.4.220:6443: connect: connection refused" logger="UnhandledError" Oct 9 07:16:08.290643 kubelet[2199]: I1009 07:16:08.290628 2199 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 9 07:16:08.290734 kubelet[2199]: I1009 07:16:08.290725 2199 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 9 07:16:08.290793 kubelet[2199]: I1009 07:16:08.290785 2199 state_mem.go:36] 
"Initialized new in-memory state store" Oct 9 07:16:08.295416 kubelet[2199]: I1009 07:16:08.295383 2199 policy_none.go:49] "None policy: Start" Oct 9 07:16:08.297122 kubelet[2199]: I1009 07:16:08.297008 2199 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 9 07:16:08.297122 kubelet[2199]: I1009 07:16:08.297088 2199 state_mem.go:35] "Initializing new in-memory state store" Oct 9 07:16:08.313676 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 9 07:16:08.331927 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 9 07:16:08.337262 kubelet[2199]: E1009 07:16:08.337241 2199 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3975-2-2-4-dcc5873578.novalocal\" not found" Oct 9 07:16:08.343253 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Oct 9 07:16:08.353163 kubelet[2199]: I1009 07:16:08.353060 2199 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 9 07:16:08.353457 kubelet[2199]: I1009 07:16:08.353419 2199 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 9 07:16:08.354136 kubelet[2199]: I1009 07:16:08.353459 2199 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 9 07:16:08.354928 kubelet[2199]: I1009 07:16:08.354450 2199 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 9 07:16:08.357052 kubelet[2199]: E1009 07:16:08.356997 2199 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3975-2-2-4-dcc5873578.novalocal\" not found" Oct 9 07:16:08.395254 systemd[1]: Created slice kubepods-burstable-pod30088c6878d652951f5d02df6c5cad3c.slice - libcontainer container kubepods-burstable-pod30088c6878d652951f5d02df6c5cad3c.slice. 
Oct 9 07:16:08.432190 systemd[1]: Created slice kubepods-burstable-pod1ef848a743e194a1f788dcc140639667.slice - libcontainer container kubepods-burstable-pod1ef848a743e194a1f788dcc140639667.slice. Oct 9 07:16:08.440669 kubelet[2199]: E1009 07:16:08.440548 2199 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.220:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975-2-2-4-dcc5873578.novalocal?timeout=10s\": dial tcp 172.24.4.220:6443: connect: connection refused" interval="400ms" Oct 9 07:16:08.450523 systemd[1]: Created slice kubepods-burstable-poddb2ebbf125ad6041537fe9d8c363477c.slice - libcontainer container kubepods-burstable-poddb2ebbf125ad6041537fe9d8c363477c.slice. Oct 9 07:16:08.458364 kubelet[2199]: I1009 07:16:08.458275 2199 kubelet_node_status.go:72] "Attempting to register node" node="ci-3975-2-2-4-dcc5873578.novalocal" Oct 9 07:16:08.459470 kubelet[2199]: E1009 07:16:08.459402 2199 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.220:6443/api/v1/nodes\": dial tcp 172.24.4.220:6443: connect: connection refused" node="ci-3975-2-2-4-dcc5873578.novalocal" Oct 9 07:16:08.541825 kubelet[2199]: I1009 07:16:08.541734 2199 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/30088c6878d652951f5d02df6c5cad3c-ca-certs\") pod \"kube-apiserver-ci-3975-2-2-4-dcc5873578.novalocal\" (UID: \"30088c6878d652951f5d02df6c5cad3c\") " pod="kube-system/kube-apiserver-ci-3975-2-2-4-dcc5873578.novalocal" Oct 9 07:16:08.542315 kubelet[2199]: I1009 07:16:08.541860 2199 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/30088c6878d652951f5d02df6c5cad3c-k8s-certs\") pod \"kube-apiserver-ci-3975-2-2-4-dcc5873578.novalocal\" (UID: \"30088c6878d652951f5d02df6c5cad3c\") " 
pod="kube-system/kube-apiserver-ci-3975-2-2-4-dcc5873578.novalocal" Oct 9 07:16:08.542315 kubelet[2199]: I1009 07:16:08.541915 2199 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db2ebbf125ad6041537fe9d8c363477c-ca-certs\") pod \"kube-controller-manager-ci-3975-2-2-4-dcc5873578.novalocal\" (UID: \"db2ebbf125ad6041537fe9d8c363477c\") " pod="kube-system/kube-controller-manager-ci-3975-2-2-4-dcc5873578.novalocal" Oct 9 07:16:08.542315 kubelet[2199]: I1009 07:16:08.541961 2199 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db2ebbf125ad6041537fe9d8c363477c-kubeconfig\") pod \"kube-controller-manager-ci-3975-2-2-4-dcc5873578.novalocal\" (UID: \"db2ebbf125ad6041537fe9d8c363477c\") " pod="kube-system/kube-controller-manager-ci-3975-2-2-4-dcc5873578.novalocal" Oct 9 07:16:08.542315 kubelet[2199]: I1009 07:16:08.542010 2199 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/db2ebbf125ad6041537fe9d8c363477c-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3975-2-2-4-dcc5873578.novalocal\" (UID: \"db2ebbf125ad6041537fe9d8c363477c\") " pod="kube-system/kube-controller-manager-ci-3975-2-2-4-dcc5873578.novalocal" Oct 9 07:16:08.542622 kubelet[2199]: I1009 07:16:08.542097 2199 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1ef848a743e194a1f788dcc140639667-kubeconfig\") pod \"kube-scheduler-ci-3975-2-2-4-dcc5873578.novalocal\" (UID: \"1ef848a743e194a1f788dcc140639667\") " pod="kube-system/kube-scheduler-ci-3975-2-2-4-dcc5873578.novalocal" Oct 9 07:16:08.542622 kubelet[2199]: I1009 07:16:08.542141 2199 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/30088c6878d652951f5d02df6c5cad3c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3975-2-2-4-dcc5873578.novalocal\" (UID: \"30088c6878d652951f5d02df6c5cad3c\") " pod="kube-system/kube-apiserver-ci-3975-2-2-4-dcc5873578.novalocal" Oct 9 07:16:08.542622 kubelet[2199]: I1009 07:16:08.542192 2199 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/db2ebbf125ad6041537fe9d8c363477c-flexvolume-dir\") pod \"kube-controller-manager-ci-3975-2-2-4-dcc5873578.novalocal\" (UID: \"db2ebbf125ad6041537fe9d8c363477c\") " pod="kube-system/kube-controller-manager-ci-3975-2-2-4-dcc5873578.novalocal" Oct 9 07:16:08.542622 kubelet[2199]: I1009 07:16:08.542281 2199 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db2ebbf125ad6041537fe9d8c363477c-k8s-certs\") pod \"kube-controller-manager-ci-3975-2-2-4-dcc5873578.novalocal\" (UID: \"db2ebbf125ad6041537fe9d8c363477c\") " pod="kube-system/kube-controller-manager-ci-3975-2-2-4-dcc5873578.novalocal" Oct 9 07:16:08.664205 kubelet[2199]: I1009 07:16:08.663782 2199 kubelet_node_status.go:72] "Attempting to register node" node="ci-3975-2-2-4-dcc5873578.novalocal" Oct 9 07:16:08.664638 kubelet[2199]: E1009 07:16:08.664522 2199 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.220:6443/api/v1/nodes\": dial tcp 172.24.4.220:6443: connect: connection refused" node="ci-3975-2-2-4-dcc5873578.novalocal" Oct 9 07:16:08.724727 containerd[1457]: time="2024-10-09T07:16:08.724336839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3975-2-2-4-dcc5873578.novalocal,Uid:30088c6878d652951f5d02df6c5cad3c,Namespace:kube-system,Attempt:0,}" Oct 9 
07:16:08.752136 containerd[1457]: time="2024-10-09T07:16:08.751546626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3975-2-2-4-dcc5873578.novalocal,Uid:1ef848a743e194a1f788dcc140639667,Namespace:kube-system,Attempt:0,}" Oct 9 07:16:08.758013 containerd[1457]: time="2024-10-09T07:16:08.757914462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3975-2-2-4-dcc5873578.novalocal,Uid:db2ebbf125ad6041537fe9d8c363477c,Namespace:kube-system,Attempt:0,}" Oct 9 07:16:08.842295 kubelet[2199]: E1009 07:16:08.842126 2199 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.220:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975-2-2-4-dcc5873578.novalocal?timeout=10s\": dial tcp 172.24.4.220:6443: connect: connection refused" interval="800ms" Oct 9 07:16:09.070792 kubelet[2199]: I1009 07:16:09.070570 2199 kubelet_node_status.go:72] "Attempting to register node" node="ci-3975-2-2-4-dcc5873578.novalocal" Oct 9 07:16:09.073053 kubelet[2199]: E1009 07:16:09.072216 2199 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.220:6443/api/v1/nodes\": dial tcp 172.24.4.220:6443: connect: connection refused" node="ci-3975-2-2-4-dcc5873578.novalocal" Oct 9 07:16:09.128892 kubelet[2199]: W1009 07:16:09.128745 2199 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.220:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.220:6443: connect: connection refused Oct 9 07:16:09.129161 kubelet[2199]: E1009 07:16:09.128899 2199 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.24.4.220:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.220:6443: connect: connection refused" 
logger="UnhandledError"
Oct 9 07:16:09.509153 kubelet[2199]: W1009 07:16:09.508935 2199 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.220:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.220:6443: connect: connection refused
Oct 9 07:16:09.512491 kubelet[2199]: E1009 07:16:09.509163 2199 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.24.4.220:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.24.4.220:6443: connect: connection refused" logger="UnhandledError"
Oct 9 07:16:09.572893 kubelet[2199]: W1009 07:16:09.572783 2199 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.220:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975-2-2-4-dcc5873578.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.220:6443: connect: connection refused
Oct 9 07:16:09.573147 kubelet[2199]: E1009 07:16:09.572906 2199 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.24.4.220:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975-2-2-4-dcc5873578.novalocal&limit=500&resourceVersion=0\": dial tcp 172.24.4.220:6443: connect: connection refused" logger="UnhandledError"
Oct 9 07:16:09.625384 kubelet[2199]: W1009 07:16:09.625257 2199 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.220:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.24.4.220:6443: connect: connection refused
Oct 9 07:16:09.625590 kubelet[2199]: E1009 07:16:09.625381 2199 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.24.4.220:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.220:6443: connect: connection refused" logger="UnhandledError"
Oct 9 07:16:09.643244 kubelet[2199]: E1009 07:16:09.643163 2199 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.220:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975-2-2-4-dcc5873578.novalocal?timeout=10s\": dial tcp 172.24.4.220:6443: connect: connection refused" interval="1.6s"
Oct 9 07:16:09.875333 kubelet[2199]: I1009 07:16:09.875243 2199 kubelet_node_status.go:72] "Attempting to register node" node="ci-3975-2-2-4-dcc5873578.novalocal"
Oct 9 07:16:09.876007 kubelet[2199]: E1009 07:16:09.875942 2199 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.220:6443/api/v1/nodes\": dial tcp 172.24.4.220:6443: connect: connection refused" node="ci-3975-2-2-4-dcc5873578.novalocal"
Oct 9 07:16:10.143448 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3322942579.mount: Deactivated successfully.
Oct 9 07:16:10.160727 containerd[1457]: time="2024-10-09T07:16:10.160628463Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 9 07:16:10.164251 containerd[1457]: time="2024-10-09T07:16:10.163954242Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Oct 9 07:16:10.166109 containerd[1457]: time="2024-10-09T07:16:10.165805435Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 9 07:16:10.168332 containerd[1457]: time="2024-10-09T07:16:10.168247774Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 9 07:16:10.174390 containerd[1457]: time="2024-10-09T07:16:10.174121887Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064"
Oct 9 07:16:10.178102 containerd[1457]: time="2024-10-09T07:16:10.178050029Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Oct 9 07:16:10.179087 containerd[1457]: time="2024-10-09T07:16:10.178423317Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 9 07:16:10.191832 containerd[1457]: time="2024-10-09T07:16:10.191768449Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 9 07:16:10.195745 containerd[1457]: time="2024-10-09T07:16:10.195684934Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.443962898s"
Oct 9 07:16:10.201398 containerd[1457]: time="2024-10-09T07:16:10.201301125Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.476743457s"
Oct 9 07:16:10.203277 containerd[1457]: time="2024-10-09T07:16:10.203217757Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.445080291s"
Oct 9 07:16:10.272990 kubelet[2199]: E1009 07:16:10.272911 2199 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.24.4.220:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.24.4.220:6443: connect: connection refused" logger="UnhandledError"
Oct 9 07:16:10.495310 containerd[1457]: time="2024-10-09T07:16:10.491110784Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 9 07:16:10.495310 containerd[1457]: time="2024-10-09T07:16:10.491248471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 07:16:10.495310 containerd[1457]: time="2024-10-09T07:16:10.491325558Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 9 07:16:10.495310 containerd[1457]: time="2024-10-09T07:16:10.491371775Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 07:16:10.501620 containerd[1457]: time="2024-10-09T07:16:10.501212557Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 9 07:16:10.501620 containerd[1457]: time="2024-10-09T07:16:10.501324856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 07:16:10.501620 containerd[1457]: time="2024-10-09T07:16:10.501367544Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 9 07:16:10.501620 containerd[1457]: time="2024-10-09T07:16:10.501400610Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 07:16:10.503464 containerd[1457]: time="2024-10-09T07:16:10.503293708Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 9 07:16:10.503464 containerd[1457]: time="2024-10-09T07:16:10.503367247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 07:16:10.503464 containerd[1457]: time="2024-10-09T07:16:10.503396033Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 9 07:16:10.503464 containerd[1457]: time="2024-10-09T07:16:10.503416410Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 07:16:10.528245 systemd[1]: Started cri-containerd-08b33595668e0210b0aa8b3dd769d7538e9c33ad58a729c715936c53bd52ca68.scope - libcontainer container 08b33595668e0210b0aa8b3dd769d7538e9c33ad58a729c715936c53bd52ca68.
Oct 9 07:16:10.542184 systemd[1]: Started cri-containerd-5fee9828f2d96b398fc2f32880cfb9d0e8cb52b0ea6b871f27d9c84c752c7587.scope - libcontainer container 5fee9828f2d96b398fc2f32880cfb9d0e8cb52b0ea6b871f27d9c84c752c7587.
Oct 9 07:16:10.556229 systemd[1]: Started cri-containerd-e7231e6b40445b3fb4ea2df300628399206a7325e711cc827a1ae839daf24831.scope - libcontainer container e7231e6b40445b3fb4ea2df300628399206a7325e711cc827a1ae839daf24831.
Oct 9 07:16:10.618621 containerd[1457]: time="2024-10-09T07:16:10.618431647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3975-2-2-4-dcc5873578.novalocal,Uid:db2ebbf125ad6041537fe9d8c363477c,Namespace:kube-system,Attempt:0,} returns sandbox id \"08b33595668e0210b0aa8b3dd769d7538e9c33ad58a729c715936c53bd52ca68\""
Oct 9 07:16:10.624965 containerd[1457]: time="2024-10-09T07:16:10.624752926Z" level=info msg="CreateContainer within sandbox \"08b33595668e0210b0aa8b3dd769d7538e9c33ad58a729c715936c53bd52ca68\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Oct 9 07:16:10.636682 containerd[1457]: time="2024-10-09T07:16:10.636329426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3975-2-2-4-dcc5873578.novalocal,Uid:30088c6878d652951f5d02df6c5cad3c,Namespace:kube-system,Attempt:0,} returns sandbox id \"e7231e6b40445b3fb4ea2df300628399206a7325e711cc827a1ae839daf24831\""
Oct 9 07:16:10.641620 containerd[1457]: time="2024-10-09T07:16:10.641271817Z" level=info msg="CreateContainer within sandbox \"e7231e6b40445b3fb4ea2df300628399206a7325e711cc827a1ae839daf24831\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Oct 9 07:16:10.644007 containerd[1457]: time="2024-10-09T07:16:10.643976049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3975-2-2-4-dcc5873578.novalocal,Uid:1ef848a743e194a1f788dcc140639667,Namespace:kube-system,Attempt:0,} returns sandbox id \"5fee9828f2d96b398fc2f32880cfb9d0e8cb52b0ea6b871f27d9c84c752c7587\""
Oct 9 07:16:10.647404 containerd[1457]: time="2024-10-09T07:16:10.647373603Z" level=info msg="CreateContainer within sandbox \"5fee9828f2d96b398fc2f32880cfb9d0e8cb52b0ea6b871f27d9c84c752c7587\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Oct 9 07:16:10.685938 containerd[1457]: time="2024-10-09T07:16:10.685769143Z" level=info msg="CreateContainer within sandbox \"08b33595668e0210b0aa8b3dd769d7538e9c33ad58a729c715936c53bd52ca68\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"82132b8d5ff8d95ec0218830237cf8a306cc41c5c6287809c20c73780510a82b\""
Oct 9 07:16:10.687211 containerd[1457]: time="2024-10-09T07:16:10.687000832Z" level=info msg="StartContainer for \"82132b8d5ff8d95ec0218830237cf8a306cc41c5c6287809c20c73780510a82b\""
Oct 9 07:16:10.707456 containerd[1457]: time="2024-10-09T07:16:10.707250022Z" level=info msg="CreateContainer within sandbox \"5fee9828f2d96b398fc2f32880cfb9d0e8cb52b0ea6b871f27d9c84c752c7587\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"fc6ae8219959c07cd5d7deae11eb21f75245c82387040e965a13a29d00decef5\""
Oct 9 07:16:10.709103 containerd[1457]: time="2024-10-09T07:16:10.708539303Z" level=info msg="StartContainer for \"fc6ae8219959c07cd5d7deae11eb21f75245c82387040e965a13a29d00decef5\""
Oct 9 07:16:10.710696 containerd[1457]: time="2024-10-09T07:16:10.710629154Z" level=info msg="CreateContainer within sandbox \"e7231e6b40445b3fb4ea2df300628399206a7325e711cc827a1ae839daf24831\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"57a85872a0c7fee08ecec4d5894f93f896ce39781b6d84c81e9da95a5b7e8e6d\""
Oct 9 07:16:10.711719 containerd[1457]: time="2024-10-09T07:16:10.711675407Z" level=info msg="StartContainer for \"57a85872a0c7fee08ecec4d5894f93f896ce39781b6d84c81e9da95a5b7e8e6d\""
Oct 9 07:16:10.731364 systemd[1]: Started cri-containerd-82132b8d5ff8d95ec0218830237cf8a306cc41c5c6287809c20c73780510a82b.scope - libcontainer container 82132b8d5ff8d95ec0218830237cf8a306cc41c5c6287809c20c73780510a82b.
Oct 9 07:16:10.762515 systemd[1]: Started cri-containerd-fc6ae8219959c07cd5d7deae11eb21f75245c82387040e965a13a29d00decef5.scope - libcontainer container fc6ae8219959c07cd5d7deae11eb21f75245c82387040e965a13a29d00decef5.
Oct 9 07:16:10.776217 systemd[1]: Started cri-containerd-57a85872a0c7fee08ecec4d5894f93f896ce39781b6d84c81e9da95a5b7e8e6d.scope - libcontainer container 57a85872a0c7fee08ecec4d5894f93f896ce39781b6d84c81e9da95a5b7e8e6d.
Oct 9 07:16:10.835987 containerd[1457]: time="2024-10-09T07:16:10.835310844Z" level=info msg="StartContainer for \"82132b8d5ff8d95ec0218830237cf8a306cc41c5c6287809c20c73780510a82b\" returns successfully"
Oct 9 07:16:10.860064 containerd[1457]: time="2024-10-09T07:16:10.859804864Z" level=info msg="StartContainer for \"fc6ae8219959c07cd5d7deae11eb21f75245c82387040e965a13a29d00decef5\" returns successfully"
Oct 9 07:16:10.860500 containerd[1457]: time="2024-10-09T07:16:10.860003180Z" level=info msg="StartContainer for \"57a85872a0c7fee08ecec4d5894f93f896ce39781b6d84c81e9da95a5b7e8e6d\" returns successfully"
Oct 9 07:16:11.479032 kubelet[2199]: I1009 07:16:11.478618 2199 kubelet_node_status.go:72] "Attempting to register node" node="ci-3975-2-2-4-dcc5873578.novalocal"
Oct 9 07:16:13.049499 kubelet[2199]: E1009 07:16:13.049429 2199 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3975-2-2-4-dcc5873578.novalocal\" not found" node="ci-3975-2-2-4-dcc5873578.novalocal"
Oct 9 07:16:13.213484 kubelet[2199]: I1009 07:16:13.213171 2199 apiserver.go:52] "Watching apiserver"
Oct 9 07:16:13.225709 kubelet[2199]: I1009 07:16:13.225534 2199 kubelet_node_status.go:75] "Successfully registered node" node="ci-3975-2-2-4-dcc5873578.novalocal"
Oct 9 07:16:13.225709 kubelet[2199]: E1009 07:16:13.225571 2199 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-3975-2-2-4-dcc5873578.novalocal\": node \"ci-3975-2-2-4-dcc5873578.novalocal\" not found"
Oct 9 07:16:13.238508 kubelet[2199]: I1009 07:16:13.238434 2199 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Oct 9 07:16:13.836009 kubelet[2199]: E1009 07:16:13.835911 2199 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3975-2-2-4-dcc5873578.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-3975-2-2-4-dcc5873578.novalocal"
Oct 9 07:16:14.497391 kubelet[2199]: W1009 07:16:14.497302 2199 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Oct 9 07:16:16.166007 systemd[1]: Reloading requested from client PID 2466 ('systemctl') (unit session-11.scope)...
Oct 9 07:16:16.166101 systemd[1]: Reloading...
Oct 9 07:16:16.303070 zram_generator::config[2506]: No configuration found.
Oct 9 07:16:16.454710 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 9 07:16:16.564224 systemd[1]: Reloading finished in 397 ms.
Oct 9 07:16:16.610965 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 07:16:16.620179 systemd[1]: kubelet.service: Deactivated successfully.
Oct 9 07:16:16.620475 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 07:16:16.620536 systemd[1]: kubelet.service: Consumed 1.682s CPU time, 114.3M memory peak, 0B memory swap peak.
Oct 9 07:16:16.624344 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 07:16:17.011277 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 07:16:17.028650 (kubelet)[2567]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Oct 9 07:16:17.308907 kubelet[2567]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 9 07:16:17.310791 kubelet[2567]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Oct 9 07:16:17.310791 kubelet[2567]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 9 07:16:17.310791 kubelet[2567]: I1009 07:16:17.309286 2567 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Oct 9 07:16:17.326318 kubelet[2567]: I1009 07:16:17.326282 2567 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Oct 9 07:16:17.326557 kubelet[2567]: I1009 07:16:17.326547 2567 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Oct 9 07:16:17.327456 kubelet[2567]: I1009 07:16:17.327343 2567 server.go:929] "Client rotation is on, will bootstrap in background"
Oct 9 07:16:17.329945 kubelet[2567]: I1009 07:16:17.329929 2567 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Oct 9 07:16:17.336721 kubelet[2567]: I1009 07:16:17.336651 2567 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 9 07:16:17.345714 kubelet[2567]: E1009 07:16:17.345659 2567 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Oct 9 07:16:17.346011 kubelet[2567]: I1009 07:16:17.345888 2567 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Oct 9 07:16:17.351507 kubelet[2567]: I1009 07:16:17.351451 2567 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Oct 9 07:16:17.352128 kubelet[2567]: I1009 07:16:17.351752 2567 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Oct 9 07:16:17.352128 kubelet[2567]: I1009 07:16:17.351891 2567 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 9 07:16:17.352363 kubelet[2567]: I1009 07:16:17.351929 2567 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3975-2-2-4-dcc5873578.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Oct 9 07:16:17.352582 kubelet[2567]: I1009 07:16:17.352540 2567 topology_manager.go:138] "Creating topology manager with none policy"
Oct 9 07:16:17.352674 kubelet[2567]: I1009 07:16:17.352664 2567 container_manager_linux.go:300] "Creating device plugin manager"
Oct 9 07:16:17.354357 kubelet[2567]: I1009 07:16:17.353919 2567 state_mem.go:36] "Initialized new in-memory state store"
Oct 9 07:16:17.355107 kubelet[2567]: I1009 07:16:17.354716 2567 kubelet.go:408] "Attempting to sync node with API server"
Oct 9 07:16:17.355107 kubelet[2567]: I1009 07:16:17.354739 2567 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 9 07:16:17.355107 kubelet[2567]: I1009 07:16:17.354803 2567 kubelet.go:314] "Adding apiserver pod source"
Oct 9 07:16:17.355107 kubelet[2567]: I1009 07:16:17.354827 2567 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 9 07:16:17.358992 kubelet[2567]: I1009 07:16:17.357684 2567 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1"
Oct 9 07:16:17.368172 kubelet[2567]: I1009 07:16:17.368072 2567 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Oct 9 07:16:17.371194 kubelet[2567]: I1009 07:16:17.370467 2567 server.go:1269] "Started kubelet"
Oct 9 07:16:17.386467 kubelet[2567]: I1009 07:16:17.386434 2567 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 9 07:16:17.401007 kubelet[2567]: I1009 07:16:17.400847 2567 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Oct 9 07:16:17.402488 kubelet[2567]: I1009 07:16:17.402230 2567 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 9 07:16:17.414097 kubelet[2567]: I1009 07:16:17.413653 2567 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 9 07:16:17.414097 kubelet[2567]: I1009 07:16:17.404150 2567 volume_manager.go:289] "Starting Kubelet Volume Manager"
Oct 9 07:16:17.414097 kubelet[2567]: I1009 07:16:17.403584 2567 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Oct 9 07:16:17.414097 kubelet[2567]: I1009 07:16:17.404163 2567 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Oct 9 07:16:17.414097 kubelet[2567]: I1009 07:16:17.413972 2567 reconciler.go:26] "Reconciler: start to sync state"
Oct 9 07:16:17.414097 kubelet[2567]: E1009 07:16:17.405764 2567 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3975-2-2-4-dcc5873578.novalocal\" not found"
Oct 9 07:16:17.419645 kubelet[2567]: I1009 07:16:17.409626 2567 server.go:460] "Adding debug handlers to kubelet server"
Oct 9 07:16:17.424552 kubelet[2567]: I1009 07:16:17.424511 2567 factory.go:221] Registration of the systemd container factory successfully
Oct 9 07:16:17.424704 kubelet[2567]: I1009 07:16:17.424643 2567 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Oct 9 07:16:17.437212 kubelet[2567]: I1009 07:16:17.437039 2567 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Oct 9 07:16:17.440313 kubelet[2567]: I1009 07:16:17.439845 2567 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Oct 9 07:16:17.440313 kubelet[2567]: I1009 07:16:17.439879 2567 status_manager.go:217] "Starting to sync pod status with apiserver"
Oct 9 07:16:17.440313 kubelet[2567]: I1009 07:16:17.439904 2567 kubelet.go:2321] "Starting kubelet main sync loop"
Oct 9 07:16:17.440313 kubelet[2567]: E1009 07:16:17.439962 2567 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Oct 9 07:16:17.442324 kubelet[2567]: I1009 07:16:17.442177 2567 factory.go:221] Registration of the containerd container factory successfully
Oct 9 07:16:17.540413 kubelet[2567]: E1009 07:16:17.540355 2567 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Oct 9 07:16:17.546556 kubelet[2567]: I1009 07:16:17.546529 2567 cpu_manager.go:214] "Starting CPU manager" policy="none"
Oct 9 07:16:17.546556 kubelet[2567]: I1009 07:16:17.546552 2567 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Oct 9 07:16:17.546670 kubelet[2567]: I1009 07:16:17.546572 2567 state_mem.go:36] "Initialized new in-memory state store"
Oct 9 07:16:17.546749 kubelet[2567]: I1009 07:16:17.546728 2567 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Oct 9 07:16:17.546799 kubelet[2567]: I1009 07:16:17.546748 2567 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Oct 9 07:16:17.546799 kubelet[2567]: I1009 07:16:17.546778 2567 policy_none.go:49] "None policy: Start"
Oct 9 07:16:17.547535 kubelet[2567]: I1009 07:16:17.547494 2567 memory_manager.go:170] "Starting memorymanager" policy="None"
Oct 9 07:16:17.547685 kubelet[2567]: I1009 07:16:17.547614 2567 state_mem.go:35] "Initializing new in-memory state store"
Oct 9 07:16:17.548001 kubelet[2567]: I1009 07:16:17.547973 2567 state_mem.go:75] "Updated machine memory state"
Oct 9 07:16:17.554795 kubelet[2567]: I1009 07:16:17.554763 2567 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Oct 9 07:16:17.554952 kubelet[2567]: I1009 07:16:17.554933 2567 eviction_manager.go:189] "Eviction manager: starting control loop"
Oct 9 07:16:17.555058 kubelet[2567]: I1009 07:16:17.554952 2567 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Oct 9 07:16:17.555506 kubelet[2567]: I1009 07:16:17.555487 2567 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Oct 9 07:16:17.674453 kubelet[2567]: I1009 07:16:17.674100 2567 kubelet_node_status.go:72] "Attempting to register node" node="ci-3975-2-2-4-dcc5873578.novalocal"
Oct 9 07:16:17.696858 kubelet[2567]: I1009 07:16:17.696171 2567 kubelet_node_status.go:111] "Node was previously registered" node="ci-3975-2-2-4-dcc5873578.novalocal"
Oct 9 07:16:17.696858 kubelet[2567]: I1009 07:16:17.696343 2567 kubelet_node_status.go:75] "Successfully registered node" node="ci-3975-2-2-4-dcc5873578.novalocal"
Oct 9 07:16:17.759097 kubelet[2567]: W1009 07:16:17.757974 2567 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Oct 9 07:16:17.759097 kubelet[2567]: E1009 07:16:17.758074 2567 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3975-2-2-4-dcc5873578.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-3975-2-2-4-dcc5873578.novalocal"
Oct 9 07:16:17.759097 kubelet[2567]: W1009 07:16:17.758794 2567 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Oct 9 07:16:17.759507 kubelet[2567]: W1009 07:16:17.759231 2567 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Oct 9 07:16:17.818812 kubelet[2567]: I1009 07:16:17.818755 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/db2ebbf125ad6041537fe9d8c363477c-flexvolume-dir\") pod \"kube-controller-manager-ci-3975-2-2-4-dcc5873578.novalocal\" (UID: \"db2ebbf125ad6041537fe9d8c363477c\") " pod="kube-system/kube-controller-manager-ci-3975-2-2-4-dcc5873578.novalocal"
Oct 9 07:16:17.818812 kubelet[2567]: I1009 07:16:17.818804 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1ef848a743e194a1f788dcc140639667-kubeconfig\") pod \"kube-scheduler-ci-3975-2-2-4-dcc5873578.novalocal\" (UID: \"1ef848a743e194a1f788dcc140639667\") " pod="kube-system/kube-scheduler-ci-3975-2-2-4-dcc5873578.novalocal"
Oct 9 07:16:17.818812 kubelet[2567]: I1009 07:16:17.818826 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/30088c6878d652951f5d02df6c5cad3c-ca-certs\") pod \"kube-apiserver-ci-3975-2-2-4-dcc5873578.novalocal\" (UID: \"30088c6878d652951f5d02df6c5cad3c\") " pod="kube-system/kube-apiserver-ci-3975-2-2-4-dcc5873578.novalocal"
Oct 9 07:16:17.818812 kubelet[2567]: I1009 07:16:17.818846 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/30088c6878d652951f5d02df6c5cad3c-k8s-certs\") pod \"kube-apiserver-ci-3975-2-2-4-dcc5873578.novalocal\" (UID: \"30088c6878d652951f5d02df6c5cad3c\") " pod="kube-system/kube-apiserver-ci-3975-2-2-4-dcc5873578.novalocal"
Oct 9 07:16:17.819331 kubelet[2567]: I1009 07:16:17.818867 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/30088c6878d652951f5d02df6c5cad3c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3975-2-2-4-dcc5873578.novalocal\" (UID: \"30088c6878d652951f5d02df6c5cad3c\") " pod="kube-system/kube-apiserver-ci-3975-2-2-4-dcc5873578.novalocal"
Oct 9 07:16:17.819331 kubelet[2567]: I1009 07:16:17.818887 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db2ebbf125ad6041537fe9d8c363477c-ca-certs\") pod \"kube-controller-manager-ci-3975-2-2-4-dcc5873578.novalocal\" (UID: \"db2ebbf125ad6041537fe9d8c363477c\") " pod="kube-system/kube-controller-manager-ci-3975-2-2-4-dcc5873578.novalocal"
Oct 9 07:16:17.819331 kubelet[2567]: I1009 07:16:17.818906 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db2ebbf125ad6041537fe9d8c363477c-kubeconfig\") pod \"kube-controller-manager-ci-3975-2-2-4-dcc5873578.novalocal\" (UID: \"db2ebbf125ad6041537fe9d8c363477c\") " pod="kube-system/kube-controller-manager-ci-3975-2-2-4-dcc5873578.novalocal"
Oct 9 07:16:17.819331 kubelet[2567]: I1009 07:16:17.818927 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/db2ebbf125ad6041537fe9d8c363477c-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3975-2-2-4-dcc5873578.novalocal\" (UID: \"db2ebbf125ad6041537fe9d8c363477c\") " pod="kube-system/kube-controller-manager-ci-3975-2-2-4-dcc5873578.novalocal"
Oct 9 07:16:17.819638 kubelet[2567]: I1009 07:16:17.818948 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db2ebbf125ad6041537fe9d8c363477c-k8s-certs\") pod \"kube-controller-manager-ci-3975-2-2-4-dcc5873578.novalocal\" (UID: \"db2ebbf125ad6041537fe9d8c363477c\") " pod="kube-system/kube-controller-manager-ci-3975-2-2-4-dcc5873578.novalocal"
Oct 9 07:16:18.358053 kubelet[2567]: I1009 07:16:18.357555 2567 apiserver.go:52] "Watching apiserver"
Oct 9 07:16:18.414296 kubelet[2567]: I1009 07:16:18.414179 2567 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Oct 9 07:16:18.543789 kubelet[2567]: W1009 07:16:18.543729 2567 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Oct 9 07:16:18.544056 kubelet[2567]: E1009 07:16:18.543818 2567 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3975-2-2-4-dcc5873578.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-3975-2-2-4-dcc5873578.novalocal"
Oct 9 07:16:18.569894 kubelet[2567]: I1009 07:16:18.569824 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3975-2-2-4-dcc5873578.novalocal" podStartSLOduration=1.569807427 podStartE2EDuration="1.569807427s" podCreationTimestamp="2024-10-09 07:16:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 07:16:18.569579281 +0000 UTC m=+1.366144731" watchObservedRunningTime="2024-10-09 07:16:18.569807427 +0000 UTC m=+1.366372877"
Oct 9 07:16:18.586200 kubelet[2567]: I1009 07:16:18.586125 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3975-2-2-4-dcc5873578.novalocal" podStartSLOduration=1.586105211 podStartE2EDuration="1.586105211s" podCreationTimestamp="2024-10-09 07:16:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 07:16:18.585274074 +0000 UTC m=+1.381839524" watchObservedRunningTime="2024-10-09 07:16:18.586105211 +0000 UTC m=+1.382670651"
Oct 9 07:16:18.611295 kubelet[2567]: I1009 07:16:18.611120 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3975-2-2-4-dcc5873578.novalocal" podStartSLOduration=4.61109771 podStartE2EDuration="4.61109771s" podCreationTimestamp="2024-10-09 07:16:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 07:16:18.600042807 +0000 UTC m=+1.396608247" watchObservedRunningTime="2024-10-09 07:16:18.61109771 +0000 UTC m=+1.407663150"
Oct 9 07:16:22.153727 kubelet[2567]: I1009 07:16:22.153680 2567 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Oct 9 07:16:22.154861 containerd[1457]: time="2024-10-09T07:16:22.154695596Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Oct 9 07:16:22.156122 kubelet[2567]: I1009 07:16:22.155515 2567 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Oct 9 07:16:22.670223 systemd[1]: Created slice kubepods-besteffort-pod1985f8c5_7207_4d18_af91_2bd5246bf22d.slice - libcontainer container kubepods-besteffort-pod1985f8c5_7207_4d18_af91_2bd5246bf22d.slice.
Oct 9 07:16:22.750379 kubelet[2567]: I1009 07:16:22.750202 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1985f8c5-7207-4d18-af91-2bd5246bf22d-kube-proxy\") pod \"kube-proxy-rpqr6\" (UID: \"1985f8c5-7207-4d18-af91-2bd5246bf22d\") " pod="kube-system/kube-proxy-rpqr6"
Oct 9 07:16:22.750379 kubelet[2567]: I1009 07:16:22.750251 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1985f8c5-7207-4d18-af91-2bd5246bf22d-xtables-lock\") pod \"kube-proxy-rpqr6\" (UID: \"1985f8c5-7207-4d18-af91-2bd5246bf22d\") " pod="kube-system/kube-proxy-rpqr6"
Oct 9 07:16:22.750379 kubelet[2567]: I1009 07:16:22.750280 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfxsx\" (UniqueName: \"kubernetes.io/projected/1985f8c5-7207-4d18-af91-2bd5246bf22d-kube-api-access-zfxsx\") pod \"kube-proxy-rpqr6\" (UID: \"1985f8c5-7207-4d18-af91-2bd5246bf22d\") " pod="kube-system/kube-proxy-rpqr6"
Oct 9 07:16:22.750379 kubelet[2567]: I1009 07:16:22.750309 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1985f8c5-7207-4d18-af91-2bd5246bf22d-lib-modules\") pod \"kube-proxy-rpqr6\" (UID: \"1985f8c5-7207-4d18-af91-2bd5246bf22d\") " pod="kube-system/kube-proxy-rpqr6"
Oct 9 07:16:22.893534 kubelet[2567]: E1009 07:16:22.893356 2567 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Oct 9 07:16:22.894935 kubelet[2567]: E1009 07:16:22.893807 2567 projected.go:194] Error preparing data for projected volume kube-api-access-zfxsx for pod kube-system/kube-proxy-rpqr6: configmap "kube-root-ca.crt" not found
Oct 9 07:16:22.903060 kubelet[2567]: E1009 07:16:22.902983 2567
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1985f8c5-7207-4d18-af91-2bd5246bf22d-kube-api-access-zfxsx podName:1985f8c5-7207-4d18-af91-2bd5246bf22d nodeName:}" failed. No retries permitted until 2024-10-09 07:16:23.402078717 +0000 UTC m=+6.198644157 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-zfxsx" (UniqueName: "kubernetes.io/projected/1985f8c5-7207-4d18-af91-2bd5246bf22d-kube-api-access-zfxsx") pod "kube-proxy-rpqr6" (UID: "1985f8c5-7207-4d18-af91-2bd5246bf22d") : configmap "kube-root-ca.crt" not found Oct 9 07:16:23.572469 systemd[1]: Created slice kubepods-besteffort-pod6a484b35_2607_40ff_ad19_2660c30f6884.slice - libcontainer container kubepods-besteffort-pod6a484b35_2607_40ff_ad19_2660c30f6884.slice. Oct 9 07:16:23.582048 containerd[1457]: time="2024-10-09T07:16:23.580842962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rpqr6,Uid:1985f8c5-7207-4d18-af91-2bd5246bf22d,Namespace:kube-system,Attempt:0,}" Oct 9 07:16:23.633346 containerd[1457]: time="2024-10-09T07:16:23.632596323Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:16:23.633346 containerd[1457]: time="2024-10-09T07:16:23.632675486Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:16:23.633346 containerd[1457]: time="2024-10-09T07:16:23.632702221Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:16:23.633346 containerd[1457]: time="2024-10-09T07:16:23.632722292Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:16:23.656366 kubelet[2567]: I1009 07:16:23.656317 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6a484b35-2607-40ff-ad19-2660c30f6884-var-lib-calico\") pod \"tigera-operator-55748b469f-p5pxc\" (UID: \"6a484b35-2607-40ff-ad19-2660c30f6884\") " pod="tigera-operator/tigera-operator-55748b469f-p5pxc" Oct 9 07:16:23.663946 kubelet[2567]: I1009 07:16:23.660104 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzjmb\" (UniqueName: \"kubernetes.io/projected/6a484b35-2607-40ff-ad19-2660c30f6884-kube-api-access-fzjmb\") pod \"tigera-operator-55748b469f-p5pxc\" (UID: \"6a484b35-2607-40ff-ad19-2660c30f6884\") " pod="tigera-operator/tigera-operator-55748b469f-p5pxc" Oct 9 07:16:23.677450 systemd[1]: Started cri-containerd-418d4913585b657b1ea75d48a25b87e55a4a0b671b64249d08888e126dc4173a.scope - libcontainer container 418d4913585b657b1ea75d48a25b87e55a4a0b671b64249d08888e126dc4173a. 
Oct 9 07:16:23.711729 containerd[1457]: time="2024-10-09T07:16:23.711660689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rpqr6,Uid:1985f8c5-7207-4d18-af91-2bd5246bf22d,Namespace:kube-system,Attempt:0,} returns sandbox id \"418d4913585b657b1ea75d48a25b87e55a4a0b671b64249d08888e126dc4173a\"" Oct 9 07:16:23.718192 containerd[1457]: time="2024-10-09T07:16:23.717509173Z" level=info msg="CreateContainer within sandbox \"418d4913585b657b1ea75d48a25b87e55a4a0b671b64249d08888e126dc4173a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 9 07:16:23.748302 containerd[1457]: time="2024-10-09T07:16:23.748235206Z" level=info msg="CreateContainer within sandbox \"418d4913585b657b1ea75d48a25b87e55a4a0b671b64249d08888e126dc4173a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1d76c5d8f8cea4131bd94c6cdf2f2a162f555e7103f0e89d3e2870fe2ba54523\"" Oct 9 07:16:23.749508 containerd[1457]: time="2024-10-09T07:16:23.749437410Z" level=info msg="StartContainer for \"1d76c5d8f8cea4131bd94c6cdf2f2a162f555e7103f0e89d3e2870fe2ba54523\"" Oct 9 07:16:23.806205 systemd[1]: Started cri-containerd-1d76c5d8f8cea4131bd94c6cdf2f2a162f555e7103f0e89d3e2870fe2ba54523.scope - libcontainer container 1d76c5d8f8cea4131bd94c6cdf2f2a162f555e7103f0e89d3e2870fe2ba54523. 
Oct 9 07:16:23.813536 sudo[1704]: pam_unix(sudo:session): session closed for user root Oct 9 07:16:23.844074 containerd[1457]: time="2024-10-09T07:16:23.843924962Z" level=info msg="StartContainer for \"1d76c5d8f8cea4131bd94c6cdf2f2a162f555e7103f0e89d3e2870fe2ba54523\" returns successfully" Oct 9 07:16:23.879737 containerd[1457]: time="2024-10-09T07:16:23.879679392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-55748b469f-p5pxc,Uid:6a484b35-2607-40ff-ad19-2660c30f6884,Namespace:tigera-operator,Attempt:0,}" Oct 9 07:16:23.973272 containerd[1457]: time="2024-10-09T07:16:23.972757363Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:16:23.973272 containerd[1457]: time="2024-10-09T07:16:23.972970752Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:16:23.973272 containerd[1457]: time="2024-10-09T07:16:23.973086810Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:16:23.973687 containerd[1457]: time="2024-10-09T07:16:23.973136162Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:16:24.013231 systemd[1]: Started cri-containerd-c06e1528b3aeea45dd47a6e8b4354b2cbd1f99235c43d20740f0007b855f3842.scope - libcontainer container c06e1528b3aeea45dd47a6e8b4354b2cbd1f99235c43d20740f0007b855f3842. Oct 9 07:16:24.037861 sshd[1701]: pam_unix(sshd:session): session closed for user core Oct 9 07:16:24.044851 systemd[1]: sshd@8-172.24.4.220:22-172.24.4.1:46420.service: Deactivated successfully. Oct 9 07:16:24.051184 systemd[1]: session-11.scope: Deactivated successfully. Oct 9 07:16:24.053428 systemd[1]: session-11.scope: Consumed 7.627s CPU time, 100.2M memory peak, 0B memory swap peak. 
Oct 9 07:16:24.055238 systemd-logind[1438]: Session 11 logged out. Waiting for processes to exit. Oct 9 07:16:24.059613 systemd-logind[1438]: Removed session 11. Oct 9 07:16:24.081302 containerd[1457]: time="2024-10-09T07:16:24.081224055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-55748b469f-p5pxc,Uid:6a484b35-2607-40ff-ad19-2660c30f6884,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"c06e1528b3aeea45dd47a6e8b4354b2cbd1f99235c43d20740f0007b855f3842\"" Oct 9 07:16:24.085919 containerd[1457]: time="2024-10-09T07:16:24.084835025Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\"" Oct 9 07:16:24.563237 kubelet[2567]: I1009 07:16:24.563157 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rpqr6" podStartSLOduration=2.56311947 podStartE2EDuration="2.56311947s" podCreationTimestamp="2024-10-09 07:16:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 07:16:24.562920112 +0000 UTC m=+7.359485572" watchObservedRunningTime="2024-10-09 07:16:24.56311947 +0000 UTC m=+7.359684910" Oct 9 07:16:25.645756 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3844923120.mount: Deactivated successfully. 
Oct 9 07:16:26.839556 containerd[1457]: time="2024-10-09T07:16:26.839456108Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:16:26.841515 containerd[1457]: time="2024-10-09T07:16:26.841190371Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.3: active requests=0, bytes read=22136533" Oct 9 07:16:26.844049 containerd[1457]: time="2024-10-09T07:16:26.842674657Z" level=info msg="ImageCreate event name:\"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:16:26.846328 containerd[1457]: time="2024-10-09T07:16:26.846290211Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:16:26.847500 containerd[1457]: time="2024-10-09T07:16:26.847459449Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.3\" with image id \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\", repo tag \"quay.io/tigera/operator:v1.34.3\", repo digest \"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\", size \"22130728\" in 2.762582049s" Oct 9 07:16:26.847561 containerd[1457]: time="2024-10-09T07:16:26.847501144Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\" returns image reference \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\"" Oct 9 07:16:26.896803 containerd[1457]: time="2024-10-09T07:16:26.896756244Z" level=info msg="CreateContainer within sandbox \"c06e1528b3aeea45dd47a6e8b4354b2cbd1f99235c43d20740f0007b855f3842\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 9 07:16:26.913212 containerd[1457]: time="2024-10-09T07:16:26.913168358Z" level=info msg="CreateContainer within sandbox 
\"c06e1528b3aeea45dd47a6e8b4354b2cbd1f99235c43d20740f0007b855f3842\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"3b50f426db60a8160a7d6817432c33e3510ef3f31bfd904a150223284b6e3ad2\"" Oct 9 07:16:26.914720 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount458173925.mount: Deactivated successfully. Oct 9 07:16:26.917281 containerd[1457]: time="2024-10-09T07:16:26.917221287Z" level=info msg="StartContainer for \"3b50f426db60a8160a7d6817432c33e3510ef3f31bfd904a150223284b6e3ad2\"" Oct 9 07:16:26.958879 systemd[1]: run-containerd-runc-k8s.io-3b50f426db60a8160a7d6817432c33e3510ef3f31bfd904a150223284b6e3ad2-runc.lz3uwu.mount: Deactivated successfully. Oct 9 07:16:26.970188 systemd[1]: Started cri-containerd-3b50f426db60a8160a7d6817432c33e3510ef3f31bfd904a150223284b6e3ad2.scope - libcontainer container 3b50f426db60a8160a7d6817432c33e3510ef3f31bfd904a150223284b6e3ad2. Oct 9 07:16:27.041058 containerd[1457]: time="2024-10-09T07:16:27.040290607Z" level=info msg="StartContainer for \"3b50f426db60a8160a7d6817432c33e3510ef3f31bfd904a150223284b6e3ad2\" returns successfully" Oct 9 07:16:28.041791 kubelet[2567]: I1009 07:16:28.040446 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-55748b469f-p5pxc" podStartSLOduration=2.258542399 podStartE2EDuration="5.040145356s" podCreationTimestamp="2024-10-09 07:16:23 +0000 UTC" firstStartedPulling="2024-10-09 07:16:24.082935528 +0000 UTC m=+6.879500968" lastFinishedPulling="2024-10-09 07:16:26.864538484 +0000 UTC m=+9.661103925" observedRunningTime="2024-10-09 07:16:27.618335996 +0000 UTC m=+10.414901486" watchObservedRunningTime="2024-10-09 07:16:28.040145356 +0000 UTC m=+10.836710846" Oct 9 07:16:30.637069 systemd[1]: Created slice kubepods-besteffort-pod4623bb4d_94f1_4296_ac99_44a829f7103a.slice - libcontainer container kubepods-besteffort-pod4623bb4d_94f1_4296_ac99_44a829f7103a.slice. 
Oct 9 07:16:30.709906 kubelet[2567]: I1009 07:16:30.709828 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4623bb4d-94f1-4296-ac99-44a829f7103a-tigera-ca-bundle\") pod \"calico-typha-7dccbfb97c-zdtxq\" (UID: \"4623bb4d-94f1-4296-ac99-44a829f7103a\") " pod="calico-system/calico-typha-7dccbfb97c-zdtxq" Oct 9 07:16:30.711780 kubelet[2567]: I1009 07:16:30.711589 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n727x\" (UniqueName: \"kubernetes.io/projected/4623bb4d-94f1-4296-ac99-44a829f7103a-kube-api-access-n727x\") pod \"calico-typha-7dccbfb97c-zdtxq\" (UID: \"4623bb4d-94f1-4296-ac99-44a829f7103a\") " pod="calico-system/calico-typha-7dccbfb97c-zdtxq" Oct 9 07:16:30.711780 kubelet[2567]: I1009 07:16:30.711698 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/4623bb4d-94f1-4296-ac99-44a829f7103a-typha-certs\") pod \"calico-typha-7dccbfb97c-zdtxq\" (UID: \"4623bb4d-94f1-4296-ac99-44a829f7103a\") " pod="calico-system/calico-typha-7dccbfb97c-zdtxq" Oct 9 07:16:30.852819 systemd[1]: Created slice kubepods-besteffort-pod93614f38_75c7_4c69_a101_478ab75a3c90.slice - libcontainer container kubepods-besteffort-pod93614f38_75c7_4c69_a101_478ab75a3c90.slice. 
Oct 9 07:16:30.914333 kubelet[2567]: I1009 07:16:30.913448 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/93614f38-75c7-4c69-a101-478ab75a3c90-lib-modules\") pod \"calico-node-ptndd\" (UID: \"93614f38-75c7-4c69-a101-478ab75a3c90\") " pod="calico-system/calico-node-ptndd" Oct 9 07:16:30.914333 kubelet[2567]: I1009 07:16:30.913526 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5b7d\" (UniqueName: \"kubernetes.io/projected/93614f38-75c7-4c69-a101-478ab75a3c90-kube-api-access-l5b7d\") pod \"calico-node-ptndd\" (UID: \"93614f38-75c7-4c69-a101-478ab75a3c90\") " pod="calico-system/calico-node-ptndd" Oct 9 07:16:30.914333 kubelet[2567]: I1009 07:16:30.913567 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/93614f38-75c7-4c69-a101-478ab75a3c90-cni-log-dir\") pod \"calico-node-ptndd\" (UID: \"93614f38-75c7-4c69-a101-478ab75a3c90\") " pod="calico-system/calico-node-ptndd" Oct 9 07:16:30.914333 kubelet[2567]: I1009 07:16:30.913592 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/93614f38-75c7-4c69-a101-478ab75a3c90-policysync\") pod \"calico-node-ptndd\" (UID: \"93614f38-75c7-4c69-a101-478ab75a3c90\") " pod="calico-system/calico-node-ptndd" Oct 9 07:16:30.914333 kubelet[2567]: I1009 07:16:30.913620 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/93614f38-75c7-4c69-a101-478ab75a3c90-node-certs\") pod \"calico-node-ptndd\" (UID: \"93614f38-75c7-4c69-a101-478ab75a3c90\") " pod="calico-system/calico-node-ptndd" Oct 9 07:16:30.915101 kubelet[2567]: I1009 07:16:30.913642 2567 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/93614f38-75c7-4c69-a101-478ab75a3c90-var-run-calico\") pod \"calico-node-ptndd\" (UID: \"93614f38-75c7-4c69-a101-478ab75a3c90\") " pod="calico-system/calico-node-ptndd" Oct 9 07:16:30.915101 kubelet[2567]: I1009 07:16:30.913727 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/93614f38-75c7-4c69-a101-478ab75a3c90-tigera-ca-bundle\") pod \"calico-node-ptndd\" (UID: \"93614f38-75c7-4c69-a101-478ab75a3c90\") " pod="calico-system/calico-node-ptndd" Oct 9 07:16:30.915101 kubelet[2567]: I1009 07:16:30.913770 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/93614f38-75c7-4c69-a101-478ab75a3c90-var-lib-calico\") pod \"calico-node-ptndd\" (UID: \"93614f38-75c7-4c69-a101-478ab75a3c90\") " pod="calico-system/calico-node-ptndd" Oct 9 07:16:30.915101 kubelet[2567]: I1009 07:16:30.913795 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/93614f38-75c7-4c69-a101-478ab75a3c90-flexvol-driver-host\") pod \"calico-node-ptndd\" (UID: \"93614f38-75c7-4c69-a101-478ab75a3c90\") " pod="calico-system/calico-node-ptndd" Oct 9 07:16:30.915101 kubelet[2567]: I1009 07:16:30.913816 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/93614f38-75c7-4c69-a101-478ab75a3c90-cni-net-dir\") pod \"calico-node-ptndd\" (UID: \"93614f38-75c7-4c69-a101-478ab75a3c90\") " pod="calico-system/calico-node-ptndd" Oct 9 07:16:30.915272 kubelet[2567]: I1009 07:16:30.913841 2567 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/93614f38-75c7-4c69-a101-478ab75a3c90-xtables-lock\") pod \"calico-node-ptndd\" (UID: \"93614f38-75c7-4c69-a101-478ab75a3c90\") " pod="calico-system/calico-node-ptndd" Oct 9 07:16:30.915272 kubelet[2567]: I1009 07:16:30.913864 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/93614f38-75c7-4c69-a101-478ab75a3c90-cni-bin-dir\") pod \"calico-node-ptndd\" (UID: \"93614f38-75c7-4c69-a101-478ab75a3c90\") " pod="calico-system/calico-node-ptndd" Oct 9 07:16:30.929193 kubelet[2567]: E1009 07:16:30.929081 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cpr56" podUID="5460cbf3-3220-44d8-92e5-2d3cb02a666f" Oct 9 07:16:30.945886 containerd[1457]: time="2024-10-09T07:16:30.945100924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7dccbfb97c-zdtxq,Uid:4623bb4d-94f1-4296-ac99-44a829f7103a,Namespace:calico-system,Attempt:0,}" Oct 9 07:16:30.995753 containerd[1457]: time="2024-10-09T07:16:30.995633277Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:16:30.995753 containerd[1457]: time="2024-10-09T07:16:30.995712925Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:16:30.996394 containerd[1457]: time="2024-10-09T07:16:30.996127040Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:16:30.997674 containerd[1457]: time="2024-10-09T07:16:30.996168813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:16:31.015280 kubelet[2567]: I1009 07:16:31.014487 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/5460cbf3-3220-44d8-92e5-2d3cb02a666f-varrun\") pod \"csi-node-driver-cpr56\" (UID: \"5460cbf3-3220-44d8-92e5-2d3cb02a666f\") " pod="calico-system/csi-node-driver-cpr56" Oct 9 07:16:31.018805 kubelet[2567]: I1009 07:16:31.017136 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5460cbf3-3220-44d8-92e5-2d3cb02a666f-kubelet-dir\") pod \"csi-node-driver-cpr56\" (UID: \"5460cbf3-3220-44d8-92e5-2d3cb02a666f\") " pod="calico-system/csi-node-driver-cpr56" Oct 9 07:16:31.018805 kubelet[2567]: I1009 07:16:31.017219 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5460cbf3-3220-44d8-92e5-2d3cb02a666f-socket-dir\") pod \"csi-node-driver-cpr56\" (UID: \"5460cbf3-3220-44d8-92e5-2d3cb02a666f\") " pod="calico-system/csi-node-driver-cpr56" Oct 9 07:16:31.018805 kubelet[2567]: I1009 07:16:31.017344 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvjqf\" (UniqueName: \"kubernetes.io/projected/5460cbf3-3220-44d8-92e5-2d3cb02a666f-kube-api-access-rvjqf\") pod \"csi-node-driver-cpr56\" (UID: \"5460cbf3-3220-44d8-92e5-2d3cb02a666f\") " pod="calico-system/csi-node-driver-cpr56" Oct 9 07:16:31.018805 kubelet[2567]: I1009 07:16:31.017410 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5460cbf3-3220-44d8-92e5-2d3cb02a666f-registration-dir\") pod \"csi-node-driver-cpr56\" (UID: \"5460cbf3-3220-44d8-92e5-2d3cb02a666f\") " pod="calico-system/csi-node-driver-cpr56" Oct 9 07:16:31.041202 systemd[1]: Started cri-containerd-ed817552983211c1d251fb7ed4339a823af9ab10c858e3b640ab1148a3f050cc.scope - libcontainer container ed817552983211c1d251fb7ed4339a823af9ab10c858e3b640ab1148a3f050cc. Oct 9 07:16:31.044351 kubelet[2567]: E1009 07:16:31.043760 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:16:31.044351 kubelet[2567]: W1009 07:16:31.043803 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:16:31.044351 kubelet[2567]: E1009 07:16:31.043825 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:16:31.054773 kubelet[2567]: E1009 07:16:31.054731 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:16:31.054773 kubelet[2567]: W1009 07:16:31.054753 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:16:31.055593 kubelet[2567]: E1009 07:16:31.054773 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:16:31.118942 kubelet[2567]: E1009 07:16:31.118905 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:16:31.118942 kubelet[2567]: W1009 07:16:31.118930 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:16:31.118942 kubelet[2567]: E1009 07:16:31.118951 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:16:31.119751 kubelet[2567]: E1009 07:16:31.119423 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:16:31.119751 kubelet[2567]: W1009 07:16:31.119439 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:16:31.119751 kubelet[2567]: E1009 07:16:31.119450 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:16:31.120391 kubelet[2567]: E1009 07:16:31.119985 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:16:31.120391 kubelet[2567]: W1009 07:16:31.120011 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:16:31.120391 kubelet[2567]: E1009 07:16:31.120084 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:16:31.120681 kubelet[2567]: E1009 07:16:31.120668 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:16:31.120751 kubelet[2567]: W1009 07:16:31.120740 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:16:31.120835 kubelet[2567]: E1009 07:16:31.120814 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:16:31.121210 kubelet[2567]: E1009 07:16:31.121167 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:16:31.121210 kubelet[2567]: W1009 07:16:31.121185 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:16:31.121435 kubelet[2567]: E1009 07:16:31.121326 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:16:31.121698 kubelet[2567]: E1009 07:16:31.121640 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:16:31.121698 kubelet[2567]: W1009 07:16:31.121658 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:16:31.122112 kubelet[2567]: E1009 07:16:31.121802 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:16:31.124581 kubelet[2567]: E1009 07:16:31.124127 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:16:31.124581 kubelet[2567]: W1009 07:16:31.124144 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:16:31.124581 kubelet[2567]: E1009 07:16:31.124188 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:16:31.125640 kubelet[2567]: E1009 07:16:31.124571 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:16:31.125640 kubelet[2567]: W1009 07:16:31.125635 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:16:31.125946 kubelet[2567]: E1009 07:16:31.125924 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:16:31.126301 kubelet[2567]: E1009 07:16:31.126281 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:16:31.126301 kubelet[2567]: W1009 07:16:31.126296 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:16:31.126446 kubelet[2567]: E1009 07:16:31.126415 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:16:31.126707 kubelet[2567]: E1009 07:16:31.126668 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:16:31.126707 kubelet[2567]: W1009 07:16:31.126686 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:16:31.126878 kubelet[2567]: E1009 07:16:31.126789 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:16:31.128457 kubelet[2567]: E1009 07:16:31.128414 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:16:31.128457 kubelet[2567]: W1009 07:16:31.128434 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:16:31.128733 kubelet[2567]: E1009 07:16:31.128625 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:16:31.130776 kubelet[2567]: E1009 07:16:31.129441 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:16:31.130776 kubelet[2567]: W1009 07:16:31.129458 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:16:31.132180 kubelet[2567]: E1009 07:16:31.132162 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:16:31.134604 kubelet[2567]: E1009 07:16:31.134566 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:16:31.134604 kubelet[2567]: W1009 07:16:31.134596 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:16:31.135136 kubelet[2567]: E1009 07:16:31.135100 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:16:31.135221 kubelet[2567]: E1009 07:16:31.135201 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:16:31.135221 kubelet[2567]: W1009 07:16:31.135219 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:16:31.135423 kubelet[2567]: E1009 07:16:31.135311 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:16:31.135610 kubelet[2567]: E1009 07:16:31.135577 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:16:31.135610 kubelet[2567]: W1009 07:16:31.135593 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:16:31.135745 kubelet[2567]: E1009 07:16:31.135722 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:16:31.135946 kubelet[2567]: E1009 07:16:31.135911 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:16:31.135946 kubelet[2567]: W1009 07:16:31.135941 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:16:31.136104 kubelet[2567]: E1009 07:16:31.136051 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:16:31.136433 kubelet[2567]: E1009 07:16:31.136415 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:16:31.136433 kubelet[2567]: W1009 07:16:31.136429 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:16:31.136854 kubelet[2567]: E1009 07:16:31.136518 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:16:31.137112 kubelet[2567]: E1009 07:16:31.137091 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:16:31.137112 kubelet[2567]: W1009 07:16:31.137106 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:16:31.137417 kubelet[2567]: E1009 07:16:31.137393 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:16:31.137919 kubelet[2567]: E1009 07:16:31.137886 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:16:31.137919 kubelet[2567]: W1009 07:16:31.137901 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:16:31.138907 kubelet[2567]: E1009 07:16:31.138884 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:16:31.139205 kubelet[2567]: E1009 07:16:31.139163 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:16:31.139205 kubelet[2567]: W1009 07:16:31.139181 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:16:31.139758 kubelet[2567]: E1009 07:16:31.139525 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:16:31.140317 kubelet[2567]: E1009 07:16:31.140297 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:16:31.140317 kubelet[2567]: W1009 07:16:31.140313 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:16:31.141166 kubelet[2567]: E1009 07:16:31.140398 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:16:31.141166 kubelet[2567]: E1009 07:16:31.140475 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:16:31.141166 kubelet[2567]: W1009 07:16:31.140484 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:16:31.141166 kubelet[2567]: E1009 07:16:31.140574 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:16:31.141166 kubelet[2567]: E1009 07:16:31.140694 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:16:31.141166 kubelet[2567]: W1009 07:16:31.140703 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:16:31.141166 kubelet[2567]: E1009 07:16:31.140783 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:16:31.141166 kubelet[2567]: E1009 07:16:31.140885 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:16:31.141166 kubelet[2567]: W1009 07:16:31.140893 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:16:31.141166 kubelet[2567]: E1009 07:16:31.140905 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:16:31.143233 kubelet[2567]: E1009 07:16:31.143105 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:16:31.143233 kubelet[2567]: W1009 07:16:31.143121 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:16:31.143233 kubelet[2567]: E1009 07:16:31.143130 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:16:31.149703 containerd[1457]: time="2024-10-09T07:16:31.149643723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7dccbfb97c-zdtxq,Uid:4623bb4d-94f1-4296-ac99-44a829f7103a,Namespace:calico-system,Attempt:0,} returns sandbox id \"ed817552983211c1d251fb7ed4339a823af9ab10c858e3b640ab1148a3f050cc\"" Oct 9 07:16:31.159202 containerd[1457]: time="2024-10-09T07:16:31.155981378Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\"" Oct 9 07:16:31.162514 containerd[1457]: time="2024-10-09T07:16:31.162107633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ptndd,Uid:93614f38-75c7-4c69-a101-478ab75a3c90,Namespace:calico-system,Attempt:0,}" Oct 9 07:16:31.162631 kubelet[2567]: E1009 07:16:31.162604 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:16:31.162631 kubelet[2567]: W1009 07:16:31.162620 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:16:31.162702 kubelet[2567]: E1009 07:16:31.162641 2567 plugins.go:691] "Error dynamically probing plugins" err="error 
creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:16:31.207073 containerd[1457]: time="2024-10-09T07:16:31.203726374Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:16:31.207073 containerd[1457]: time="2024-10-09T07:16:31.203804198Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:16:31.213950 containerd[1457]: time="2024-10-09T07:16:31.207393139Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:16:31.213950 containerd[1457]: time="2024-10-09T07:16:31.207460563Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:16:31.242289 systemd[1]: Started cri-containerd-0756c3a8ea503758084127275a1eaaa59160e1ed150b662ecf62052ce85f8a7b.scope - libcontainer container 0756c3a8ea503758084127275a1eaaa59160e1ed150b662ecf62052ce85f8a7b. 
Oct 9 07:16:31.302423 containerd[1457]: time="2024-10-09T07:16:31.302262258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ptndd,Uid:93614f38-75c7-4c69-a101-478ab75a3c90,Namespace:calico-system,Attempt:0,} returns sandbox id \"0756c3a8ea503758084127275a1eaaa59160e1ed150b662ecf62052ce85f8a7b\"" Oct 9 07:16:32.441150 kubelet[2567]: E1009 07:16:32.441052 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cpr56" podUID="5460cbf3-3220-44d8-92e5-2d3cb02a666f" Oct 9 07:16:34.440742 kubelet[2567]: E1009 07:16:34.440633 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cpr56" podUID="5460cbf3-3220-44d8-92e5-2d3cb02a666f" Oct 9 07:16:34.466575 containerd[1457]: time="2024-10-09T07:16:34.466510978Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:16:34.467885 containerd[1457]: time="2024-10-09T07:16:34.467754833Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.1: active requests=0, bytes read=29471335" Oct 9 07:16:34.469614 containerd[1457]: time="2024-10-09T07:16:34.469545512Z" level=info msg="ImageCreate event name:\"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:16:34.477011 containerd[1457]: time="2024-10-09T07:16:34.475691088Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:16:34.480601 containerd[1457]: time="2024-10-09T07:16:34.480548173Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.1\" with image id \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\", size \"30963728\" in 3.322575423s" Oct 9 07:16:34.481487 containerd[1457]: time="2024-10-09T07:16:34.481406119Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\" returns image reference \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\"" Oct 9 07:16:34.483217 containerd[1457]: time="2024-10-09T07:16:34.483188211Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\"" Oct 9 07:16:34.507311 containerd[1457]: time="2024-10-09T07:16:34.507129857Z" level=info msg="CreateContainer within sandbox \"ed817552983211c1d251fb7ed4339a823af9ab10c858e3b640ab1148a3f050cc\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 9 07:16:34.535781 containerd[1457]: time="2024-10-09T07:16:34.535732524Z" level=info msg="CreateContainer within sandbox \"ed817552983211c1d251fb7ed4339a823af9ab10c858e3b640ab1148a3f050cc\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"8826d46540d1ba7f180a42bb6e4cd2426db1d9845ab6230787af1b0f9ad97323\"" Oct 9 07:16:34.536996 containerd[1457]: time="2024-10-09T07:16:34.536556313Z" level=info msg="StartContainer for \"8826d46540d1ba7f180a42bb6e4cd2426db1d9845ab6230787af1b0f9ad97323\"" Oct 9 07:16:34.589670 systemd[1]: Started cri-containerd-8826d46540d1ba7f180a42bb6e4cd2426db1d9845ab6230787af1b0f9ad97323.scope - libcontainer container 8826d46540d1ba7f180a42bb6e4cd2426db1d9845ab6230787af1b0f9ad97323. 
Oct 9 07:16:34.665512 containerd[1457]: time="2024-10-09T07:16:34.665454145Z" level=info msg="StartContainer for \"8826d46540d1ba7f180a42bb6e4cd2426db1d9845ab6230787af1b0f9ad97323\" returns successfully" Oct 9 07:16:35.649757 kubelet[2567]: I1009 07:16:35.649669 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7dccbfb97c-zdtxq" podStartSLOduration=2.320369188 podStartE2EDuration="5.649652337s" podCreationTimestamp="2024-10-09 07:16:30 +0000 UTC" firstStartedPulling="2024-10-09 07:16:31.153510195 +0000 UTC m=+13.950075635" lastFinishedPulling="2024-10-09 07:16:34.482793334 +0000 UTC m=+17.279358784" observedRunningTime="2024-10-09 07:16:35.638670198 +0000 UTC m=+18.435235698" watchObservedRunningTime="2024-10-09 07:16:35.649652337 +0000 UTC m=+18.446217777" Oct 9 07:16:35.677860 kubelet[2567]: E1009 07:16:35.677808 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:16:35.677860 kubelet[2567]: W1009 07:16:35.677838 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:16:35.677860 kubelet[2567]: E1009 07:16:35.677864 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:16:35.678596 kubelet[2567]: E1009 07:16:35.678203 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:16:35.678596 kubelet[2567]: W1009 07:16:35.678221 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:16:35.678596 kubelet[2567]: E1009 07:16:35.678232 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:16:35.678952 kubelet[2567]: E1009 07:16:35.678930 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:16:35.678952 kubelet[2567]: W1009 07:16:35.678947 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:16:35.679052 kubelet[2567]: E1009 07:16:35.678959 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:16:35.680088 kubelet[2567]: E1009 07:16:35.680065 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:16:35.680088 kubelet[2567]: W1009 07:16:35.680082 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:16:35.680171 kubelet[2567]: E1009 07:16:35.680094 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:16:35.680286 kubelet[2567]: E1009 07:16:35.680256 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:16:35.680286 kubelet[2567]: W1009 07:16:35.680272 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:16:35.680286 kubelet[2567]: E1009 07:16:35.680284 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:16:35.682035 kubelet[2567]: E1009 07:16:35.680424 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:16:35.682035 kubelet[2567]: W1009 07:16:35.680439 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:16:35.682035 kubelet[2567]: E1009 07:16:35.680448 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:16:35.682035 kubelet[2567]: E1009 07:16:35.680598 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:16:35.682035 kubelet[2567]: W1009 07:16:35.680606 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:16:35.682035 kubelet[2567]: E1009 07:16:35.680614 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:16:35.682035 kubelet[2567]: E1009 07:16:35.680743 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:16:35.682035 kubelet[2567]: W1009 07:16:35.680752 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:16:35.682035 kubelet[2567]: E1009 07:16:35.680761 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:16:35.682290 kubelet[2567]: E1009 07:16:35.682143 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:16:35.682290 kubelet[2567]: W1009 07:16:35.682153 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:16:35.682290 kubelet[2567]: E1009 07:16:35.682165 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:16:35.682410 kubelet[2567]: E1009 07:16:35.682312 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:16:35.682410 kubelet[2567]: W1009 07:16:35.682322 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:16:35.682410 kubelet[2567]: E1009 07:16:35.682330 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:16:35.682486 kubelet[2567]: E1009 07:16:35.682460 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:16:35.682486 kubelet[2567]: W1009 07:16:35.682469 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:16:35.682486 kubelet[2567]: E1009 07:16:35.682477 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:16:35.682636 kubelet[2567]: E1009 07:16:35.682613 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:16:35.682636 kubelet[2567]: W1009 07:16:35.682630 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:16:35.682715 kubelet[2567]: E1009 07:16:35.682638 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:16:35.682802 kubelet[2567]: E1009 07:16:35.682781 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:16:35.682802 kubelet[2567]: W1009 07:16:35.682796 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:16:35.682880 kubelet[2567]: E1009 07:16:35.682805 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:16:35.682983 kubelet[2567]: E1009 07:16:35.682961 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:16:35.682983 kubelet[2567]: W1009 07:16:35.682977 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:16:35.683077 kubelet[2567]: E1009 07:16:35.682986 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:16:35.683517 kubelet[2567]: E1009 07:16:35.683486 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:16:35.683517 kubelet[2567]: W1009 07:16:35.683504 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:16:35.683517 kubelet[2567]: E1009 07:16:35.683516 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:16:35.763591 kubelet[2567]: E1009 07:16:35.763556 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:16:35.763591 kubelet[2567]: W1009 07:16:35.763581 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:16:35.763738 kubelet[2567]: E1009 07:16:35.763603 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:16:35.763771 kubelet[2567]: E1009 07:16:35.763761 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:16:35.763771 kubelet[2567]: W1009 07:16:35.763771 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:16:35.764267 kubelet[2567]: E1009 07:16:35.763780 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:16:35.764267 kubelet[2567]: E1009 07:16:35.763918 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:16:35.764267 kubelet[2567]: W1009 07:16:35.763927 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:16:35.764267 kubelet[2567]: E1009 07:16:35.763936 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:16:35.764267 kubelet[2567]: E1009 07:16:35.764120 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:16:35.764267 kubelet[2567]: W1009 07:16:35.764129 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:16:35.764267 kubelet[2567]: E1009 07:16:35.764140 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:16:35.765482 kubelet[2567]: E1009 07:16:35.764755 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:16:35.765482 kubelet[2567]: W1009 07:16:35.764766 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:16:35.765482 kubelet[2567]: E1009 07:16:35.764795 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:16:35.765482 kubelet[2567]: E1009 07:16:35.764980 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:16:35.765482 kubelet[2567]: W1009 07:16:35.764989 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:16:35.765482 kubelet[2567]: E1009 07:16:35.765005 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:16:35.765482 kubelet[2567]: E1009 07:16:35.765200 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:16:35.765482 kubelet[2567]: W1009 07:16:35.765211 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:16:35.765482 kubelet[2567]: E1009 07:16:35.765236 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:16:35.765482 kubelet[2567]: E1009 07:16:35.765407 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:16:35.765720 kubelet[2567]: W1009 07:16:35.765416 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:16:35.765720 kubelet[2567]: E1009 07:16:35.765506 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:16:35.765720 kubelet[2567]: E1009 07:16:35.765606 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:16:35.765720 kubelet[2567]: W1009 07:16:35.765615 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:16:35.765720 kubelet[2567]: E1009 07:16:35.765682 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:16:35.766413 kubelet[2567]: E1009 07:16:35.765842 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:16:35.766413 kubelet[2567]: W1009 07:16:35.765874 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:16:35.766413 kubelet[2567]: E1009 07:16:35.765898 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:16:35.766413 kubelet[2567]: E1009 07:16:35.766118 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:16:35.766413 kubelet[2567]: W1009 07:16:35.766127 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:16:35.766413 kubelet[2567]: E1009 07:16:35.766137 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:16:35.767324 kubelet[2567]: E1009 07:16:35.767294 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:16:35.767324 kubelet[2567]: W1009 07:16:35.767313 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:16:35.767409 kubelet[2567]: E1009 07:16:35.767326 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:16:35.767738 kubelet[2567]: E1009 07:16:35.767696 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:16:35.767738 kubelet[2567]: W1009 07:16:35.767710 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:16:35.767738 kubelet[2567]: E1009 07:16:35.767719 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:16:35.768082 kubelet[2567]: E1009 07:16:35.768062 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:16:35.768082 kubelet[2567]: W1009 07:16:35.768078 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:16:35.768160 kubelet[2567]: E1009 07:16:35.768088 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:16:35.768515 kubelet[2567]: E1009 07:16:35.768495 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:16:35.768515 kubelet[2567]: W1009 07:16:35.768512 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:16:35.768580 kubelet[2567]: E1009 07:16:35.768525 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:16:35.769462 kubelet[2567]: E1009 07:16:35.769297 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:16:35.769462 kubelet[2567]: W1009 07:16:35.769335 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:16:35.769462 kubelet[2567]: E1009 07:16:35.769355 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:16:35.769728 kubelet[2567]: E1009 07:16:35.769701 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:16:35.769728 kubelet[2567]: W1009 07:16:35.769718 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:16:35.769728 kubelet[2567]: E1009 07:16:35.769728 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:16:35.770362 kubelet[2567]: E1009 07:16:35.770334 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:16:35.770362 kubelet[2567]: W1009 07:16:35.770350 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:16:35.770443 kubelet[2567]: E1009 07:16:35.770360 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:16:36.371618 containerd[1457]: time="2024-10-09T07:16:36.371528155Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:16:36.375295 containerd[1457]: time="2024-10-09T07:16:36.375202025Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1: active requests=0, bytes read=5141007" Oct 9 07:16:36.377684 containerd[1457]: time="2024-10-09T07:16:36.377608768Z" level=info msg="ImageCreate event name:\"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:16:36.389080 containerd[1457]: time="2024-10-09T07:16:36.388751554Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:16:36.390363 containerd[1457]: time="2024-10-09T07:16:36.390130349Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" with image id \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\", size \"6633368\" in 1.906809709s" Oct 9 07:16:36.390363 containerd[1457]: time="2024-10-09T07:16:36.390222358Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" returns image reference \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\"" Oct 9 07:16:36.394433 containerd[1457]: time="2024-10-09T07:16:36.394359012Z" level=info msg="CreateContainer within sandbox \"0756c3a8ea503758084127275a1eaaa59160e1ed150b662ecf62052ce85f8a7b\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 9 07:16:36.426354 containerd[1457]: time="2024-10-09T07:16:36.426279452Z" level=info msg="CreateContainer within sandbox \"0756c3a8ea503758084127275a1eaaa59160e1ed150b662ecf62052ce85f8a7b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"9817d3dbbcb423fd59ca3deced2892efbbb1887bed4f886e3d4c95c839e0cb79\"" Oct 9 07:16:36.431088 containerd[1457]: time="2024-10-09T07:16:36.429599499Z" level=info msg="StartContainer for \"9817d3dbbcb423fd59ca3deced2892efbbb1887bed4f886e3d4c95c839e0cb79\"" Oct 9 07:16:36.440378 kubelet[2567]: E1009 07:16:36.440312 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cpr56" podUID="5460cbf3-3220-44d8-92e5-2d3cb02a666f" Oct 9 07:16:36.483322 systemd[1]: run-containerd-runc-k8s.io-9817d3dbbcb423fd59ca3deced2892efbbb1887bed4f886e3d4c95c839e0cb79-runc.kAAw25.mount: Deactivated successfully. Oct 9 07:16:36.491343 systemd[1]: Started cri-containerd-9817d3dbbcb423fd59ca3deced2892efbbb1887bed4f886e3d4c95c839e0cb79.scope - libcontainer container 9817d3dbbcb423fd59ca3deced2892efbbb1887bed4f886e3d4c95c839e0cb79. Oct 9 07:16:36.533343 containerd[1457]: time="2024-10-09T07:16:36.533265986Z" level=info msg="StartContainer for \"9817d3dbbcb423fd59ca3deced2892efbbb1887bed4f886e3d4c95c839e0cb79\" returns successfully" Oct 9 07:16:36.558103 systemd[1]: cri-containerd-9817d3dbbcb423fd59ca3deced2892efbbb1887bed4f886e3d4c95c839e0cb79.scope: Deactivated successfully. Oct 9 07:16:36.599677 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9817d3dbbcb423fd59ca3deced2892efbbb1887bed4f886e3d4c95c839e0cb79-rootfs.mount: Deactivated successfully. 
Oct 9 07:16:36.610953 kubelet[2567]: I1009 07:16:36.606842 2567 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 9 07:16:37.130938 containerd[1457]: time="2024-10-09T07:16:37.130246252Z" level=info msg="shim disconnected" id=9817d3dbbcb423fd59ca3deced2892efbbb1887bed4f886e3d4c95c839e0cb79 namespace=k8s.io Oct 9 07:16:37.130938 containerd[1457]: time="2024-10-09T07:16:37.130363420Z" level=warning msg="cleaning up after shim disconnected" id=9817d3dbbcb423fd59ca3deced2892efbbb1887bed4f886e3d4c95c839e0cb79 namespace=k8s.io Oct 9 07:16:37.130938 containerd[1457]: time="2024-10-09T07:16:37.130385834Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 07:16:37.615979 containerd[1457]: time="2024-10-09T07:16:37.615159875Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\"" Oct 9 07:16:38.441127 kubelet[2567]: E1009 07:16:38.440727 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cpr56" podUID="5460cbf3-3220-44d8-92e5-2d3cb02a666f" Oct 9 07:16:40.441878 kubelet[2567]: E1009 07:16:40.440580 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cpr56" podUID="5460cbf3-3220-44d8-92e5-2d3cb02a666f" Oct 9 07:16:42.442955 kubelet[2567]: E1009 07:16:42.441162 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cpr56" podUID="5460cbf3-3220-44d8-92e5-2d3cb02a666f" Oct 9 07:16:44.057342 
containerd[1457]: time="2024-10-09T07:16:44.056545749Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:16:44.059954 containerd[1457]: time="2024-10-09T07:16:44.058481880Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.1: active requests=0, bytes read=93083736" Oct 9 07:16:44.059954 containerd[1457]: time="2024-10-09T07:16:44.059815093Z" level=info msg="ImageCreate event name:\"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:16:44.063904 containerd[1457]: time="2024-10-09T07:16:44.063798149Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:16:44.065115 containerd[1457]: time="2024-10-09T07:16:44.064728287Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.1\" with image id \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\", size \"94576137\" in 6.449487405s" Oct 9 07:16:44.065115 containerd[1457]: time="2024-10-09T07:16:44.064767262Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\" returns image reference \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\"" Oct 9 07:16:44.071589 containerd[1457]: time="2024-10-09T07:16:44.071499754Z" level=info msg="CreateContainer within sandbox \"0756c3a8ea503758084127275a1eaaa59160e1ed150b662ecf62052ce85f8a7b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 9 07:16:44.276315 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2942560421.mount: Deactivated successfully. 
Oct 9 07:16:44.286064 containerd[1457]: time="2024-10-09T07:16:44.285934123Z" level=info msg="CreateContainer within sandbox \"0756c3a8ea503758084127275a1eaaa59160e1ed150b662ecf62052ce85f8a7b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a1e60acda2222882ce35e4b960700447c149fe6b732def521d866379184da654\"" Oct 9 07:16:44.287268 containerd[1457]: time="2024-10-09T07:16:44.286924707Z" level=info msg="StartContainer for \"a1e60acda2222882ce35e4b960700447c149fe6b732def521d866379184da654\"" Oct 9 07:16:44.440903 kubelet[2567]: E1009 07:16:44.440342 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cpr56" podUID="5460cbf3-3220-44d8-92e5-2d3cb02a666f" Oct 9 07:16:44.476287 systemd[1]: Started cri-containerd-a1e60acda2222882ce35e4b960700447c149fe6b732def521d866379184da654.scope - libcontainer container a1e60acda2222882ce35e4b960700447c149fe6b732def521d866379184da654. Oct 9 07:16:44.599393 containerd[1457]: time="2024-10-09T07:16:44.598717100Z" level=info msg="StartContainer for \"a1e60acda2222882ce35e4b960700447c149fe6b732def521d866379184da654\" returns successfully" Oct 9 07:16:46.441544 kubelet[2567]: E1009 07:16:46.441404 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cpr56" podUID="5460cbf3-3220-44d8-92e5-2d3cb02a666f" Oct 9 07:16:46.697655 systemd[1]: cri-containerd-a1e60acda2222882ce35e4b960700447c149fe6b732def521d866379184da654.scope: Deactivated successfully. 
Oct 9 07:16:46.753405 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a1e60acda2222882ce35e4b960700447c149fe6b732def521d866379184da654-rootfs.mount: Deactivated successfully. Oct 9 07:16:47.320117 kubelet[2567]: I1009 07:16:47.318895 2567 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Oct 9 07:16:48.081558 systemd[1]: Created slice kubepods-burstable-pod08f94e6c_424e_4778_9aa7_62a9cbd840ab.slice - libcontainer container kubepods-burstable-pod08f94e6c_424e_4778_9aa7_62a9cbd840ab.slice. Oct 9 07:16:48.093608 containerd[1457]: time="2024-10-09T07:16:48.091992991Z" level=info msg="shim disconnected" id=a1e60acda2222882ce35e4b960700447c149fe6b732def521d866379184da654 namespace=k8s.io Oct 9 07:16:48.093608 containerd[1457]: time="2024-10-09T07:16:48.092926430Z" level=warning msg="cleaning up after shim disconnected" id=a1e60acda2222882ce35e4b960700447c149fe6b732def521d866379184da654 namespace=k8s.io Oct 9 07:16:48.093608 containerd[1457]: time="2024-10-09T07:16:48.092952661Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 07:16:48.110197 systemd[1]: Created slice kubepods-burstable-pod6e2f0e11_0f1c_419e_b192_8ef6ffe93a48.slice - libcontainer container kubepods-burstable-pod6e2f0e11_0f1c_419e_b192_8ef6ffe93a48.slice. Oct 9 07:16:48.120637 systemd[1]: Created slice kubepods-besteffort-podd2215775_413f_4eb4_8f14_bbae43713b31.slice - libcontainer container kubepods-besteffort-podd2215775_413f_4eb4_8f14_bbae43713b31.slice. 
Oct 9 07:16:48.169803 kubelet[2567]: I1009 07:16:48.169742 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6649\" (UniqueName: \"kubernetes.io/projected/6e2f0e11-0f1c-419e-b192-8ef6ffe93a48-kube-api-access-t6649\") pod \"coredns-6f6b679f8f-pckdt\" (UID: \"6e2f0e11-0f1c-419e-b192-8ef6ffe93a48\") " pod="kube-system/coredns-6f6b679f8f-pckdt" Oct 9 07:16:48.170395 kubelet[2567]: I1009 07:16:48.170372 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-678hj\" (UniqueName: \"kubernetes.io/projected/08f94e6c-424e-4778-9aa7-62a9cbd840ab-kube-api-access-678hj\") pod \"coredns-6f6b679f8f-dl7ww\" (UID: \"08f94e6c-424e-4778-9aa7-62a9cbd840ab\") " pod="kube-system/coredns-6f6b679f8f-dl7ww" Oct 9 07:16:48.170483 kubelet[2567]: I1009 07:16:48.170466 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6e2f0e11-0f1c-419e-b192-8ef6ffe93a48-config-volume\") pod \"coredns-6f6b679f8f-pckdt\" (UID: \"6e2f0e11-0f1c-419e-b192-8ef6ffe93a48\") " pod="kube-system/coredns-6f6b679f8f-pckdt" Oct 9 07:16:48.170577 kubelet[2567]: I1009 07:16:48.170561 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/08f94e6c-424e-4778-9aa7-62a9cbd840ab-config-volume\") pod \"coredns-6f6b679f8f-dl7ww\" (UID: \"08f94e6c-424e-4778-9aa7-62a9cbd840ab\") " pod="kube-system/coredns-6f6b679f8f-dl7ww" Oct 9 07:16:48.271801 kubelet[2567]: I1009 07:16:48.271747 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5h6rh\" (UniqueName: \"kubernetes.io/projected/d2215775-413f-4eb4-8f14-bbae43713b31-kube-api-access-5h6rh\") pod \"calico-kube-controllers-c4bcf989c-7nvgq\" (UID: \"d2215775-413f-4eb4-8f14-bbae43713b31\") " 
pod="calico-system/calico-kube-controllers-c4bcf989c-7nvgq" Oct 9 07:16:48.272102 kubelet[2567]: I1009 07:16:48.272086 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d2215775-413f-4eb4-8f14-bbae43713b31-tigera-ca-bundle\") pod \"calico-kube-controllers-c4bcf989c-7nvgq\" (UID: \"d2215775-413f-4eb4-8f14-bbae43713b31\") " pod="calico-system/calico-kube-controllers-c4bcf989c-7nvgq" Oct 9 07:16:48.433473 containerd[1457]: time="2024-10-09T07:16:48.433380037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-pckdt,Uid:6e2f0e11-0f1c-419e-b192-8ef6ffe93a48,Namespace:kube-system,Attempt:0,}" Oct 9 07:16:48.434791 containerd[1457]: time="2024-10-09T07:16:48.433893772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-dl7ww,Uid:08f94e6c-424e-4778-9aa7-62a9cbd840ab,Namespace:kube-system,Attempt:0,}" Oct 9 07:16:48.434791 containerd[1457]: time="2024-10-09T07:16:48.433741009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c4bcf989c-7nvgq,Uid:d2215775-413f-4eb4-8f14-bbae43713b31,Namespace:calico-system,Attempt:0,}" Oct 9 07:16:48.459225 systemd[1]: Created slice kubepods-besteffort-pod5460cbf3_3220_44d8_92e5_2d3cb02a666f.slice - libcontainer container kubepods-besteffort-pod5460cbf3_3220_44d8_92e5_2d3cb02a666f.slice. 
Oct 9 07:16:48.466705 containerd[1457]: time="2024-10-09T07:16:48.466558660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cpr56,Uid:5460cbf3-3220-44d8-92e5-2d3cb02a666f,Namespace:calico-system,Attempt:0,}" Oct 9 07:16:48.675996 containerd[1457]: time="2024-10-09T07:16:48.675842376Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\"" Oct 9 07:16:49.741928 containerd[1457]: time="2024-10-09T07:16:49.741854020Z" level=error msg="Failed to destroy network for sandbox \"c321338cfb2372acf78155d9ded94bb1ddc869b911bd0c92277895715d21688f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:16:49.743486 containerd[1457]: time="2024-10-09T07:16:49.742454680Z" level=error msg="Failed to destroy network for sandbox \"40ec5ec3486a4649fc04440b65de0f26c8102ca6372e49d2fa0988c122aa271d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:16:49.754793 containerd[1457]: time="2024-10-09T07:16:49.753389874Z" level=error msg="encountered an error cleaning up failed sandbox \"40ec5ec3486a4649fc04440b65de0f26c8102ca6372e49d2fa0988c122aa271d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:16:49.754793 containerd[1457]: time="2024-10-09T07:16:49.753468004Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c4bcf989c-7nvgq,Uid:d2215775-413f-4eb4-8f14-bbae43713b31,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"40ec5ec3486a4649fc04440b65de0f26c8102ca6372e49d2fa0988c122aa271d\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:16:49.754793 containerd[1457]: time="2024-10-09T07:16:49.742498034Z" level=error msg="Failed to destroy network for sandbox \"324dcf5dd5358bfe635df0110f9652c27d0f872ee67a8cd3ca15349a7c1ee308\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:16:49.754793 containerd[1457]: time="2024-10-09T07:16:49.753811733Z" level=error msg="encountered an error cleaning up failed sandbox \"324dcf5dd5358bfe635df0110f9652c27d0f872ee67a8cd3ca15349a7c1ee308\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:16:49.754793 containerd[1457]: time="2024-10-09T07:16:49.753852340Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-pckdt,Uid:6e2f0e11-0f1c-419e-b192-8ef6ffe93a48,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"324dcf5dd5358bfe635df0110f9652c27d0f872ee67a8cd3ca15349a7c1ee308\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:16:49.754793 containerd[1457]: time="2024-10-09T07:16:49.754048496Z" level=error msg="encountered an error cleaning up failed sandbox \"c321338cfb2372acf78155d9ded94bb1ddc869b911bd0c92277895715d21688f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 
07:16:49.754793 containerd[1457]: time="2024-10-09T07:16:49.754090276Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-dl7ww,Uid:08f94e6c-424e-4778-9aa7-62a9cbd840ab,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c321338cfb2372acf78155d9ded94bb1ddc869b911bd0c92277895715d21688f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:16:49.757995 containerd[1457]: time="2024-10-09T07:16:49.757550867Z" level=error msg="Failed to destroy network for sandbox \"1123b521fba55693557e7f6f72a87f9837e0f978126665e7e9e30a28d3c5313f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:16:49.758059 kubelet[2567]: E1009 07:16:49.755047 2567 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c321338cfb2372acf78155d9ded94bb1ddc869b911bd0c92277895715d21688f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:16:49.758059 kubelet[2567]: E1009 07:16:49.755130 2567 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c321338cfb2372acf78155d9ded94bb1ddc869b911bd0c92277895715d21688f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-dl7ww" Oct 9 07:16:49.758059 kubelet[2567]: E1009 07:16:49.755154 2567 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"c321338cfb2372acf78155d9ded94bb1ddc869b911bd0c92277895715d21688f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-dl7ww" Oct 9 07:16:49.759195 kubelet[2567]: E1009 07:16:49.755207 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-dl7ww_kube-system(08f94e6c-424e-4778-9aa7-62a9cbd840ab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-dl7ww_kube-system(08f94e6c-424e-4778-9aa7-62a9cbd840ab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c321338cfb2372acf78155d9ded94bb1ddc869b911bd0c92277895715d21688f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-dl7ww" podUID="08f94e6c-424e-4778-9aa7-62a9cbd840ab" Oct 9 07:16:49.759195 kubelet[2567]: E1009 07:16:49.755502 2567 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40ec5ec3486a4649fc04440b65de0f26c8102ca6372e49d2fa0988c122aa271d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:16:49.759195 kubelet[2567]: E1009 07:16:49.755530 2567 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40ec5ec3486a4649fc04440b65de0f26c8102ca6372e49d2fa0988c122aa271d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c4bcf989c-7nvgq" Oct 9 07:16:49.759322 containerd[1457]: time="2024-10-09T07:16:49.758642458Z" level=error msg="encountered an error cleaning up failed sandbox \"1123b521fba55693557e7f6f72a87f9837e0f978126665e7e9e30a28d3c5313f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:16:49.759322 containerd[1457]: time="2024-10-09T07:16:49.758762538Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cpr56,Uid:5460cbf3-3220-44d8-92e5-2d3cb02a666f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1123b521fba55693557e7f6f72a87f9837e0f978126665e7e9e30a28d3c5313f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:16:49.759382 kubelet[2567]: E1009 07:16:49.755546 2567 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40ec5ec3486a4649fc04440b65de0f26c8102ca6372e49d2fa0988c122aa271d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c4bcf989c-7nvgq" Oct 9 07:16:49.759382 kubelet[2567]: E1009 07:16:49.755576 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-c4bcf989c-7nvgq_calico-system(d2215775-413f-4eb4-8f14-bbae43713b31)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-c4bcf989c-7nvgq_calico-system(d2215775-413f-4eb4-8f14-bbae43713b31)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"40ec5ec3486a4649fc04440b65de0f26c8102ca6372e49d2fa0988c122aa271d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-c4bcf989c-7nvgq" podUID="d2215775-413f-4eb4-8f14-bbae43713b31" Oct 9 07:16:49.759382 kubelet[2567]: E1009 07:16:49.755615 2567 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"324dcf5dd5358bfe635df0110f9652c27d0f872ee67a8cd3ca15349a7c1ee308\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:16:49.759500 kubelet[2567]: E1009 07:16:49.755637 2567 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"324dcf5dd5358bfe635df0110f9652c27d0f872ee67a8cd3ca15349a7c1ee308\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-pckdt" Oct 9 07:16:49.759500 kubelet[2567]: E1009 07:16:49.755653 2567 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"324dcf5dd5358bfe635df0110f9652c27d0f872ee67a8cd3ca15349a7c1ee308\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-pckdt" Oct 9 07:16:49.759500 kubelet[2567]: E1009 
07:16:49.755679 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-pckdt_kube-system(6e2f0e11-0f1c-419e-b192-8ef6ffe93a48)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-pckdt_kube-system(6e2f0e11-0f1c-419e-b192-8ef6ffe93a48)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"324dcf5dd5358bfe635df0110f9652c27d0f872ee67a8cd3ca15349a7c1ee308\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-pckdt" podUID="6e2f0e11-0f1c-419e-b192-8ef6ffe93a48" Oct 9 07:16:49.759607 kubelet[2567]: E1009 07:16:49.758956 2567 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1123b521fba55693557e7f6f72a87f9837e0f978126665e7e9e30a28d3c5313f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:16:49.759607 kubelet[2567]: E1009 07:16:49.759061 2567 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1123b521fba55693557e7f6f72a87f9837e0f978126665e7e9e30a28d3c5313f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cpr56" Oct 9 07:16:49.759607 kubelet[2567]: E1009 07:16:49.759085 2567 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1123b521fba55693557e7f6f72a87f9837e0f978126665e7e9e30a28d3c5313f\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cpr56" Oct 9 07:16:49.760223 kubelet[2567]: E1009 07:16:49.760115 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-cpr56_calico-system(5460cbf3-3220-44d8-92e5-2d3cb02a666f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-cpr56_calico-system(5460cbf3-3220-44d8-92e5-2d3cb02a666f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1123b521fba55693557e7f6f72a87f9837e0f978126665e7e9e30a28d3c5313f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cpr56" podUID="5460cbf3-3220-44d8-92e5-2d3cb02a666f" Oct 9 07:16:50.372556 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-40ec5ec3486a4649fc04440b65de0f26c8102ca6372e49d2fa0988c122aa271d-shm.mount: Deactivated successfully. Oct 9 07:16:50.373602 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c321338cfb2372acf78155d9ded94bb1ddc869b911bd0c92277895715d21688f-shm.mount: Deactivated successfully. Oct 9 07:16:50.374273 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1123b521fba55693557e7f6f72a87f9837e0f978126665e7e9e30a28d3c5313f-shm.mount: Deactivated successfully. Oct 9 07:16:50.374484 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-324dcf5dd5358bfe635df0110f9652c27d0f872ee67a8cd3ca15349a7c1ee308-shm.mount: Deactivated successfully. 
Oct 9 07:16:50.684811 kubelet[2567]: I1009 07:16:50.684278 2567 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c321338cfb2372acf78155d9ded94bb1ddc869b911bd0c92277895715d21688f" Oct 9 07:16:50.705070 containerd[1457]: time="2024-10-09T07:16:50.704951096Z" level=info msg="StopPodSandbox for \"c321338cfb2372acf78155d9ded94bb1ddc869b911bd0c92277895715d21688f\"" Oct 9 07:16:50.705583 containerd[1457]: time="2024-10-09T07:16:50.705532369Z" level=info msg="Ensure that sandbox c321338cfb2372acf78155d9ded94bb1ddc869b911bd0c92277895715d21688f in task-service has been cleanup successfully" Oct 9 07:16:50.739209 kubelet[2567]: I1009 07:16:50.739158 2567 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40ec5ec3486a4649fc04440b65de0f26c8102ca6372e49d2fa0988c122aa271d" Oct 9 07:16:50.741224 containerd[1457]: time="2024-10-09T07:16:50.741064585Z" level=info msg="StopPodSandbox for \"40ec5ec3486a4649fc04440b65de0f26c8102ca6372e49d2fa0988c122aa271d\"" Oct 9 07:16:50.741570 containerd[1457]: time="2024-10-09T07:16:50.741511321Z" level=info msg="Ensure that sandbox 40ec5ec3486a4649fc04440b65de0f26c8102ca6372e49d2fa0988c122aa271d in task-service has been cleanup successfully" Oct 9 07:16:50.745581 kubelet[2567]: I1009 07:16:50.745420 2567 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="324dcf5dd5358bfe635df0110f9652c27d0f872ee67a8cd3ca15349a7c1ee308" Oct 9 07:16:50.748315 containerd[1457]: time="2024-10-09T07:16:50.748281005Z" level=info msg="StopPodSandbox for \"324dcf5dd5358bfe635df0110f9652c27d0f872ee67a8cd3ca15349a7c1ee308\"" Oct 9 07:16:50.750456 containerd[1457]: time="2024-10-09T07:16:50.750206091Z" level=info msg="Ensure that sandbox 324dcf5dd5358bfe635df0110f9652c27d0f872ee67a8cd3ca15349a7c1ee308 in task-service has been cleanup successfully" Oct 9 07:16:50.761334 kubelet[2567]: I1009 07:16:50.761144 2567 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="1123b521fba55693557e7f6f72a87f9837e0f978126665e7e9e30a28d3c5313f" Oct 9 07:16:50.762782 containerd[1457]: time="2024-10-09T07:16:50.762294972Z" level=info msg="StopPodSandbox for \"1123b521fba55693557e7f6f72a87f9837e0f978126665e7e9e30a28d3c5313f\"" Oct 9 07:16:50.762782 containerd[1457]: time="2024-10-09T07:16:50.762534642Z" level=info msg="Ensure that sandbox 1123b521fba55693557e7f6f72a87f9837e0f978126665e7e9e30a28d3c5313f in task-service has been cleanup successfully" Oct 9 07:16:50.842413 containerd[1457]: time="2024-10-09T07:16:50.842345101Z" level=error msg="StopPodSandbox for \"c321338cfb2372acf78155d9ded94bb1ddc869b911bd0c92277895715d21688f\" failed" error="failed to destroy network for sandbox \"c321338cfb2372acf78155d9ded94bb1ddc869b911bd0c92277895715d21688f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:16:50.843000 kubelet[2567]: E1009 07:16:50.842949 2567 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c321338cfb2372acf78155d9ded94bb1ddc869b911bd0c92277895715d21688f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c321338cfb2372acf78155d9ded94bb1ddc869b911bd0c92277895715d21688f" Oct 9 07:16:50.843144 kubelet[2567]: E1009 07:16:50.843049 2567 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c321338cfb2372acf78155d9ded94bb1ddc869b911bd0c92277895715d21688f"} Oct 9 07:16:50.843200 kubelet[2567]: E1009 07:16:50.843137 2567 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"08f94e6c-424e-4778-9aa7-62a9cbd840ab\" with KillPodSandboxError: \"rpc error: code = Unknown desc 
= failed to destroy network for sandbox \\\"c321338cfb2372acf78155d9ded94bb1ddc869b911bd0c92277895715d21688f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 9 07:16:50.843200 kubelet[2567]: E1009 07:16:50.843173 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"08f94e6c-424e-4778-9aa7-62a9cbd840ab\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c321338cfb2372acf78155d9ded94bb1ddc869b911bd0c92277895715d21688f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-dl7ww" podUID="08f94e6c-424e-4778-9aa7-62a9cbd840ab" Oct 9 07:16:50.847838 containerd[1457]: time="2024-10-09T07:16:50.847780791Z" level=error msg="StopPodSandbox for \"1123b521fba55693557e7f6f72a87f9837e0f978126665e7e9e30a28d3c5313f\" failed" error="failed to destroy network for sandbox \"1123b521fba55693557e7f6f72a87f9837e0f978126665e7e9e30a28d3c5313f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:16:50.848369 kubelet[2567]: E1009 07:16:50.848072 2567 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1123b521fba55693557e7f6f72a87f9837e0f978126665e7e9e30a28d3c5313f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1123b521fba55693557e7f6f72a87f9837e0f978126665e7e9e30a28d3c5313f" Oct 9 07:16:50.848369 kubelet[2567]: E1009 
07:16:50.848131 2567 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1123b521fba55693557e7f6f72a87f9837e0f978126665e7e9e30a28d3c5313f"} Oct 9 07:16:50.848369 kubelet[2567]: E1009 07:16:50.848176 2567 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5460cbf3-3220-44d8-92e5-2d3cb02a666f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1123b521fba55693557e7f6f72a87f9837e0f978126665e7e9e30a28d3c5313f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 9 07:16:50.848369 kubelet[2567]: E1009 07:16:50.848209 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5460cbf3-3220-44d8-92e5-2d3cb02a666f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1123b521fba55693557e7f6f72a87f9837e0f978126665e7e9e30a28d3c5313f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cpr56" podUID="5460cbf3-3220-44d8-92e5-2d3cb02a666f" Oct 9 07:16:50.849148 containerd[1457]: time="2024-10-09T07:16:50.849112912Z" level=error msg="StopPodSandbox for \"40ec5ec3486a4649fc04440b65de0f26c8102ca6372e49d2fa0988c122aa271d\" failed" error="failed to destroy network for sandbox \"40ec5ec3486a4649fc04440b65de0f26c8102ca6372e49d2fa0988c122aa271d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:16:50.849292 kubelet[2567]: E1009 07:16:50.849257 2567 log.go:32] "StopPodSandbox from runtime service failed" err="rpc 
error: code = Unknown desc = failed to destroy network for sandbox \"40ec5ec3486a4649fc04440b65de0f26c8102ca6372e49d2fa0988c122aa271d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="40ec5ec3486a4649fc04440b65de0f26c8102ca6372e49d2fa0988c122aa271d" Oct 9 07:16:50.849339 kubelet[2567]: E1009 07:16:50.849304 2567 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"40ec5ec3486a4649fc04440b65de0f26c8102ca6372e49d2fa0988c122aa271d"} Oct 9 07:16:50.849378 kubelet[2567]: E1009 07:16:50.849338 2567 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d2215775-413f-4eb4-8f14-bbae43713b31\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"40ec5ec3486a4649fc04440b65de0f26c8102ca6372e49d2fa0988c122aa271d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 9 07:16:50.849378 kubelet[2567]: E1009 07:16:50.849363 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d2215775-413f-4eb4-8f14-bbae43713b31\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"40ec5ec3486a4649fc04440b65de0f26c8102ca6372e49d2fa0988c122aa271d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-c4bcf989c-7nvgq" podUID="d2215775-413f-4eb4-8f14-bbae43713b31" Oct 9 07:16:50.865082 containerd[1457]: time="2024-10-09T07:16:50.854556487Z" level=error msg="StopPodSandbox for 
\"324dcf5dd5358bfe635df0110f9652c27d0f872ee67a8cd3ca15349a7c1ee308\" failed" error="failed to destroy network for sandbox \"324dcf5dd5358bfe635df0110f9652c27d0f872ee67a8cd3ca15349a7c1ee308\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:16:50.865951 kubelet[2567]: E1009 07:16:50.864981 2567 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"324dcf5dd5358bfe635df0110f9652c27d0f872ee67a8cd3ca15349a7c1ee308\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="324dcf5dd5358bfe635df0110f9652c27d0f872ee67a8cd3ca15349a7c1ee308" Oct 9 07:16:50.865951 kubelet[2567]: E1009 07:16:50.865060 2567 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"324dcf5dd5358bfe635df0110f9652c27d0f872ee67a8cd3ca15349a7c1ee308"} Oct 9 07:16:50.865951 kubelet[2567]: E1009 07:16:50.865105 2567 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6e2f0e11-0f1c-419e-b192-8ef6ffe93a48\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"324dcf5dd5358bfe635df0110f9652c27d0f872ee67a8cd3ca15349a7c1ee308\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 9 07:16:50.865951 kubelet[2567]: E1009 07:16:50.865133 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6e2f0e11-0f1c-419e-b192-8ef6ffe93a48\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"324dcf5dd5358bfe635df0110f9652c27d0f872ee67a8cd3ca15349a7c1ee308\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-pckdt" podUID="6e2f0e11-0f1c-419e-b192-8ef6ffe93a48" Oct 9 07:16:52.862083 kubelet[2567]: I1009 07:16:52.859985 2567 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 9 07:16:57.211351 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3930408006.mount: Deactivated successfully. Oct 9 07:16:57.317079 containerd[1457]: time="2024-10-09T07:16:57.316143884Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.1: active requests=0, bytes read=117873564" Oct 9 07:16:57.317079 containerd[1457]: time="2024-10-09T07:16:57.307894773Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:16:57.320695 containerd[1457]: time="2024-10-09T07:16:57.320646056Z" level=info msg="ImageCreate event name:\"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:16:57.322061 containerd[1457]: time="2024-10-09T07:16:57.321984870Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:16:57.323418 containerd[1457]: time="2024-10-09T07:16:57.323344895Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.1\" with image id \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\", size \"117873426\" in 8.647418097s" Oct 
9 07:16:57.323513 containerd[1457]: time="2024-10-09T07:16:57.323443553Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\" returns image reference \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\"" Oct 9 07:16:57.350713 containerd[1457]: time="2024-10-09T07:16:57.350583002Z" level=info msg="CreateContainer within sandbox \"0756c3a8ea503758084127275a1eaaa59160e1ed150b662ecf62052ce85f8a7b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 9 07:16:57.536738 containerd[1457]: time="2024-10-09T07:16:57.536073530Z" level=info msg="CreateContainer within sandbox \"0756c3a8ea503758084127275a1eaaa59160e1ed150b662ecf62052ce85f8a7b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e26d3cb097854562d3e448bcca77161b54717202f0bd14286184bfa183370003\"" Oct 9 07:16:57.540306 containerd[1457]: time="2024-10-09T07:16:57.538728035Z" level=info msg="StartContainer for \"e26d3cb097854562d3e448bcca77161b54717202f0bd14286184bfa183370003\"" Oct 9 07:16:57.708238 systemd[1]: Started cri-containerd-e26d3cb097854562d3e448bcca77161b54717202f0bd14286184bfa183370003.scope - libcontainer container e26d3cb097854562d3e448bcca77161b54717202f0bd14286184bfa183370003. 
Oct 9 07:16:57.808612 containerd[1457]: time="2024-10-09T07:16:57.808376495Z" level=info msg="StartContainer for \"e26d3cb097854562d3e448bcca77161b54717202f0bd14286184bfa183370003\" returns successfully" Oct 9 07:16:57.835755 kubelet[2567]: I1009 07:16:57.835178 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-ptndd" podStartSLOduration=1.815226459 podStartE2EDuration="27.835158902s" podCreationTimestamp="2024-10-09 07:16:30 +0000 UTC" firstStartedPulling="2024-10-09 07:16:31.304884912 +0000 UTC m=+14.101450352" lastFinishedPulling="2024-10-09 07:16:57.324817345 +0000 UTC m=+40.121382795" observedRunningTime="2024-10-09 07:16:57.833970304 +0000 UTC m=+40.630535744" watchObservedRunningTime="2024-10-09 07:16:57.835158902 +0000 UTC m=+40.631724342" Oct 9 07:16:58.641629 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 9 07:16:58.642684 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Oct 9 07:17:01.443089 containerd[1457]: time="2024-10-09T07:17:01.442103467Z" level=info msg="StopPodSandbox for \"c321338cfb2372acf78155d9ded94bb1ddc869b911bd0c92277895715d21688f\"" Oct 9 07:17:01.627199 kernel: bpftool[3691]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Oct 9 07:17:02.113397 systemd-networkd[1373]: vxlan.calico: Link UP Oct 9 07:17:02.113408 systemd-networkd[1373]: vxlan.calico: Gained carrier Oct 9 07:17:02.176053 containerd[1457]: 2024-10-09 07:17:01.619 [INFO][3667] k8s.go 608: Cleaning up netns ContainerID="c321338cfb2372acf78155d9ded94bb1ddc869b911bd0c92277895715d21688f" Oct 9 07:17:02.176053 containerd[1457]: 2024-10-09 07:17:01.620 [INFO][3667] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="c321338cfb2372acf78155d9ded94bb1ddc869b911bd0c92277895715d21688f" iface="eth0" netns="/var/run/netns/cni-ebd5f4fc-dc0f-9197-cfeb-e7c238b2bcaf" Oct 9 07:17:02.176053 containerd[1457]: 2024-10-09 07:17:01.620 [INFO][3667] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="c321338cfb2372acf78155d9ded94bb1ddc869b911bd0c92277895715d21688f" iface="eth0" netns="/var/run/netns/cni-ebd5f4fc-dc0f-9197-cfeb-e7c238b2bcaf" Oct 9 07:17:02.176053 containerd[1457]: 2024-10-09 07:17:01.622 [INFO][3667] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="c321338cfb2372acf78155d9ded94bb1ddc869b911bd0c92277895715d21688f" iface="eth0" netns="/var/run/netns/cni-ebd5f4fc-dc0f-9197-cfeb-e7c238b2bcaf" Oct 9 07:17:02.176053 containerd[1457]: 2024-10-09 07:17:01.622 [INFO][3667] k8s.go 615: Releasing IP address(es) ContainerID="c321338cfb2372acf78155d9ded94bb1ddc869b911bd0c92277895715d21688f" Oct 9 07:17:02.176053 containerd[1457]: 2024-10-09 07:17:01.622 [INFO][3667] utils.go 188: Calico CNI releasing IP address ContainerID="c321338cfb2372acf78155d9ded94bb1ddc869b911bd0c92277895715d21688f" Oct 9 07:17:02.176053 containerd[1457]: 2024-10-09 07:17:02.117 [INFO][3695] ipam_plugin.go 417: Releasing address using handleID ContainerID="c321338cfb2372acf78155d9ded94bb1ddc869b911bd0c92277895715d21688f" HandleID="k8s-pod-network.c321338cfb2372acf78155d9ded94bb1ddc869b911bd0c92277895715d21688f" Workload="ci--3975--2--2--4--dcc5873578.novalocal-k8s-coredns--6f6b679f8f--dl7ww-eth0" Oct 9 07:17:02.176053 containerd[1457]: 2024-10-09 07:17:02.119 [INFO][3695] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:17:02.176053 containerd[1457]: 2024-10-09 07:17:02.120 [INFO][3695] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:17:02.176053 containerd[1457]: 2024-10-09 07:17:02.151 [WARNING][3695] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c321338cfb2372acf78155d9ded94bb1ddc869b911bd0c92277895715d21688f" HandleID="k8s-pod-network.c321338cfb2372acf78155d9ded94bb1ddc869b911bd0c92277895715d21688f" Workload="ci--3975--2--2--4--dcc5873578.novalocal-k8s-coredns--6f6b679f8f--dl7ww-eth0" Oct 9 07:17:02.176053 containerd[1457]: 2024-10-09 07:17:02.151 [INFO][3695] ipam_plugin.go 445: Releasing address using workloadID ContainerID="c321338cfb2372acf78155d9ded94bb1ddc869b911bd0c92277895715d21688f" HandleID="k8s-pod-network.c321338cfb2372acf78155d9ded94bb1ddc869b911bd0c92277895715d21688f" Workload="ci--3975--2--2--4--dcc5873578.novalocal-k8s-coredns--6f6b679f8f--dl7ww-eth0" Oct 9 07:17:02.176053 containerd[1457]: 2024-10-09 07:17:02.154 [INFO][3695] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:17:02.176053 containerd[1457]: 2024-10-09 07:17:02.160 [INFO][3667] k8s.go 621: Teardown processing complete. ContainerID="c321338cfb2372acf78155d9ded94bb1ddc869b911bd0c92277895715d21688f" Oct 9 07:17:02.183335 containerd[1457]: time="2024-10-09T07:17:02.176521521Z" level=info msg="TearDown network for sandbox \"c321338cfb2372acf78155d9ded94bb1ddc869b911bd0c92277895715d21688f\" successfully" Oct 9 07:17:02.183335 containerd[1457]: time="2024-10-09T07:17:02.176608137Z" level=info msg="StopPodSandbox for \"c321338cfb2372acf78155d9ded94bb1ddc869b911bd0c92277895715d21688f\" returns successfully" Oct 9 07:17:02.178315 systemd[1]: run-netns-cni\x2debd5f4fc\x2ddc0f\x2d9197\x2dcfeb\x2de7c238b2bcaf.mount: Deactivated successfully. 
Oct 9 07:17:02.353702 containerd[1457]: time="2024-10-09T07:17:02.353634401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-dl7ww,Uid:08f94e6c-424e-4778-9aa7-62a9cbd840ab,Namespace:kube-system,Attempt:1,}" Oct 9 07:17:02.448263 containerd[1457]: time="2024-10-09T07:17:02.445401274Z" level=info msg="StopPodSandbox for \"324dcf5dd5358bfe635df0110f9652c27d0f872ee67a8cd3ca15349a7c1ee308\"" Oct 9 07:17:02.734398 containerd[1457]: 2024-10-09 07:17:02.633 [INFO][3760] k8s.go 608: Cleaning up netns ContainerID="324dcf5dd5358bfe635df0110f9652c27d0f872ee67a8cd3ca15349a7c1ee308" Oct 9 07:17:02.734398 containerd[1457]: 2024-10-09 07:17:02.633 [INFO][3760] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="324dcf5dd5358bfe635df0110f9652c27d0f872ee67a8cd3ca15349a7c1ee308" iface="eth0" netns="/var/run/netns/cni-3b1f1334-b422-a7e0-d0ae-7c3df015072d" Oct 9 07:17:02.734398 containerd[1457]: 2024-10-09 07:17:02.634 [INFO][3760] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="324dcf5dd5358bfe635df0110f9652c27d0f872ee67a8cd3ca15349a7c1ee308" iface="eth0" netns="/var/run/netns/cni-3b1f1334-b422-a7e0-d0ae-7c3df015072d" Oct 9 07:17:02.734398 containerd[1457]: 2024-10-09 07:17:02.638 [INFO][3760] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="324dcf5dd5358bfe635df0110f9652c27d0f872ee67a8cd3ca15349a7c1ee308" iface="eth0" netns="/var/run/netns/cni-3b1f1334-b422-a7e0-d0ae-7c3df015072d" Oct 9 07:17:02.734398 containerd[1457]: 2024-10-09 07:17:02.638 [INFO][3760] k8s.go 615: Releasing IP address(es) ContainerID="324dcf5dd5358bfe635df0110f9652c27d0f872ee67a8cd3ca15349a7c1ee308" Oct 9 07:17:02.734398 containerd[1457]: 2024-10-09 07:17:02.638 [INFO][3760] utils.go 188: Calico CNI releasing IP address ContainerID="324dcf5dd5358bfe635df0110f9652c27d0f872ee67a8cd3ca15349a7c1ee308" Oct 9 07:17:02.734398 containerd[1457]: 2024-10-09 07:17:02.693 [INFO][3779] ipam_plugin.go 417: Releasing address using handleID ContainerID="324dcf5dd5358bfe635df0110f9652c27d0f872ee67a8cd3ca15349a7c1ee308" HandleID="k8s-pod-network.324dcf5dd5358bfe635df0110f9652c27d0f872ee67a8cd3ca15349a7c1ee308" Workload="ci--3975--2--2--4--dcc5873578.novalocal-k8s-coredns--6f6b679f8f--pckdt-eth0" Oct 9 07:17:02.734398 containerd[1457]: 2024-10-09 07:17:02.695 [INFO][3779] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:17:02.734398 containerd[1457]: 2024-10-09 07:17:02.695 [INFO][3779] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:17:02.734398 containerd[1457]: 2024-10-09 07:17:02.708 [WARNING][3779] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="324dcf5dd5358bfe635df0110f9652c27d0f872ee67a8cd3ca15349a7c1ee308" HandleID="k8s-pod-network.324dcf5dd5358bfe635df0110f9652c27d0f872ee67a8cd3ca15349a7c1ee308" Workload="ci--3975--2--2--4--dcc5873578.novalocal-k8s-coredns--6f6b679f8f--pckdt-eth0" Oct 9 07:17:02.734398 containerd[1457]: 2024-10-09 07:17:02.709 [INFO][3779] ipam_plugin.go 445: Releasing address using workloadID ContainerID="324dcf5dd5358bfe635df0110f9652c27d0f872ee67a8cd3ca15349a7c1ee308" HandleID="k8s-pod-network.324dcf5dd5358bfe635df0110f9652c27d0f872ee67a8cd3ca15349a7c1ee308" Workload="ci--3975--2--2--4--dcc5873578.novalocal-k8s-coredns--6f6b679f8f--pckdt-eth0" Oct 9 07:17:02.734398 containerd[1457]: 2024-10-09 07:17:02.711 [INFO][3779] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:17:02.734398 containerd[1457]: 2024-10-09 07:17:02.727 [INFO][3760] k8s.go 621: Teardown processing complete. ContainerID="324dcf5dd5358bfe635df0110f9652c27d0f872ee67a8cd3ca15349a7c1ee308" Oct 9 07:17:02.738692 containerd[1457]: time="2024-10-09T07:17:02.738599284Z" level=info msg="TearDown network for sandbox \"324dcf5dd5358bfe635df0110f9652c27d0f872ee67a8cd3ca15349a7c1ee308\" successfully" Oct 9 07:17:02.738796 containerd[1457]: time="2024-10-09T07:17:02.738778585Z" level=info msg="StopPodSandbox for \"324dcf5dd5358bfe635df0110f9652c27d0f872ee67a8cd3ca15349a7c1ee308\" returns successfully" Oct 9 07:17:02.740475 containerd[1457]: time="2024-10-09T07:17:02.740433345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-pckdt,Uid:6e2f0e11-0f1c-419e-b192-8ef6ffe93a48,Namespace:kube-system,Attempt:1,}" Oct 9 07:17:02.743211 systemd[1]: run-netns-cni\x2d3b1f1334\x2db422\x2da7e0\x2dd0ae\x2d7c3df015072d.mount: Deactivated successfully. 
Oct 9 07:17:02.947538 systemd-networkd[1373]: calie064d81dc2b: Link UP Oct 9 07:17:02.947721 systemd-networkd[1373]: calie064d81dc2b: Gained carrier Oct 9 07:17:02.987128 containerd[1457]: 2024-10-09 07:17:02.651 [INFO][3764] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975--2--2--4--dcc5873578.novalocal-k8s-coredns--6f6b679f8f--dl7ww-eth0 coredns-6f6b679f8f- kube-system 08f94e6c-424e-4778-9aa7-62a9cbd840ab 717 0 2024-10-09 07:16:23 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3975-2-2-4-dcc5873578.novalocal coredns-6f6b679f8f-dl7ww eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie064d81dc2b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="2dca9087ee76c0afc90eba00bb617b51e1c7375e77c95feee826e4df6c5caae3" Namespace="kube-system" Pod="coredns-6f6b679f8f-dl7ww" WorkloadEndpoint="ci--3975--2--2--4--dcc5873578.novalocal-k8s-coredns--6f6b679f8f--dl7ww-" Oct 9 07:17:02.987128 containerd[1457]: 2024-10-09 07:17:02.651 [INFO][3764] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2dca9087ee76c0afc90eba00bb617b51e1c7375e77c95feee826e4df6c5caae3" Namespace="kube-system" Pod="coredns-6f6b679f8f-dl7ww" WorkloadEndpoint="ci--3975--2--2--4--dcc5873578.novalocal-k8s-coredns--6f6b679f8f--dl7ww-eth0" Oct 9 07:17:02.987128 containerd[1457]: 2024-10-09 07:17:02.767 [INFO][3788] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2dca9087ee76c0afc90eba00bb617b51e1c7375e77c95feee826e4df6c5caae3" HandleID="k8s-pod-network.2dca9087ee76c0afc90eba00bb617b51e1c7375e77c95feee826e4df6c5caae3" Workload="ci--3975--2--2--4--dcc5873578.novalocal-k8s-coredns--6f6b679f8f--dl7ww-eth0" Oct 9 07:17:02.987128 containerd[1457]: 2024-10-09 07:17:02.791 [INFO][3788] ipam_plugin.go 270: Auto assigning IP 
ContainerID="2dca9087ee76c0afc90eba00bb617b51e1c7375e77c95feee826e4df6c5caae3" HandleID="k8s-pod-network.2dca9087ee76c0afc90eba00bb617b51e1c7375e77c95feee826e4df6c5caae3" Workload="ci--3975--2--2--4--dcc5873578.novalocal-k8s-coredns--6f6b679f8f--dl7ww-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000050450), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3975-2-2-4-dcc5873578.novalocal", "pod":"coredns-6f6b679f8f-dl7ww", "timestamp":"2024-10-09 07:17:02.767602111 +0000 UTC"}, Hostname:"ci-3975-2-2-4-dcc5873578.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 07:17:02.987128 containerd[1457]: 2024-10-09 07:17:02.793 [INFO][3788] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:17:02.987128 containerd[1457]: 2024-10-09 07:17:02.794 [INFO][3788] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 07:17:02.987128 containerd[1457]: 2024-10-09 07:17:02.794 [INFO][3788] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975-2-2-4-dcc5873578.novalocal' Oct 9 07:17:02.987128 containerd[1457]: 2024-10-09 07:17:02.805 [INFO][3788] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2dca9087ee76c0afc90eba00bb617b51e1c7375e77c95feee826e4df6c5caae3" host="ci-3975-2-2-4-dcc5873578.novalocal" Oct 9 07:17:02.987128 containerd[1457]: 2024-10-09 07:17:02.847 [INFO][3788] ipam.go 372: Looking up existing affinities for host host="ci-3975-2-2-4-dcc5873578.novalocal" Oct 9 07:17:02.987128 containerd[1457]: 2024-10-09 07:17:02.876 [INFO][3788] ipam.go 489: Trying affinity for 192.168.101.0/26 host="ci-3975-2-2-4-dcc5873578.novalocal" Oct 9 07:17:02.987128 containerd[1457]: 2024-10-09 07:17:02.881 [INFO][3788] ipam.go 155: Attempting to load block cidr=192.168.101.0/26 host="ci-3975-2-2-4-dcc5873578.novalocal" Oct 9 07:17:02.987128 containerd[1457]: 2024-10-09 07:17:02.888 [INFO][3788] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.101.0/26 host="ci-3975-2-2-4-dcc5873578.novalocal" Oct 9 07:17:02.987128 containerd[1457]: 2024-10-09 07:17:02.889 [INFO][3788] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.101.0/26 handle="k8s-pod-network.2dca9087ee76c0afc90eba00bb617b51e1c7375e77c95feee826e4df6c5caae3" host="ci-3975-2-2-4-dcc5873578.novalocal" Oct 9 07:17:02.987128 containerd[1457]: 2024-10-09 07:17:02.892 [INFO][3788] ipam.go 1685: Creating new handle: k8s-pod-network.2dca9087ee76c0afc90eba00bb617b51e1c7375e77c95feee826e4df6c5caae3 Oct 9 07:17:02.987128 containerd[1457]: 2024-10-09 07:17:02.908 [INFO][3788] ipam.go 1203: Writing block in order to claim IPs block=192.168.101.0/26 handle="k8s-pod-network.2dca9087ee76c0afc90eba00bb617b51e1c7375e77c95feee826e4df6c5caae3" host="ci-3975-2-2-4-dcc5873578.novalocal" Oct 9 07:17:02.987128 containerd[1457]: 2024-10-09 07:17:02.933 [INFO][3788] 
ipam.go 1216: Successfully claimed IPs: [192.168.101.1/26] block=192.168.101.0/26 handle="k8s-pod-network.2dca9087ee76c0afc90eba00bb617b51e1c7375e77c95feee826e4df6c5caae3" host="ci-3975-2-2-4-dcc5873578.novalocal" Oct 9 07:17:02.987128 containerd[1457]: 2024-10-09 07:17:02.933 [INFO][3788] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.101.1/26] handle="k8s-pod-network.2dca9087ee76c0afc90eba00bb617b51e1c7375e77c95feee826e4df6c5caae3" host="ci-3975-2-2-4-dcc5873578.novalocal" Oct 9 07:17:02.987128 containerd[1457]: 2024-10-09 07:17:02.934 [INFO][3788] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:17:02.987128 containerd[1457]: 2024-10-09 07:17:02.934 [INFO][3788] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.101.1/26] IPv6=[] ContainerID="2dca9087ee76c0afc90eba00bb617b51e1c7375e77c95feee826e4df6c5caae3" HandleID="k8s-pod-network.2dca9087ee76c0afc90eba00bb617b51e1c7375e77c95feee826e4df6c5caae3" Workload="ci--3975--2--2--4--dcc5873578.novalocal-k8s-coredns--6f6b679f8f--dl7ww-eth0" Oct 9 07:17:02.988967 containerd[1457]: 2024-10-09 07:17:02.938 [INFO][3764] k8s.go 386: Populated endpoint ContainerID="2dca9087ee76c0afc90eba00bb617b51e1c7375e77c95feee826e4df6c5caae3" Namespace="kube-system" Pod="coredns-6f6b679f8f-dl7ww" WorkloadEndpoint="ci--3975--2--2--4--dcc5873578.novalocal-k8s-coredns--6f6b679f8f--dl7ww-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--4--dcc5873578.novalocal-k8s-coredns--6f6b679f8f--dl7ww-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"08f94e6c-424e-4778-9aa7-62a9cbd840ab", ResourceVersion:"717", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 16, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", 
"projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-4-dcc5873578.novalocal", ContainerID:"", Pod:"coredns-6f6b679f8f-dl7ww", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.101.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie064d81dc2b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:17:02.988967 containerd[1457]: 2024-10-09 07:17:02.941 [INFO][3764] k8s.go 387: Calico CNI using IPs: [192.168.101.1/32] ContainerID="2dca9087ee76c0afc90eba00bb617b51e1c7375e77c95feee826e4df6c5caae3" Namespace="kube-system" Pod="coredns-6f6b679f8f-dl7ww" WorkloadEndpoint="ci--3975--2--2--4--dcc5873578.novalocal-k8s-coredns--6f6b679f8f--dl7ww-eth0" Oct 9 07:17:02.988967 containerd[1457]: 2024-10-09 07:17:02.942 [INFO][3764] dataplane_linux.go 68: Setting the host side veth name to calie064d81dc2b ContainerID="2dca9087ee76c0afc90eba00bb617b51e1c7375e77c95feee826e4df6c5caae3" Namespace="kube-system" Pod="coredns-6f6b679f8f-dl7ww" WorkloadEndpoint="ci--3975--2--2--4--dcc5873578.novalocal-k8s-coredns--6f6b679f8f--dl7ww-eth0" Oct 9 07:17:02.988967 containerd[1457]: 2024-10-09 07:17:02.947 [INFO][3764] 
dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="2dca9087ee76c0afc90eba00bb617b51e1c7375e77c95feee826e4df6c5caae3" Namespace="kube-system" Pod="coredns-6f6b679f8f-dl7ww" WorkloadEndpoint="ci--3975--2--2--4--dcc5873578.novalocal-k8s-coredns--6f6b679f8f--dl7ww-eth0" Oct 9 07:17:02.988967 containerd[1457]: 2024-10-09 07:17:02.948 [INFO][3764] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2dca9087ee76c0afc90eba00bb617b51e1c7375e77c95feee826e4df6c5caae3" Namespace="kube-system" Pod="coredns-6f6b679f8f-dl7ww" WorkloadEndpoint="ci--3975--2--2--4--dcc5873578.novalocal-k8s-coredns--6f6b679f8f--dl7ww-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--4--dcc5873578.novalocal-k8s-coredns--6f6b679f8f--dl7ww-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"08f94e6c-424e-4778-9aa7-62a9cbd840ab", ResourceVersion:"717", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 16, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-4-dcc5873578.novalocal", ContainerID:"2dca9087ee76c0afc90eba00bb617b51e1c7375e77c95feee826e4df6c5caae3", Pod:"coredns-6f6b679f8f-dl7ww", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.101.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie064d81dc2b", 
MAC:"9a:a5:cf:70:0e:c6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:17:02.988967 containerd[1457]: 2024-10-09 07:17:02.982 [INFO][3764] k8s.go 500: Wrote updated endpoint to datastore ContainerID="2dca9087ee76c0afc90eba00bb617b51e1c7375e77c95feee826e4df6c5caae3" Namespace="kube-system" Pod="coredns-6f6b679f8f-dl7ww" WorkloadEndpoint="ci--3975--2--2--4--dcc5873578.novalocal-k8s-coredns--6f6b679f8f--dl7ww-eth0" Oct 9 07:17:03.072120 containerd[1457]: time="2024-10-09T07:17:03.071493341Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:17:03.072120 containerd[1457]: time="2024-10-09T07:17:03.071580407Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:17:03.072120 containerd[1457]: time="2024-10-09T07:17:03.071617367Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:17:03.072120 containerd[1457]: time="2024-10-09T07:17:03.071640841Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:17:03.135352 systemd[1]: Started cri-containerd-2dca9087ee76c0afc90eba00bb617b51e1c7375e77c95feee826e4df6c5caae3.scope - libcontainer container 2dca9087ee76c0afc90eba00bb617b51e1c7375e77c95feee826e4df6c5caae3. 
Oct 9 07:17:03.232422 systemd-networkd[1373]: cali09e545f1519: Link UP Oct 9 07:17:03.233462 systemd-networkd[1373]: cali09e545f1519: Gained carrier Oct 9 07:17:03.282951 containerd[1457]: 2024-10-09 07:17:02.997 [INFO][3816] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975--2--2--4--dcc5873578.novalocal-k8s-coredns--6f6b679f8f--pckdt-eth0 coredns-6f6b679f8f- kube-system 6e2f0e11-0f1c-419e-b192-8ef6ffe93a48 721 0 2024-10-09 07:16:23 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3975-2-2-4-dcc5873578.novalocal coredns-6f6b679f8f-pckdt eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali09e545f1519 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="7c150999eff860cb6e415ca681733f8307396114b24a38ed6a288584c3028365" Namespace="kube-system" Pod="coredns-6f6b679f8f-pckdt" WorkloadEndpoint="ci--3975--2--2--4--dcc5873578.novalocal-k8s-coredns--6f6b679f8f--pckdt-" Oct 9 07:17:03.282951 containerd[1457]: 2024-10-09 07:17:02.998 [INFO][3816] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7c150999eff860cb6e415ca681733f8307396114b24a38ed6a288584c3028365" Namespace="kube-system" Pod="coredns-6f6b679f8f-pckdt" WorkloadEndpoint="ci--3975--2--2--4--dcc5873578.novalocal-k8s-coredns--6f6b679f8f--pckdt-eth0" Oct 9 07:17:03.282951 containerd[1457]: 2024-10-09 07:17:03.065 [INFO][3841] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7c150999eff860cb6e415ca681733f8307396114b24a38ed6a288584c3028365" HandleID="k8s-pod-network.7c150999eff860cb6e415ca681733f8307396114b24a38ed6a288584c3028365" Workload="ci--3975--2--2--4--dcc5873578.novalocal-k8s-coredns--6f6b679f8f--pckdt-eth0" Oct 9 07:17:03.282951 containerd[1457]: 2024-10-09 07:17:03.086 [INFO][3841] ipam_plugin.go 270: Auto assigning IP 
ContainerID="7c150999eff860cb6e415ca681733f8307396114b24a38ed6a288584c3028365" HandleID="k8s-pod-network.7c150999eff860cb6e415ca681733f8307396114b24a38ed6a288584c3028365" Workload="ci--3975--2--2--4--dcc5873578.novalocal-k8s-coredns--6f6b679f8f--pckdt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318400), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3975-2-2-4-dcc5873578.novalocal", "pod":"coredns-6f6b679f8f-pckdt", "timestamp":"2024-10-09 07:17:03.065397988 +0000 UTC"}, Hostname:"ci-3975-2-2-4-dcc5873578.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 07:17:03.282951 containerd[1457]: 2024-10-09 07:17:03.086 [INFO][3841] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:17:03.282951 containerd[1457]: 2024-10-09 07:17:03.086 [INFO][3841] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 07:17:03.282951 containerd[1457]: 2024-10-09 07:17:03.087 [INFO][3841] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975-2-2-4-dcc5873578.novalocal' Oct 9 07:17:03.282951 containerd[1457]: 2024-10-09 07:17:03.100 [INFO][3841] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7c150999eff860cb6e415ca681733f8307396114b24a38ed6a288584c3028365" host="ci-3975-2-2-4-dcc5873578.novalocal" Oct 9 07:17:03.282951 containerd[1457]: 2024-10-09 07:17:03.120 [INFO][3841] ipam.go 372: Looking up existing affinities for host host="ci-3975-2-2-4-dcc5873578.novalocal" Oct 9 07:17:03.282951 containerd[1457]: 2024-10-09 07:17:03.139 [INFO][3841] ipam.go 489: Trying affinity for 192.168.101.0/26 host="ci-3975-2-2-4-dcc5873578.novalocal" Oct 9 07:17:03.282951 containerd[1457]: 2024-10-09 07:17:03.152 [INFO][3841] ipam.go 155: Attempting to load block cidr=192.168.101.0/26 host="ci-3975-2-2-4-dcc5873578.novalocal" Oct 9 07:17:03.282951 containerd[1457]: 2024-10-09 07:17:03.161 [INFO][3841] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.101.0/26 host="ci-3975-2-2-4-dcc5873578.novalocal" Oct 9 07:17:03.282951 containerd[1457]: 2024-10-09 07:17:03.162 [INFO][3841] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.101.0/26 handle="k8s-pod-network.7c150999eff860cb6e415ca681733f8307396114b24a38ed6a288584c3028365" host="ci-3975-2-2-4-dcc5873578.novalocal" Oct 9 07:17:03.282951 containerd[1457]: 2024-10-09 07:17:03.168 [INFO][3841] ipam.go 1685: Creating new handle: k8s-pod-network.7c150999eff860cb6e415ca681733f8307396114b24a38ed6a288584c3028365 Oct 9 07:17:03.282951 containerd[1457]: 2024-10-09 07:17:03.191 [INFO][3841] ipam.go 1203: Writing block in order to claim IPs block=192.168.101.0/26 handle="k8s-pod-network.7c150999eff860cb6e415ca681733f8307396114b24a38ed6a288584c3028365" host="ci-3975-2-2-4-dcc5873578.novalocal" Oct 9 07:17:03.282951 containerd[1457]: 2024-10-09 07:17:03.214 [INFO][3841] 
ipam.go 1216: Successfully claimed IPs: [192.168.101.2/26] block=192.168.101.0/26 handle="k8s-pod-network.7c150999eff860cb6e415ca681733f8307396114b24a38ed6a288584c3028365" host="ci-3975-2-2-4-dcc5873578.novalocal" Oct 9 07:17:03.282951 containerd[1457]: 2024-10-09 07:17:03.215 [INFO][3841] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.101.2/26] handle="k8s-pod-network.7c150999eff860cb6e415ca681733f8307396114b24a38ed6a288584c3028365" host="ci-3975-2-2-4-dcc5873578.novalocal" Oct 9 07:17:03.282951 containerd[1457]: 2024-10-09 07:17:03.215 [INFO][3841] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:17:03.282951 containerd[1457]: 2024-10-09 07:17:03.215 [INFO][3841] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.101.2/26] IPv6=[] ContainerID="7c150999eff860cb6e415ca681733f8307396114b24a38ed6a288584c3028365" HandleID="k8s-pod-network.7c150999eff860cb6e415ca681733f8307396114b24a38ed6a288584c3028365" Workload="ci--3975--2--2--4--dcc5873578.novalocal-k8s-coredns--6f6b679f8f--pckdt-eth0" Oct 9 07:17:03.283730 containerd[1457]: 2024-10-09 07:17:03.226 [INFO][3816] k8s.go 386: Populated endpoint ContainerID="7c150999eff860cb6e415ca681733f8307396114b24a38ed6a288584c3028365" Namespace="kube-system" Pod="coredns-6f6b679f8f-pckdt" WorkloadEndpoint="ci--3975--2--2--4--dcc5873578.novalocal-k8s-coredns--6f6b679f8f--pckdt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--4--dcc5873578.novalocal-k8s-coredns--6f6b679f8f--pckdt-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"6e2f0e11-0f1c-419e-b192-8ef6ffe93a48", ResourceVersion:"721", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 16, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", 
"projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-4-dcc5873578.novalocal", ContainerID:"", Pod:"coredns-6f6b679f8f-pckdt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.101.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali09e545f1519", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:17:03.283730 containerd[1457]: 2024-10-09 07:17:03.227 [INFO][3816] k8s.go 387: Calico CNI using IPs: [192.168.101.2/32] ContainerID="7c150999eff860cb6e415ca681733f8307396114b24a38ed6a288584c3028365" Namespace="kube-system" Pod="coredns-6f6b679f8f-pckdt" WorkloadEndpoint="ci--3975--2--2--4--dcc5873578.novalocal-k8s-coredns--6f6b679f8f--pckdt-eth0" Oct 9 07:17:03.283730 containerd[1457]: 2024-10-09 07:17:03.227 [INFO][3816] dataplane_linux.go 68: Setting the host side veth name to cali09e545f1519 ContainerID="7c150999eff860cb6e415ca681733f8307396114b24a38ed6a288584c3028365" Namespace="kube-system" Pod="coredns-6f6b679f8f-pckdt" WorkloadEndpoint="ci--3975--2--2--4--dcc5873578.novalocal-k8s-coredns--6f6b679f8f--pckdt-eth0" Oct 9 07:17:03.283730 containerd[1457]: 2024-10-09 07:17:03.233 [INFO][3816] 
dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="7c150999eff860cb6e415ca681733f8307396114b24a38ed6a288584c3028365" Namespace="kube-system" Pod="coredns-6f6b679f8f-pckdt" WorkloadEndpoint="ci--3975--2--2--4--dcc5873578.novalocal-k8s-coredns--6f6b679f8f--pckdt-eth0" Oct 9 07:17:03.283730 containerd[1457]: 2024-10-09 07:17:03.240 [INFO][3816] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7c150999eff860cb6e415ca681733f8307396114b24a38ed6a288584c3028365" Namespace="kube-system" Pod="coredns-6f6b679f8f-pckdt" WorkloadEndpoint="ci--3975--2--2--4--dcc5873578.novalocal-k8s-coredns--6f6b679f8f--pckdt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--4--dcc5873578.novalocal-k8s-coredns--6f6b679f8f--pckdt-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"6e2f0e11-0f1c-419e-b192-8ef6ffe93a48", ResourceVersion:"721", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 16, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-4-dcc5873578.novalocal", ContainerID:"7c150999eff860cb6e415ca681733f8307396114b24a38ed6a288584c3028365", Pod:"coredns-6f6b679f8f-pckdt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.101.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali09e545f1519", 
MAC:"96:94:a5:73:1f:d9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:17:03.283730 containerd[1457]: 2024-10-09 07:17:03.274 [INFO][3816] k8s.go 500: Wrote updated endpoint to datastore ContainerID="7c150999eff860cb6e415ca681733f8307396114b24a38ed6a288584c3028365" Namespace="kube-system" Pod="coredns-6f6b679f8f-pckdt" WorkloadEndpoint="ci--3975--2--2--4--dcc5873578.novalocal-k8s-coredns--6f6b679f8f--pckdt-eth0" Oct 9 07:17:03.283730 containerd[1457]: time="2024-10-09T07:17:03.282546826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-dl7ww,Uid:08f94e6c-424e-4778-9aa7-62a9cbd840ab,Namespace:kube-system,Attempt:1,} returns sandbox id \"2dca9087ee76c0afc90eba00bb617b51e1c7375e77c95feee826e4df6c5caae3\"" Oct 9 07:17:03.347454 containerd[1457]: time="2024-10-09T07:17:03.346266475Z" level=info msg="CreateContainer within sandbox \"2dca9087ee76c0afc90eba00bb617b51e1c7375e77c95feee826e4df6c5caae3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 9 07:17:03.360879 containerd[1457]: time="2024-10-09T07:17:03.360358920Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:17:03.360879 containerd[1457]: time="2024-10-09T07:17:03.360457897Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:17:03.360879 containerd[1457]: time="2024-10-09T07:17:03.360480942Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:17:03.360879 containerd[1457]: time="2024-10-09T07:17:03.360494597Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:17:03.404367 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount342239201.mount: Deactivated successfully. Oct 9 07:17:03.404503 systemd[1]: run-containerd-runc-k8s.io-7c150999eff860cb6e415ca681733f8307396114b24a38ed6a288584c3028365-runc.56HgNi.mount: Deactivated successfully. Oct 9 07:17:03.421837 containerd[1457]: time="2024-10-09T07:17:03.419480436Z" level=info msg="CreateContainer within sandbox \"2dca9087ee76c0afc90eba00bb617b51e1c7375e77c95feee826e4df6c5caae3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a893f840390f2171dd3e731f171e38c2e6f38749b4bb5a868cfcb003a048e2a2\"" Oct 9 07:17:03.422930 containerd[1457]: time="2024-10-09T07:17:03.422373182Z" level=info msg="StartContainer for \"a893f840390f2171dd3e731f171e38c2e6f38749b4bb5a868cfcb003a048e2a2\"" Oct 9 07:17:03.425285 systemd[1]: Started cri-containerd-7c150999eff860cb6e415ca681733f8307396114b24a38ed6a288584c3028365.scope - libcontainer container 7c150999eff860cb6e415ca681733f8307396114b24a38ed6a288584c3028365. Oct 9 07:17:03.446611 containerd[1457]: time="2024-10-09T07:17:03.445823850Z" level=info msg="StopPodSandbox for \"1123b521fba55693557e7f6f72a87f9837e0f978126665e7e9e30a28d3c5313f\"" Oct 9 07:17:03.485293 systemd[1]: Started cri-containerd-a893f840390f2171dd3e731f171e38c2e6f38749b4bb5a868cfcb003a048e2a2.scope - libcontainer container a893f840390f2171dd3e731f171e38c2e6f38749b4bb5a868cfcb003a048e2a2. 
Oct 9 07:17:03.574336 containerd[1457]: time="2024-10-09T07:17:03.573231317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-pckdt,Uid:6e2f0e11-0f1c-419e-b192-8ef6ffe93a48,Namespace:kube-system,Attempt:1,} returns sandbox id \"7c150999eff860cb6e415ca681733f8307396114b24a38ed6a288584c3028365\"" Oct 9 07:17:03.588740 containerd[1457]: time="2024-10-09T07:17:03.587771734Z" level=info msg="CreateContainer within sandbox \"7c150999eff860cb6e415ca681733f8307396114b24a38ed6a288584c3028365\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 9 07:17:03.624504 containerd[1457]: time="2024-10-09T07:17:03.624444981Z" level=info msg="StartContainer for \"a893f840390f2171dd3e731f171e38c2e6f38749b4bb5a868cfcb003a048e2a2\" returns successfully" Oct 9 07:17:03.655533 containerd[1457]: 2024-10-09 07:17:03.566 [INFO][3971] k8s.go 608: Cleaning up netns ContainerID="1123b521fba55693557e7f6f72a87f9837e0f978126665e7e9e30a28d3c5313f" Oct 9 07:17:03.655533 containerd[1457]: 2024-10-09 07:17:03.566 [INFO][3971] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="1123b521fba55693557e7f6f72a87f9837e0f978126665e7e9e30a28d3c5313f" iface="eth0" netns="/var/run/netns/cni-4548f70f-3ae3-25a4-7a09-b7f40140f184" Oct 9 07:17:03.655533 containerd[1457]: 2024-10-09 07:17:03.566 [INFO][3971] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="1123b521fba55693557e7f6f72a87f9837e0f978126665e7e9e30a28d3c5313f" iface="eth0" netns="/var/run/netns/cni-4548f70f-3ae3-25a4-7a09-b7f40140f184" Oct 9 07:17:03.655533 containerd[1457]: 2024-10-09 07:17:03.568 [INFO][3971] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="1123b521fba55693557e7f6f72a87f9837e0f978126665e7e9e30a28d3c5313f" iface="eth0" netns="/var/run/netns/cni-4548f70f-3ae3-25a4-7a09-b7f40140f184" Oct 9 07:17:03.655533 containerd[1457]: 2024-10-09 07:17:03.569 [INFO][3971] k8s.go 615: Releasing IP address(es) ContainerID="1123b521fba55693557e7f6f72a87f9837e0f978126665e7e9e30a28d3c5313f" Oct 9 07:17:03.655533 containerd[1457]: 2024-10-09 07:17:03.569 [INFO][3971] utils.go 188: Calico CNI releasing IP address ContainerID="1123b521fba55693557e7f6f72a87f9837e0f978126665e7e9e30a28d3c5313f" Oct 9 07:17:03.655533 containerd[1457]: 2024-10-09 07:17:03.632 [INFO][3992] ipam_plugin.go 417: Releasing address using handleID ContainerID="1123b521fba55693557e7f6f72a87f9837e0f978126665e7e9e30a28d3c5313f" HandleID="k8s-pod-network.1123b521fba55693557e7f6f72a87f9837e0f978126665e7e9e30a28d3c5313f" Workload="ci--3975--2--2--4--dcc5873578.novalocal-k8s-csi--node--driver--cpr56-eth0" Oct 9 07:17:03.655533 containerd[1457]: 2024-10-09 07:17:03.632 [INFO][3992] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:17:03.655533 containerd[1457]: 2024-10-09 07:17:03.632 [INFO][3992] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:17:03.655533 containerd[1457]: 2024-10-09 07:17:03.643 [WARNING][3992] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1123b521fba55693557e7f6f72a87f9837e0f978126665e7e9e30a28d3c5313f" HandleID="k8s-pod-network.1123b521fba55693557e7f6f72a87f9837e0f978126665e7e9e30a28d3c5313f" Workload="ci--3975--2--2--4--dcc5873578.novalocal-k8s-csi--node--driver--cpr56-eth0" Oct 9 07:17:03.655533 containerd[1457]: 2024-10-09 07:17:03.643 [INFO][3992] ipam_plugin.go 445: Releasing address using workloadID ContainerID="1123b521fba55693557e7f6f72a87f9837e0f978126665e7e9e30a28d3c5313f" HandleID="k8s-pod-network.1123b521fba55693557e7f6f72a87f9837e0f978126665e7e9e30a28d3c5313f" Workload="ci--3975--2--2--4--dcc5873578.novalocal-k8s-csi--node--driver--cpr56-eth0" Oct 9 07:17:03.655533 containerd[1457]: 2024-10-09 07:17:03.646 [INFO][3992] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:17:03.655533 containerd[1457]: 2024-10-09 07:17:03.652 [INFO][3971] k8s.go 621: Teardown processing complete. ContainerID="1123b521fba55693557e7f6f72a87f9837e0f978126665e7e9e30a28d3c5313f" Oct 9 07:17:03.656085 containerd[1457]: time="2024-10-09T07:17:03.655820563Z" level=info msg="TearDown network for sandbox \"1123b521fba55693557e7f6f72a87f9837e0f978126665e7e9e30a28d3c5313f\" successfully" Oct 9 07:17:03.656085 containerd[1457]: time="2024-10-09T07:17:03.655853496Z" level=info msg="StopPodSandbox for \"1123b521fba55693557e7f6f72a87f9837e0f978126665e7e9e30a28d3c5313f\" returns successfully" Oct 9 07:17:03.656763 containerd[1457]: time="2024-10-09T07:17:03.656727419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cpr56,Uid:5460cbf3-3220-44d8-92e5-2d3cb02a666f,Namespace:calico-system,Attempt:1,}" Oct 9 07:17:03.713874 containerd[1457]: time="2024-10-09T07:17:03.713780419Z" level=info msg="CreateContainer within sandbox \"7c150999eff860cb6e415ca681733f8307396114b24a38ed6a288584c3028365\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c04229aefe19f569f2d9637595cc429f63310e8b3e2b8ec9aa0d870fdc8b1cf9\"" Oct 9 07:17:03.742069 containerd[1457]: 
time="2024-10-09T07:17:03.741527063Z" level=info msg="StartContainer for \"c04229aefe19f569f2d9637595cc429f63310e8b3e2b8ec9aa0d870fdc8b1cf9\"" Oct 9 07:17:03.791267 systemd[1]: Started cri-containerd-c04229aefe19f569f2d9637595cc429f63310e8b3e2b8ec9aa0d870fdc8b1cf9.scope - libcontainer container c04229aefe19f569f2d9637595cc429f63310e8b3e2b8ec9aa0d870fdc8b1cf9. Oct 9 07:17:03.853210 containerd[1457]: time="2024-10-09T07:17:03.853071596Z" level=info msg="StartContainer for \"c04229aefe19f569f2d9637595cc429f63310e8b3e2b8ec9aa0d870fdc8b1cf9\" returns successfully" Oct 9 07:17:03.930209 systemd-networkd[1373]: vxlan.calico: Gained IPv6LL Oct 9 07:17:04.033579 kubelet[2567]: I1009 07:17:04.005006 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-dl7ww" podStartSLOduration=41.004978495 podStartE2EDuration="41.004978495s" podCreationTimestamp="2024-10-09 07:16:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 07:17:03.978934893 +0000 UTC m=+46.775500343" watchObservedRunningTime="2024-10-09 07:17:04.004978495 +0000 UTC m=+46.801543945" Oct 9 07:17:04.036468 kubelet[2567]: I1009 07:17:04.035429 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-pckdt" podStartSLOduration=41.035403636 podStartE2EDuration="41.035403636s" podCreationTimestamp="2024-10-09 07:16:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 07:17:04.035381614 +0000 UTC m=+46.831947064" watchObservedRunningTime="2024-10-09 07:17:04.035403636 +0000 UTC m=+46.831969076" Oct 9 07:17:04.121084 systemd-networkd[1373]: caliab141e647bd: Link UP Oct 9 07:17:04.123264 systemd-networkd[1373]: caliab141e647bd: Gained carrier Oct 9 07:17:04.166792 containerd[1457]: 2024-10-09 07:17:03.849 [INFO][4015] plugin.go 
326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975--2--2--4--dcc5873578.novalocal-k8s-csi--node--driver--cpr56-eth0 csi-node-driver- calico-system 5460cbf3-3220-44d8-92e5-2d3cb02a666f 732 0 2024-10-09 07:16:30 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:779867c8f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ci-3975-2-2-4-dcc5873578.novalocal csi-node-driver-cpr56 eth0 default [] [] [kns.calico-system ksa.calico-system.default] caliab141e647bd [] []}} ContainerID="4324a4fe1e390d993c84b2201aa5b7c694a7ffdfa52ac2c488684f9f03768405" Namespace="calico-system" Pod="csi-node-driver-cpr56" WorkloadEndpoint="ci--3975--2--2--4--dcc5873578.novalocal-k8s-csi--node--driver--cpr56-" Oct 9 07:17:04.166792 containerd[1457]: 2024-10-09 07:17:03.849 [INFO][4015] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4324a4fe1e390d993c84b2201aa5b7c694a7ffdfa52ac2c488684f9f03768405" Namespace="calico-system" Pod="csi-node-driver-cpr56" WorkloadEndpoint="ci--3975--2--2--4--dcc5873578.novalocal-k8s-csi--node--driver--cpr56-eth0" Oct 9 07:17:04.166792 containerd[1457]: 2024-10-09 07:17:03.914 [INFO][4058] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4324a4fe1e390d993c84b2201aa5b7c694a7ffdfa52ac2c488684f9f03768405" HandleID="k8s-pod-network.4324a4fe1e390d993c84b2201aa5b7c694a7ffdfa52ac2c488684f9f03768405" Workload="ci--3975--2--2--4--dcc5873578.novalocal-k8s-csi--node--driver--cpr56-eth0" Oct 9 07:17:04.166792 containerd[1457]: 2024-10-09 07:17:04.036 [INFO][4058] ipam_plugin.go 270: Auto assigning IP ContainerID="4324a4fe1e390d993c84b2201aa5b7c694a7ffdfa52ac2c488684f9f03768405" HandleID="k8s-pod-network.4324a4fe1e390d993c84b2201aa5b7c694a7ffdfa52ac2c488684f9f03768405" 
Workload="ci--3975--2--2--4--dcc5873578.novalocal-k8s-csi--node--driver--cpr56-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318350), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3975-2-2-4-dcc5873578.novalocal", "pod":"csi-node-driver-cpr56", "timestamp":"2024-10-09 07:17:03.914447273 +0000 UTC"}, Hostname:"ci-3975-2-2-4-dcc5873578.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 07:17:04.166792 containerd[1457]: 2024-10-09 07:17:04.036 [INFO][4058] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:17:04.166792 containerd[1457]: 2024-10-09 07:17:04.036 [INFO][4058] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:17:04.166792 containerd[1457]: 2024-10-09 07:17:04.037 [INFO][4058] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975-2-2-4-dcc5873578.novalocal' Oct 9 07:17:04.166792 containerd[1457]: 2024-10-09 07:17:04.041 [INFO][4058] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4324a4fe1e390d993c84b2201aa5b7c694a7ffdfa52ac2c488684f9f03768405" host="ci-3975-2-2-4-dcc5873578.novalocal" Oct 9 07:17:04.166792 containerd[1457]: 2024-10-09 07:17:04.048 [INFO][4058] ipam.go 372: Looking up existing affinities for host host="ci-3975-2-2-4-dcc5873578.novalocal" Oct 9 07:17:04.166792 containerd[1457]: 2024-10-09 07:17:04.057 [INFO][4058] ipam.go 489: Trying affinity for 192.168.101.0/26 host="ci-3975-2-2-4-dcc5873578.novalocal" Oct 9 07:17:04.166792 containerd[1457]: 2024-10-09 07:17:04.060 [INFO][4058] ipam.go 155: Attempting to load block cidr=192.168.101.0/26 host="ci-3975-2-2-4-dcc5873578.novalocal" Oct 9 07:17:04.166792 containerd[1457]: 2024-10-09 07:17:04.063 [INFO][4058] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.101.0/26 
host="ci-3975-2-2-4-dcc5873578.novalocal" Oct 9 07:17:04.166792 containerd[1457]: 2024-10-09 07:17:04.064 [INFO][4058] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.101.0/26 handle="k8s-pod-network.4324a4fe1e390d993c84b2201aa5b7c694a7ffdfa52ac2c488684f9f03768405" host="ci-3975-2-2-4-dcc5873578.novalocal" Oct 9 07:17:04.166792 containerd[1457]: 2024-10-09 07:17:04.067 [INFO][4058] ipam.go 1685: Creating new handle: k8s-pod-network.4324a4fe1e390d993c84b2201aa5b7c694a7ffdfa52ac2c488684f9f03768405 Oct 9 07:17:04.166792 containerd[1457]: 2024-10-09 07:17:04.079 [INFO][4058] ipam.go 1203: Writing block in order to claim IPs block=192.168.101.0/26 handle="k8s-pod-network.4324a4fe1e390d993c84b2201aa5b7c694a7ffdfa52ac2c488684f9f03768405" host="ci-3975-2-2-4-dcc5873578.novalocal" Oct 9 07:17:04.166792 containerd[1457]: 2024-10-09 07:17:04.112 [INFO][4058] ipam.go 1216: Successfully claimed IPs: [192.168.101.3/26] block=192.168.101.0/26 handle="k8s-pod-network.4324a4fe1e390d993c84b2201aa5b7c694a7ffdfa52ac2c488684f9f03768405" host="ci-3975-2-2-4-dcc5873578.novalocal" Oct 9 07:17:04.166792 containerd[1457]: 2024-10-09 07:17:04.112 [INFO][4058] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.101.3/26] handle="k8s-pod-network.4324a4fe1e390d993c84b2201aa5b7c694a7ffdfa52ac2c488684f9f03768405" host="ci-3975-2-2-4-dcc5873578.novalocal" Oct 9 07:17:04.166792 containerd[1457]: 2024-10-09 07:17:04.113 [INFO][4058] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 9 07:17:04.166792 containerd[1457]: 2024-10-09 07:17:04.113 [INFO][4058] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.101.3/26] IPv6=[] ContainerID="4324a4fe1e390d993c84b2201aa5b7c694a7ffdfa52ac2c488684f9f03768405" HandleID="k8s-pod-network.4324a4fe1e390d993c84b2201aa5b7c694a7ffdfa52ac2c488684f9f03768405" Workload="ci--3975--2--2--4--dcc5873578.novalocal-k8s-csi--node--driver--cpr56-eth0" Oct 9 07:17:04.169396 containerd[1457]: 2024-10-09 07:17:04.117 [INFO][4015] k8s.go 386: Populated endpoint ContainerID="4324a4fe1e390d993c84b2201aa5b7c694a7ffdfa52ac2c488684f9f03768405" Namespace="calico-system" Pod="csi-node-driver-cpr56" WorkloadEndpoint="ci--3975--2--2--4--dcc5873578.novalocal-k8s-csi--node--driver--cpr56-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--4--dcc5873578.novalocal-k8s-csi--node--driver--cpr56-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5460cbf3-3220-44d8-92e5-2d3cb02a666f", ResourceVersion:"732", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 16, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"779867c8f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-4-dcc5873578.novalocal", ContainerID:"", Pod:"csi-node-driver-cpr56", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.101.3/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"caliab141e647bd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:17:04.169396 containerd[1457]: 2024-10-09 07:17:04.117 [INFO][4015] k8s.go 387: Calico CNI using IPs: [192.168.101.3/32] ContainerID="4324a4fe1e390d993c84b2201aa5b7c694a7ffdfa52ac2c488684f9f03768405" Namespace="calico-system" Pod="csi-node-driver-cpr56" WorkloadEndpoint="ci--3975--2--2--4--dcc5873578.novalocal-k8s-csi--node--driver--cpr56-eth0" Oct 9 07:17:04.169396 containerd[1457]: 2024-10-09 07:17:04.117 [INFO][4015] dataplane_linux.go 68: Setting the host side veth name to caliab141e647bd ContainerID="4324a4fe1e390d993c84b2201aa5b7c694a7ffdfa52ac2c488684f9f03768405" Namespace="calico-system" Pod="csi-node-driver-cpr56" WorkloadEndpoint="ci--3975--2--2--4--dcc5873578.novalocal-k8s-csi--node--driver--cpr56-eth0" Oct 9 07:17:04.169396 containerd[1457]: 2024-10-09 07:17:04.123 [INFO][4015] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="4324a4fe1e390d993c84b2201aa5b7c694a7ffdfa52ac2c488684f9f03768405" Namespace="calico-system" Pod="csi-node-driver-cpr56" WorkloadEndpoint="ci--3975--2--2--4--dcc5873578.novalocal-k8s-csi--node--driver--cpr56-eth0" Oct 9 07:17:04.169396 containerd[1457]: 2024-10-09 07:17:04.125 [INFO][4015] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4324a4fe1e390d993c84b2201aa5b7c694a7ffdfa52ac2c488684f9f03768405" Namespace="calico-system" Pod="csi-node-driver-cpr56" WorkloadEndpoint="ci--3975--2--2--4--dcc5873578.novalocal-k8s-csi--node--driver--cpr56-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--4--dcc5873578.novalocal-k8s-csi--node--driver--cpr56-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", 
SelfLink:"", UID:"5460cbf3-3220-44d8-92e5-2d3cb02a666f", ResourceVersion:"732", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 16, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"779867c8f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-4-dcc5873578.novalocal", ContainerID:"4324a4fe1e390d993c84b2201aa5b7c694a7ffdfa52ac2c488684f9f03768405", Pod:"csi-node-driver-cpr56", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.101.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"caliab141e647bd", MAC:"7e:dc:08:6c:3a:92", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:17:04.169396 containerd[1457]: 2024-10-09 07:17:04.162 [INFO][4015] k8s.go 500: Wrote updated endpoint to datastore ContainerID="4324a4fe1e390d993c84b2201aa5b7c694a7ffdfa52ac2c488684f9f03768405" Namespace="calico-system" Pod="csi-node-driver-cpr56" WorkloadEndpoint="ci--3975--2--2--4--dcc5873578.novalocal-k8s-csi--node--driver--cpr56-eth0" Oct 9 07:17:04.185774 systemd[1]: run-netns-cni\x2d4548f70f\x2d3ae3\x2d25a4\x2d7a09\x2db7f40140f184.mount: Deactivated successfully. Oct 9 07:17:04.223847 containerd[1457]: time="2024-10-09T07:17:04.223638573Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:17:04.223847 containerd[1457]: time="2024-10-09T07:17:04.223716331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:17:04.223847 containerd[1457]: time="2024-10-09T07:17:04.223743092Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:17:04.224238 containerd[1457]: time="2024-10-09T07:17:04.223761889Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:17:04.249255 systemd-networkd[1373]: calie064d81dc2b: Gained IPv6LL Oct 9 07:17:04.259779 systemd[1]: run-containerd-runc-k8s.io-4324a4fe1e390d993c84b2201aa5b7c694a7ffdfa52ac2c488684f9f03768405-runc.rlka1h.mount: Deactivated successfully. Oct 9 07:17:04.276424 systemd[1]: Started cri-containerd-4324a4fe1e390d993c84b2201aa5b7c694a7ffdfa52ac2c488684f9f03768405.scope - libcontainer container 4324a4fe1e390d993c84b2201aa5b7c694a7ffdfa52ac2c488684f9f03768405. 
Oct 9 07:17:04.329459 containerd[1457]: time="2024-10-09T07:17:04.329195243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cpr56,Uid:5460cbf3-3220-44d8-92e5-2d3cb02a666f,Namespace:calico-system,Attempt:1,} returns sandbox id \"4324a4fe1e390d993c84b2201aa5b7c694a7ffdfa52ac2c488684f9f03768405\"" Oct 9 07:17:04.344494 containerd[1457]: time="2024-10-09T07:17:04.344340472Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\"" Oct 9 07:17:05.017543 systemd-networkd[1373]: cali09e545f1519: Gained IPv6LL Oct 9 07:17:05.443903 containerd[1457]: time="2024-10-09T07:17:05.443532807Z" level=info msg="StopPodSandbox for \"40ec5ec3486a4649fc04440b65de0f26c8102ca6372e49d2fa0988c122aa271d\"" Oct 9 07:17:05.583820 containerd[1457]: 2024-10-09 07:17:05.528 [INFO][4142] k8s.go 608: Cleaning up netns ContainerID="40ec5ec3486a4649fc04440b65de0f26c8102ca6372e49d2fa0988c122aa271d" Oct 9 07:17:05.583820 containerd[1457]: 2024-10-09 07:17:05.528 [INFO][4142] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="40ec5ec3486a4649fc04440b65de0f26c8102ca6372e49d2fa0988c122aa271d" iface="eth0" netns="/var/run/netns/cni-bfd6b108-d261-315f-8e5c-77d4239e6ccf" Oct 9 07:17:05.583820 containerd[1457]: 2024-10-09 07:17:05.528 [INFO][4142] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="40ec5ec3486a4649fc04440b65de0f26c8102ca6372e49d2fa0988c122aa271d" iface="eth0" netns="/var/run/netns/cni-bfd6b108-d261-315f-8e5c-77d4239e6ccf" Oct 9 07:17:05.583820 containerd[1457]: 2024-10-09 07:17:05.529 [INFO][4142] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="40ec5ec3486a4649fc04440b65de0f26c8102ca6372e49d2fa0988c122aa271d" iface="eth0" netns="/var/run/netns/cni-bfd6b108-d261-315f-8e5c-77d4239e6ccf" Oct 9 07:17:05.583820 containerd[1457]: 2024-10-09 07:17:05.529 [INFO][4142] k8s.go 615: Releasing IP address(es) ContainerID="40ec5ec3486a4649fc04440b65de0f26c8102ca6372e49d2fa0988c122aa271d" Oct 9 07:17:05.583820 containerd[1457]: 2024-10-09 07:17:05.529 [INFO][4142] utils.go 188: Calico CNI releasing IP address ContainerID="40ec5ec3486a4649fc04440b65de0f26c8102ca6372e49d2fa0988c122aa271d" Oct 9 07:17:05.583820 containerd[1457]: 2024-10-09 07:17:05.565 [INFO][4149] ipam_plugin.go 417: Releasing address using handleID ContainerID="40ec5ec3486a4649fc04440b65de0f26c8102ca6372e49d2fa0988c122aa271d" HandleID="k8s-pod-network.40ec5ec3486a4649fc04440b65de0f26c8102ca6372e49d2fa0988c122aa271d" Workload="ci--3975--2--2--4--dcc5873578.novalocal-k8s-calico--kube--controllers--c4bcf989c--7nvgq-eth0" Oct 9 07:17:05.583820 containerd[1457]: 2024-10-09 07:17:05.565 [INFO][4149] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:17:05.583820 containerd[1457]: 2024-10-09 07:17:05.567 [INFO][4149] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:17:05.583820 containerd[1457]: 2024-10-09 07:17:05.577 [WARNING][4149] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="40ec5ec3486a4649fc04440b65de0f26c8102ca6372e49d2fa0988c122aa271d" HandleID="k8s-pod-network.40ec5ec3486a4649fc04440b65de0f26c8102ca6372e49d2fa0988c122aa271d" Workload="ci--3975--2--2--4--dcc5873578.novalocal-k8s-calico--kube--controllers--c4bcf989c--7nvgq-eth0" Oct 9 07:17:05.583820 containerd[1457]: 2024-10-09 07:17:05.577 [INFO][4149] ipam_plugin.go 445: Releasing address using workloadID ContainerID="40ec5ec3486a4649fc04440b65de0f26c8102ca6372e49d2fa0988c122aa271d" HandleID="k8s-pod-network.40ec5ec3486a4649fc04440b65de0f26c8102ca6372e49d2fa0988c122aa271d" Workload="ci--3975--2--2--4--dcc5873578.novalocal-k8s-calico--kube--controllers--c4bcf989c--7nvgq-eth0" Oct 9 07:17:05.583820 containerd[1457]: 2024-10-09 07:17:05.579 [INFO][4149] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:17:05.583820 containerd[1457]: 2024-10-09 07:17:05.582 [INFO][4142] k8s.go 621: Teardown processing complete. ContainerID="40ec5ec3486a4649fc04440b65de0f26c8102ca6372e49d2fa0988c122aa271d" Oct 9 07:17:05.586245 containerd[1457]: time="2024-10-09T07:17:05.583967655Z" level=info msg="TearDown network for sandbox \"40ec5ec3486a4649fc04440b65de0f26c8102ca6372e49d2fa0988c122aa271d\" successfully" Oct 9 07:17:05.586245 containerd[1457]: time="2024-10-09T07:17:05.583997101Z" level=info msg="StopPodSandbox for \"40ec5ec3486a4649fc04440b65de0f26c8102ca6372e49d2fa0988c122aa271d\" returns successfully" Oct 9 07:17:05.587040 containerd[1457]: time="2024-10-09T07:17:05.586295042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c4bcf989c-7nvgq,Uid:d2215775-413f-4eb4-8f14-bbae43713b31,Namespace:calico-system,Attempt:1,}" Oct 9 07:17:05.589762 systemd[1]: run-netns-cni\x2dbfd6b108\x2dd261\x2d315f\x2d8e5c\x2d77d4239e6ccf.mount: Deactivated successfully. 
Oct 9 07:17:05.785216 systemd-networkd[1373]: caliab141e647bd: Gained IPv6LL Oct 9 07:17:05.808558 systemd-networkd[1373]: cali76d5f15d083: Link UP Oct 9 07:17:05.808767 systemd-networkd[1373]: cali76d5f15d083: Gained carrier Oct 9 07:17:05.839635 containerd[1457]: 2024-10-09 07:17:05.682 [INFO][4160] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975--2--2--4--dcc5873578.novalocal-k8s-calico--kube--controllers--c4bcf989c--7nvgq-eth0 calico-kube-controllers-c4bcf989c- calico-system d2215775-413f-4eb4-8f14-bbae43713b31 764 0 2024-10-09 07:16:30 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:c4bcf989c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-3975-2-2-4-dcc5873578.novalocal calico-kube-controllers-c4bcf989c-7nvgq eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali76d5f15d083 [] []}} ContainerID="407ec4cfe465c1ba02d142d61d08506d2b32fb609694a9e8e7fedb4fdc582e6b" Namespace="calico-system" Pod="calico-kube-controllers-c4bcf989c-7nvgq" WorkloadEndpoint="ci--3975--2--2--4--dcc5873578.novalocal-k8s-calico--kube--controllers--c4bcf989c--7nvgq-" Oct 9 07:17:05.839635 containerd[1457]: 2024-10-09 07:17:05.683 [INFO][4160] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="407ec4cfe465c1ba02d142d61d08506d2b32fb609694a9e8e7fedb4fdc582e6b" Namespace="calico-system" Pod="calico-kube-controllers-c4bcf989c-7nvgq" WorkloadEndpoint="ci--3975--2--2--4--dcc5873578.novalocal-k8s-calico--kube--controllers--c4bcf989c--7nvgq-eth0" Oct 9 07:17:05.839635 containerd[1457]: 2024-10-09 07:17:05.736 [INFO][4168] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="407ec4cfe465c1ba02d142d61d08506d2b32fb609694a9e8e7fedb4fdc582e6b" 
HandleID="k8s-pod-network.407ec4cfe465c1ba02d142d61d08506d2b32fb609694a9e8e7fedb4fdc582e6b" Workload="ci--3975--2--2--4--dcc5873578.novalocal-k8s-calico--kube--controllers--c4bcf989c--7nvgq-eth0" Oct 9 07:17:05.839635 containerd[1457]: 2024-10-09 07:17:05.754 [INFO][4168] ipam_plugin.go 270: Auto assigning IP ContainerID="407ec4cfe465c1ba02d142d61d08506d2b32fb609694a9e8e7fedb4fdc582e6b" HandleID="k8s-pod-network.407ec4cfe465c1ba02d142d61d08506d2b32fb609694a9e8e7fedb4fdc582e6b" Workload="ci--3975--2--2--4--dcc5873578.novalocal-k8s-calico--kube--controllers--c4bcf989c--7nvgq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000050740), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3975-2-2-4-dcc5873578.novalocal", "pod":"calico-kube-controllers-c4bcf989c-7nvgq", "timestamp":"2024-10-09 07:17:05.736726446 +0000 UTC"}, Hostname:"ci-3975-2-2-4-dcc5873578.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 07:17:05.839635 containerd[1457]: 2024-10-09 07:17:05.754 [INFO][4168] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:17:05.839635 containerd[1457]: 2024-10-09 07:17:05.754 [INFO][4168] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 07:17:05.839635 containerd[1457]: 2024-10-09 07:17:05.754 [INFO][4168] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975-2-2-4-dcc5873578.novalocal' Oct 9 07:17:05.839635 containerd[1457]: 2024-10-09 07:17:05.759 [INFO][4168] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.407ec4cfe465c1ba02d142d61d08506d2b32fb609694a9e8e7fedb4fdc582e6b" host="ci-3975-2-2-4-dcc5873578.novalocal" Oct 9 07:17:05.839635 containerd[1457]: 2024-10-09 07:17:05.764 [INFO][4168] ipam.go 372: Looking up existing affinities for host host="ci-3975-2-2-4-dcc5873578.novalocal" Oct 9 07:17:05.839635 containerd[1457]: 2024-10-09 07:17:05.772 [INFO][4168] ipam.go 489: Trying affinity for 192.168.101.0/26 host="ci-3975-2-2-4-dcc5873578.novalocal" Oct 9 07:17:05.839635 containerd[1457]: 2024-10-09 07:17:05.777 [INFO][4168] ipam.go 155: Attempting to load block cidr=192.168.101.0/26 host="ci-3975-2-2-4-dcc5873578.novalocal" Oct 9 07:17:05.839635 containerd[1457]: 2024-10-09 07:17:05.782 [INFO][4168] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.101.0/26 host="ci-3975-2-2-4-dcc5873578.novalocal" Oct 9 07:17:05.839635 containerd[1457]: 2024-10-09 07:17:05.782 [INFO][4168] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.101.0/26 handle="k8s-pod-network.407ec4cfe465c1ba02d142d61d08506d2b32fb609694a9e8e7fedb4fdc582e6b" host="ci-3975-2-2-4-dcc5873578.novalocal" Oct 9 07:17:05.839635 containerd[1457]: 2024-10-09 07:17:05.784 [INFO][4168] ipam.go 1685: Creating new handle: k8s-pod-network.407ec4cfe465c1ba02d142d61d08506d2b32fb609694a9e8e7fedb4fdc582e6b Oct 9 07:17:05.839635 containerd[1457]: 2024-10-09 07:17:05.791 [INFO][4168] ipam.go 1203: Writing block in order to claim IPs block=192.168.101.0/26 handle="k8s-pod-network.407ec4cfe465c1ba02d142d61d08506d2b32fb609694a9e8e7fedb4fdc582e6b" host="ci-3975-2-2-4-dcc5873578.novalocal" Oct 9 07:17:05.839635 containerd[1457]: 2024-10-09 07:17:05.801 [INFO][4168] 
ipam.go 1216: Successfully claimed IPs: [192.168.101.4/26] block=192.168.101.0/26 handle="k8s-pod-network.407ec4cfe465c1ba02d142d61d08506d2b32fb609694a9e8e7fedb4fdc582e6b" host="ci-3975-2-2-4-dcc5873578.novalocal" Oct 9 07:17:05.839635 containerd[1457]: 2024-10-09 07:17:05.801 [INFO][4168] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.101.4/26] handle="k8s-pod-network.407ec4cfe465c1ba02d142d61d08506d2b32fb609694a9e8e7fedb4fdc582e6b" host="ci-3975-2-2-4-dcc5873578.novalocal" Oct 9 07:17:05.839635 containerd[1457]: 2024-10-09 07:17:05.801 [INFO][4168] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:17:05.839635 containerd[1457]: 2024-10-09 07:17:05.801 [INFO][4168] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.101.4/26] IPv6=[] ContainerID="407ec4cfe465c1ba02d142d61d08506d2b32fb609694a9e8e7fedb4fdc582e6b" HandleID="k8s-pod-network.407ec4cfe465c1ba02d142d61d08506d2b32fb609694a9e8e7fedb4fdc582e6b" Workload="ci--3975--2--2--4--dcc5873578.novalocal-k8s-calico--kube--controllers--c4bcf989c--7nvgq-eth0" Oct 9 07:17:05.841932 containerd[1457]: 2024-10-09 07:17:05.804 [INFO][4160] k8s.go 386: Populated endpoint ContainerID="407ec4cfe465c1ba02d142d61d08506d2b32fb609694a9e8e7fedb4fdc582e6b" Namespace="calico-system" Pod="calico-kube-controllers-c4bcf989c-7nvgq" WorkloadEndpoint="ci--3975--2--2--4--dcc5873578.novalocal-k8s-calico--kube--controllers--c4bcf989c--7nvgq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--4--dcc5873578.novalocal-k8s-calico--kube--controllers--c4bcf989c--7nvgq-eth0", GenerateName:"calico-kube-controllers-c4bcf989c-", Namespace:"calico-system", SelfLink:"", UID:"d2215775-413f-4eb4-8f14-bbae43713b31", ResourceVersion:"764", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 16, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c4bcf989c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-4-dcc5873578.novalocal", ContainerID:"", Pod:"calico-kube-controllers-c4bcf989c-7nvgq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.101.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali76d5f15d083", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:17:05.841932 containerd[1457]: 2024-10-09 07:17:05.804 [INFO][4160] k8s.go 387: Calico CNI using IPs: [192.168.101.4/32] ContainerID="407ec4cfe465c1ba02d142d61d08506d2b32fb609694a9e8e7fedb4fdc582e6b" Namespace="calico-system" Pod="calico-kube-controllers-c4bcf989c-7nvgq" WorkloadEndpoint="ci--3975--2--2--4--dcc5873578.novalocal-k8s-calico--kube--controllers--c4bcf989c--7nvgq-eth0" Oct 9 07:17:05.841932 containerd[1457]: 2024-10-09 07:17:05.805 [INFO][4160] dataplane_linux.go 68: Setting the host side veth name to cali76d5f15d083 ContainerID="407ec4cfe465c1ba02d142d61d08506d2b32fb609694a9e8e7fedb4fdc582e6b" Namespace="calico-system" Pod="calico-kube-controllers-c4bcf989c-7nvgq" WorkloadEndpoint="ci--3975--2--2--4--dcc5873578.novalocal-k8s-calico--kube--controllers--c4bcf989c--7nvgq-eth0" Oct 9 07:17:05.841932 containerd[1457]: 2024-10-09 07:17:05.809 [INFO][4160] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="407ec4cfe465c1ba02d142d61d08506d2b32fb609694a9e8e7fedb4fdc582e6b" 
Namespace="calico-system" Pod="calico-kube-controllers-c4bcf989c-7nvgq" WorkloadEndpoint="ci--3975--2--2--4--dcc5873578.novalocal-k8s-calico--kube--controllers--c4bcf989c--7nvgq-eth0" Oct 9 07:17:05.841932 containerd[1457]: 2024-10-09 07:17:05.809 [INFO][4160] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="407ec4cfe465c1ba02d142d61d08506d2b32fb609694a9e8e7fedb4fdc582e6b" Namespace="calico-system" Pod="calico-kube-controllers-c4bcf989c-7nvgq" WorkloadEndpoint="ci--3975--2--2--4--dcc5873578.novalocal-k8s-calico--kube--controllers--c4bcf989c--7nvgq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--4--dcc5873578.novalocal-k8s-calico--kube--controllers--c4bcf989c--7nvgq-eth0", GenerateName:"calico-kube-controllers-c4bcf989c-", Namespace:"calico-system", SelfLink:"", UID:"d2215775-413f-4eb4-8f14-bbae43713b31", ResourceVersion:"764", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 16, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c4bcf989c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-4-dcc5873578.novalocal", ContainerID:"407ec4cfe465c1ba02d142d61d08506d2b32fb609694a9e8e7fedb4fdc582e6b", Pod:"calico-kube-controllers-c4bcf989c-7nvgq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.101.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali76d5f15d083", MAC:"c6:d5:d1:7a:81:4a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:17:05.841932 containerd[1457]: 2024-10-09 07:17:05.835 [INFO][4160] k8s.go 500: Wrote updated endpoint to datastore ContainerID="407ec4cfe465c1ba02d142d61d08506d2b32fb609694a9e8e7fedb4fdc582e6b" Namespace="calico-system" Pod="calico-kube-controllers-c4bcf989c-7nvgq" WorkloadEndpoint="ci--3975--2--2--4--dcc5873578.novalocal-k8s-calico--kube--controllers--c4bcf989c--7nvgq-eth0" Oct 9 07:17:05.909235 containerd[1457]: time="2024-10-09T07:17:05.908733227Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:17:05.909235 containerd[1457]: time="2024-10-09T07:17:05.908803059Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:17:05.909235 containerd[1457]: time="2024-10-09T07:17:05.908822196Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:17:05.909235 containerd[1457]: time="2024-10-09T07:17:05.908835330Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:17:05.955256 systemd[1]: Started cri-containerd-407ec4cfe465c1ba02d142d61d08506d2b32fb609694a9e8e7fedb4fdc582e6b.scope - libcontainer container 407ec4cfe465c1ba02d142d61d08506d2b32fb609694a9e8e7fedb4fdc582e6b. 
Oct 9 07:17:06.131285 containerd[1457]: time="2024-10-09T07:17:06.131208095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c4bcf989c-7nvgq,Uid:d2215775-413f-4eb4-8f14-bbae43713b31,Namespace:calico-system,Attempt:1,} returns sandbox id \"407ec4cfe465c1ba02d142d61d08506d2b32fb609694a9e8e7fedb4fdc582e6b\"" Oct 9 07:17:06.597439 containerd[1457]: time="2024-10-09T07:17:06.597107476Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:17:06.607072 containerd[1457]: time="2024-10-09T07:17:06.605965118Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.1: active requests=0, bytes read=7642081" Oct 9 07:17:06.607500 containerd[1457]: time="2024-10-09T07:17:06.607286059Z" level=info msg="ImageCreate event name:\"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:17:06.755138 containerd[1457]: time="2024-10-09T07:17:06.755084085Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:17:06.760125 containerd[1457]: time="2024-10-09T07:17:06.760076508Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.1\" with image id \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\", size \"9134482\" in 2.41568541s" Oct 9 07:17:06.760125 containerd[1457]: time="2024-10-09T07:17:06.760128777Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\" returns image reference \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\"" Oct 9 07:17:06.767913 containerd[1457]: 
time="2024-10-09T07:17:06.767864456Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\"" Oct 9 07:17:06.775451 containerd[1457]: time="2024-10-09T07:17:06.775412358Z" level=info msg="CreateContainer within sandbox \"4324a4fe1e390d993c84b2201aa5b7c694a7ffdfa52ac2c488684f9f03768405\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Oct 9 07:17:06.861296 containerd[1457]: time="2024-10-09T07:17:06.861241605Z" level=info msg="CreateContainer within sandbox \"4324a4fe1e390d993c84b2201aa5b7c694a7ffdfa52ac2c488684f9f03768405\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"5df5f463bb7c12de7028be336d50d6641dd8ea04180d53ab7dd0f6ae80116310\"" Oct 9 07:17:06.862003 containerd[1457]: time="2024-10-09T07:17:06.861963979Z" level=info msg="StartContainer for \"5df5f463bb7c12de7028be336d50d6641dd8ea04180d53ab7dd0f6ae80116310\"" Oct 9 07:17:06.962533 systemd[1]: Started cri-containerd-5df5f463bb7c12de7028be336d50d6641dd8ea04180d53ab7dd0f6ae80116310.scope - libcontainer container 5df5f463bb7c12de7028be336d50d6641dd8ea04180d53ab7dd0f6ae80116310. 
Oct 9 07:17:07.054968 containerd[1457]: time="2024-10-09T07:17:07.054887293Z" level=info msg="StartContainer for \"5df5f463bb7c12de7028be336d50d6641dd8ea04180d53ab7dd0f6ae80116310\" returns successfully" Oct 9 07:17:07.578569 systemd-networkd[1373]: cali76d5f15d083: Gained IPv6LL Oct 9 07:17:10.442851 containerd[1457]: time="2024-10-09T07:17:10.441269639Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:17:10.444745 containerd[1457]: time="2024-10-09T07:17:10.444668874Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.1: active requests=0, bytes read=33507125" Oct 9 07:17:10.459452 containerd[1457]: time="2024-10-09T07:17:10.459258559Z" level=info msg="ImageCreate event name:\"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:17:10.475613 containerd[1457]: time="2024-10-09T07:17:10.475244637Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:17:10.477882 containerd[1457]: time="2024-10-09T07:17:10.477569662Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" with image id \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\", size \"34999494\" in 3.709643398s" Oct 9 07:17:10.477882 containerd[1457]: time="2024-10-09T07:17:10.477666135Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" returns image reference \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\"" Oct 9 
07:17:10.481557 containerd[1457]: time="2024-10-09T07:17:10.481421566Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\"" Oct 9 07:17:10.558995 containerd[1457]: time="2024-10-09T07:17:10.558609173Z" level=info msg="CreateContainer within sandbox \"407ec4cfe465c1ba02d142d61d08506d2b32fb609694a9e8e7fedb4fdc582e6b\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Oct 9 07:17:10.579894 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3937043809.mount: Deactivated successfully. Oct 9 07:17:10.586561 containerd[1457]: time="2024-10-09T07:17:10.586438931Z" level=info msg="CreateContainer within sandbox \"407ec4cfe465c1ba02d142d61d08506d2b32fb609694a9e8e7fedb4fdc582e6b\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"efe458fc0d4b64873a5b38dc8688e03734500b39b2714b860ad390affc0decaa\"" Oct 9 07:17:10.587213 containerd[1457]: time="2024-10-09T07:17:10.587107270Z" level=info msg="StartContainer for \"efe458fc0d4b64873a5b38dc8688e03734500b39b2714b860ad390affc0decaa\"" Oct 9 07:17:10.708166 systemd[1]: Started cri-containerd-efe458fc0d4b64873a5b38dc8688e03734500b39b2714b860ad390affc0decaa.scope - libcontainer container efe458fc0d4b64873a5b38dc8688e03734500b39b2714b860ad390affc0decaa. 
Oct 9 07:17:11.186073 containerd[1457]: time="2024-10-09T07:17:11.185000074Z" level=info msg="StartContainer for \"efe458fc0d4b64873a5b38dc8688e03734500b39b2714b860ad390affc0decaa\" returns successfully"
Oct 9 07:17:12.292459 kubelet[2567]: I1009 07:17:12.291883 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-c4bcf989c-7nvgq" podStartSLOduration=37.946109757 podStartE2EDuration="42.291792597s" podCreationTimestamp="2024-10-09 07:16:30 +0000 UTC" firstStartedPulling="2024-10-09 07:17:06.134314312 +0000 UTC m=+48.930879752" lastFinishedPulling="2024-10-09 07:17:10.479997102 +0000 UTC m=+53.276562592" observedRunningTime="2024-10-09 07:17:12.283759131 +0000 UTC m=+55.080324621" watchObservedRunningTime="2024-10-09 07:17:12.291792597 +0000 UTC m=+55.088358087"
Oct 9 07:17:12.324596 systemd[1]: run-containerd-runc-k8s.io-efe458fc0d4b64873a5b38dc8688e03734500b39b2714b860ad390affc0decaa-runc.Uye4I1.mount: Deactivated successfully.
Oct 9 07:17:13.023473 containerd[1457]: time="2024-10-09T07:17:13.023400628Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:17:13.025143 containerd[1457]: time="2024-10-09T07:17:13.025102126Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1: active requests=0, bytes read=12907822"
Oct 9 07:17:13.025610 containerd[1457]: time="2024-10-09T07:17:13.025582067Z" level=info msg="ImageCreate event name:\"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:17:13.028570 containerd[1457]: time="2024-10-09T07:17:13.028514762Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:17:13.029465 containerd[1457]: time="2024-10-09T07:17:13.029431761Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" with image id \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\", size \"14400175\" in 2.547946244s"
Oct 9 07:17:13.029559 containerd[1457]: time="2024-10-09T07:17:13.029541640Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" returns image reference \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\""
Oct 9 07:17:13.033108 containerd[1457]: time="2024-10-09T07:17:13.033069403Z" level=info msg="CreateContainer within sandbox \"4324a4fe1e390d993c84b2201aa5b7c694a7ffdfa52ac2c488684f9f03768405\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Oct 9 07:17:13.051837 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2139039510.mount: Deactivated successfully.
Oct 9 07:17:13.068468 containerd[1457]: time="2024-10-09T07:17:13.068301008Z" level=info msg="CreateContainer within sandbox \"4324a4fe1e390d993c84b2201aa5b7c694a7ffdfa52ac2c488684f9f03768405\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"2dea94e53f9c46b60fc90a633b96ada1c90f4803daee73a204f23b99479ffbce\""
Oct 9 07:17:13.072679 containerd[1457]: time="2024-10-09T07:17:13.069114652Z" level=info msg="StartContainer for \"2dea94e53f9c46b60fc90a633b96ada1c90f4803daee73a204f23b99479ffbce\""
Oct 9 07:17:13.118250 systemd[1]: Started cri-containerd-2dea94e53f9c46b60fc90a633b96ada1c90f4803daee73a204f23b99479ffbce.scope - libcontainer container 2dea94e53f9c46b60fc90a633b96ada1c90f4803daee73a204f23b99479ffbce.
Oct 9 07:17:13.152542 containerd[1457]: time="2024-10-09T07:17:13.152491765Z" level=info msg="StartContainer for \"2dea94e53f9c46b60fc90a633b96ada1c90f4803daee73a204f23b99479ffbce\" returns successfully"
Oct 9 07:17:13.266652 kubelet[2567]: I1009 07:17:13.266512 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-cpr56" podStartSLOduration=34.568295922 podStartE2EDuration="43.266489024s" podCreationTimestamp="2024-10-09 07:16:30 +0000 UTC" firstStartedPulling="2024-10-09 07:17:04.332708789 +0000 UTC m=+47.129274229" lastFinishedPulling="2024-10-09 07:17:13.030901891 +0000 UTC m=+55.827467331" observedRunningTime="2024-10-09 07:17:13.265376984 +0000 UTC m=+56.061942464" watchObservedRunningTime="2024-10-09 07:17:13.266489024 +0000 UTC m=+56.063054464"
Oct 9 07:17:13.843504 kubelet[2567]: I1009 07:17:13.843401 2567 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Oct 9 07:17:13.845909 kubelet[2567]: I1009 07:17:13.845844 2567 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Oct 9 07:17:15.954619 systemd[1]: Created slice kubepods-besteffort-pod32a03057_86d8_4c72_85b3_3fc3dac16ad3.slice - libcontainer container kubepods-besteffort-pod32a03057_86d8_4c72_85b3_3fc3dac16ad3.slice.
Oct 9 07:17:15.973831 systemd[1]: Created slice kubepods-besteffort-pod2427821b_fc27_4c33_89b4_e2a95dea5dad.slice - libcontainer container kubepods-besteffort-pod2427821b_fc27_4c33_89b4_e2a95dea5dad.slice.
Oct 9 07:17:16.026432 kubelet[2567]: I1009 07:17:16.026222 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/32a03057-86d8-4c72-85b3-3fc3dac16ad3-calico-apiserver-certs\") pod \"calico-apiserver-74759c9dd8-qc6jz\" (UID: \"32a03057-86d8-4c72-85b3-3fc3dac16ad3\") " pod="calico-apiserver/calico-apiserver-74759c9dd8-qc6jz"
Oct 9 07:17:16.026432 kubelet[2567]: I1009 07:17:16.026321 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4b59w\" (UniqueName: \"kubernetes.io/projected/2427821b-fc27-4c33-89b4-e2a95dea5dad-kube-api-access-4b59w\") pod \"calico-apiserver-74759c9dd8-dbsvj\" (UID: \"2427821b-fc27-4c33-89b4-e2a95dea5dad\") " pod="calico-apiserver/calico-apiserver-74759c9dd8-dbsvj"
Oct 9 07:17:16.026432 kubelet[2567]: I1009 07:17:16.026366 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2427821b-fc27-4c33-89b4-e2a95dea5dad-calico-apiserver-certs\") pod \"calico-apiserver-74759c9dd8-dbsvj\" (UID: \"2427821b-fc27-4c33-89b4-e2a95dea5dad\") " pod="calico-apiserver/calico-apiserver-74759c9dd8-dbsvj"
Oct 9 07:17:16.026432 kubelet[2567]: I1009 07:17:16.026402 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqzv9\" (UniqueName: \"kubernetes.io/projected/32a03057-86d8-4c72-85b3-3fc3dac16ad3-kube-api-access-hqzv9\") pod \"calico-apiserver-74759c9dd8-qc6jz\" (UID: \"32a03057-86d8-4c72-85b3-3fc3dac16ad3\") " pod="calico-apiserver/calico-apiserver-74759c9dd8-qc6jz"
Oct 9 07:17:16.265759 containerd[1457]: time="2024-10-09T07:17:16.265660402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74759c9dd8-qc6jz,Uid:32a03057-86d8-4c72-85b3-3fc3dac16ad3,Namespace:calico-apiserver,Attempt:0,}"
Oct 9 07:17:16.288924 containerd[1457]: time="2024-10-09T07:17:16.288434027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74759c9dd8-dbsvj,Uid:2427821b-fc27-4c33-89b4-e2a95dea5dad,Namespace:calico-apiserver,Attempt:0,}"
Oct 9 07:17:16.572179 systemd-networkd[1373]: cali81497c55268: Link UP
Oct 9 07:17:16.573138 systemd-networkd[1373]: cali81497c55268: Gained carrier
Oct 9 07:17:16.608216 containerd[1457]: 2024-10-09 07:17:16.389 [INFO][4450] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975--2--2--4--dcc5873578.novalocal-k8s-calico--apiserver--74759c9dd8--qc6jz-eth0 calico-apiserver-74759c9dd8- calico-apiserver 32a03057-86d8-4c72-85b3-3fc3dac16ad3 864 0 2024-10-09 07:17:15 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:74759c9dd8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3975-2-2-4-dcc5873578.novalocal calico-apiserver-74759c9dd8-qc6jz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali81497c55268 [] []}} ContainerID="24cb9c23ab53bcb96c3a2cef300cfab94b3e5f9219be872e6f2003f411310788" Namespace="calico-apiserver" Pod="calico-apiserver-74759c9dd8-qc6jz" WorkloadEndpoint="ci--3975--2--2--4--dcc5873578.novalocal-k8s-calico--apiserver--74759c9dd8--qc6jz-"
Oct 9 07:17:16.608216 containerd[1457]: 2024-10-09 07:17:16.389 [INFO][4450] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="24cb9c23ab53bcb96c3a2cef300cfab94b3e5f9219be872e6f2003f411310788" Namespace="calico-apiserver" Pod="calico-apiserver-74759c9dd8-qc6jz" WorkloadEndpoint="ci--3975--2--2--4--dcc5873578.novalocal-k8s-calico--apiserver--74759c9dd8--qc6jz-eth0"
Oct 9 07:17:16.608216 containerd[1457]: 2024-10-09 07:17:16.441 [INFO][4471] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="24cb9c23ab53bcb96c3a2cef300cfab94b3e5f9219be872e6f2003f411310788" HandleID="k8s-pod-network.24cb9c23ab53bcb96c3a2cef300cfab94b3e5f9219be872e6f2003f411310788" Workload="ci--3975--2--2--4--dcc5873578.novalocal-k8s-calico--apiserver--74759c9dd8--qc6jz-eth0"
Oct 9 07:17:16.608216 containerd[1457]: 2024-10-09 07:17:16.457 [INFO][4471] ipam_plugin.go 270: Auto assigning IP ContainerID="24cb9c23ab53bcb96c3a2cef300cfab94b3e5f9219be872e6f2003f411310788" HandleID="k8s-pod-network.24cb9c23ab53bcb96c3a2cef300cfab94b3e5f9219be872e6f2003f411310788" Workload="ci--3975--2--2--4--dcc5873578.novalocal-k8s-calico--apiserver--74759c9dd8--qc6jz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290e50), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3975-2-2-4-dcc5873578.novalocal", "pod":"calico-apiserver-74759c9dd8-qc6jz", "timestamp":"2024-10-09 07:17:16.441106467 +0000 UTC"}, Hostname:"ci-3975-2-2-4-dcc5873578.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Oct 9 07:17:16.608216 containerd[1457]: 2024-10-09 07:17:16.457 [INFO][4471] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Oct 9 07:17:16.608216 containerd[1457]: 2024-10-09 07:17:16.457 [INFO][4471] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Oct 9 07:17:16.608216 containerd[1457]: 2024-10-09 07:17:16.457 [INFO][4471] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975-2-2-4-dcc5873578.novalocal'
Oct 9 07:17:16.608216 containerd[1457]: 2024-10-09 07:17:16.461 [INFO][4471] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.24cb9c23ab53bcb96c3a2cef300cfab94b3e5f9219be872e6f2003f411310788" host="ci-3975-2-2-4-dcc5873578.novalocal"
Oct 9 07:17:16.608216 containerd[1457]: 2024-10-09 07:17:16.466 [INFO][4471] ipam.go 372: Looking up existing affinities for host host="ci-3975-2-2-4-dcc5873578.novalocal"
Oct 9 07:17:16.608216 containerd[1457]: 2024-10-09 07:17:16.472 [INFO][4471] ipam.go 489: Trying affinity for 192.168.101.0/26 host="ci-3975-2-2-4-dcc5873578.novalocal"
Oct 9 07:17:16.608216 containerd[1457]: 2024-10-09 07:17:16.475 [INFO][4471] ipam.go 155: Attempting to load block cidr=192.168.101.0/26 host="ci-3975-2-2-4-dcc5873578.novalocal"
Oct 9 07:17:16.608216 containerd[1457]: 2024-10-09 07:17:16.485 [INFO][4471] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.101.0/26 host="ci-3975-2-2-4-dcc5873578.novalocal"
Oct 9 07:17:16.608216 containerd[1457]: 2024-10-09 07:17:16.485 [INFO][4471] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.101.0/26 handle="k8s-pod-network.24cb9c23ab53bcb96c3a2cef300cfab94b3e5f9219be872e6f2003f411310788" host="ci-3975-2-2-4-dcc5873578.novalocal"
Oct 9 07:17:16.608216 containerd[1457]: 2024-10-09 07:17:16.493 [INFO][4471] ipam.go 1685: Creating new handle: k8s-pod-network.24cb9c23ab53bcb96c3a2cef300cfab94b3e5f9219be872e6f2003f411310788
Oct 9 07:17:16.608216 containerd[1457]: 2024-10-09 07:17:16.527 [INFO][4471] ipam.go 1203: Writing block in order to claim IPs block=192.168.101.0/26 handle="k8s-pod-network.24cb9c23ab53bcb96c3a2cef300cfab94b3e5f9219be872e6f2003f411310788" host="ci-3975-2-2-4-dcc5873578.novalocal"
Oct 9 07:17:16.608216 containerd[1457]: 2024-10-09 07:17:16.556 [INFO][4471] ipam.go 1216: Successfully claimed IPs: [192.168.101.5/26] block=192.168.101.0/26 handle="k8s-pod-network.24cb9c23ab53bcb96c3a2cef300cfab94b3e5f9219be872e6f2003f411310788" host="ci-3975-2-2-4-dcc5873578.novalocal"
Oct 9 07:17:16.608216 containerd[1457]: 2024-10-09 07:17:16.557 [INFO][4471] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.101.5/26] handle="k8s-pod-network.24cb9c23ab53bcb96c3a2cef300cfab94b3e5f9219be872e6f2003f411310788" host="ci-3975-2-2-4-dcc5873578.novalocal"
Oct 9 07:17:16.608216 containerd[1457]: 2024-10-09 07:17:16.557 [INFO][4471] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 9 07:17:16.608216 containerd[1457]: 2024-10-09 07:17:16.557 [INFO][4471] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.101.5/26] IPv6=[] ContainerID="24cb9c23ab53bcb96c3a2cef300cfab94b3e5f9219be872e6f2003f411310788" HandleID="k8s-pod-network.24cb9c23ab53bcb96c3a2cef300cfab94b3e5f9219be872e6f2003f411310788" Workload="ci--3975--2--2--4--dcc5873578.novalocal-k8s-calico--apiserver--74759c9dd8--qc6jz-eth0"
Oct 9 07:17:16.609145 containerd[1457]: 2024-10-09 07:17:16.562 [INFO][4450] k8s.go 386: Populated endpoint ContainerID="24cb9c23ab53bcb96c3a2cef300cfab94b3e5f9219be872e6f2003f411310788" Namespace="calico-apiserver" Pod="calico-apiserver-74759c9dd8-qc6jz" WorkloadEndpoint="ci--3975--2--2--4--dcc5873578.novalocal-k8s-calico--apiserver--74759c9dd8--qc6jz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--4--dcc5873578.novalocal-k8s-calico--apiserver--74759c9dd8--qc6jz-eth0", GenerateName:"calico-apiserver-74759c9dd8-", Namespace:"calico-apiserver", SelfLink:"", UID:"32a03057-86d8-4c72-85b3-3fc3dac16ad3", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 17, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74759c9dd8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-4-dcc5873578.novalocal", ContainerID:"", Pod:"calico-apiserver-74759c9dd8-qc6jz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.101.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali81497c55268", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 9 07:17:16.609145 containerd[1457]: 2024-10-09 07:17:16.562 [INFO][4450] k8s.go 387: Calico CNI using IPs: [192.168.101.5/32] ContainerID="24cb9c23ab53bcb96c3a2cef300cfab94b3e5f9219be872e6f2003f411310788" Namespace="calico-apiserver" Pod="calico-apiserver-74759c9dd8-qc6jz" WorkloadEndpoint="ci--3975--2--2--4--dcc5873578.novalocal-k8s-calico--apiserver--74759c9dd8--qc6jz-eth0"
Oct 9 07:17:16.609145 containerd[1457]: 2024-10-09 07:17:16.563 [INFO][4450] dataplane_linux.go 68: Setting the host side veth name to cali81497c55268 ContainerID="24cb9c23ab53bcb96c3a2cef300cfab94b3e5f9219be872e6f2003f411310788" Namespace="calico-apiserver" Pod="calico-apiserver-74759c9dd8-qc6jz" WorkloadEndpoint="ci--3975--2--2--4--dcc5873578.novalocal-k8s-calico--apiserver--74759c9dd8--qc6jz-eth0"
Oct 9 07:17:16.609145 containerd[1457]: 2024-10-09 07:17:16.566 [INFO][4450] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="24cb9c23ab53bcb96c3a2cef300cfab94b3e5f9219be872e6f2003f411310788" Namespace="calico-apiserver" Pod="calico-apiserver-74759c9dd8-qc6jz" WorkloadEndpoint="ci--3975--2--2--4--dcc5873578.novalocal-k8s-calico--apiserver--74759c9dd8--qc6jz-eth0"
Oct 9 07:17:16.609145 containerd[1457]: 2024-10-09 07:17:16.568 [INFO][4450] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="24cb9c23ab53bcb96c3a2cef300cfab94b3e5f9219be872e6f2003f411310788" Namespace="calico-apiserver" Pod="calico-apiserver-74759c9dd8-qc6jz" WorkloadEndpoint="ci--3975--2--2--4--dcc5873578.novalocal-k8s-calico--apiserver--74759c9dd8--qc6jz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--4--dcc5873578.novalocal-k8s-calico--apiserver--74759c9dd8--qc6jz-eth0", GenerateName:"calico-apiserver-74759c9dd8-", Namespace:"calico-apiserver", SelfLink:"", UID:"32a03057-86d8-4c72-85b3-3fc3dac16ad3", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 17, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74759c9dd8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-4-dcc5873578.novalocal", ContainerID:"24cb9c23ab53bcb96c3a2cef300cfab94b3e5f9219be872e6f2003f411310788", Pod:"calico-apiserver-74759c9dd8-qc6jz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.101.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali81497c55268", MAC:"96:dc:c8:d6:b6:55", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 9 07:17:16.609145 containerd[1457]: 2024-10-09 07:17:16.595 [INFO][4450] k8s.go 500: Wrote updated endpoint to datastore ContainerID="24cb9c23ab53bcb96c3a2cef300cfab94b3e5f9219be872e6f2003f411310788" Namespace="calico-apiserver" Pod="calico-apiserver-74759c9dd8-qc6jz" WorkloadEndpoint="ci--3975--2--2--4--dcc5873578.novalocal-k8s-calico--apiserver--74759c9dd8--qc6jz-eth0"
Oct 9 07:17:16.665531 containerd[1457]: time="2024-10-09T07:17:16.665397211Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 9 07:17:16.665681 containerd[1457]: time="2024-10-09T07:17:16.665568517Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 07:17:16.665681 containerd[1457]: time="2024-10-09T07:17:16.665628771Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 9 07:17:16.665754 containerd[1457]: time="2024-10-09T07:17:16.665667615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 07:17:16.695517 systemd-networkd[1373]: calib1ae446f24e: Link UP
Oct 9 07:17:16.695712 systemd-networkd[1373]: calib1ae446f24e: Gained carrier
Oct 9 07:17:16.746717 systemd[1]: Started cri-containerd-24cb9c23ab53bcb96c3a2cef300cfab94b3e5f9219be872e6f2003f411310788.scope - libcontainer container 24cb9c23ab53bcb96c3a2cef300cfab94b3e5f9219be872e6f2003f411310788.
Oct 9 07:17:16.750666 containerd[1457]: 2024-10-09 07:17:16.383 [INFO][4458] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975--2--2--4--dcc5873578.novalocal-k8s-calico--apiserver--74759c9dd8--dbsvj-eth0 calico-apiserver-74759c9dd8- calico-apiserver 2427821b-fc27-4c33-89b4-e2a95dea5dad 866 0 2024-10-09 07:17:15 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:74759c9dd8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3975-2-2-4-dcc5873578.novalocal calico-apiserver-74759c9dd8-dbsvj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib1ae446f24e [] []}} ContainerID="4993516bb96051dc50916ae050d585675bc1db42747b47a3e9bc650bad96c0af" Namespace="calico-apiserver" Pod="calico-apiserver-74759c9dd8-dbsvj" WorkloadEndpoint="ci--3975--2--2--4--dcc5873578.novalocal-k8s-calico--apiserver--74759c9dd8--dbsvj-"
Oct 9 07:17:16.750666 containerd[1457]: 2024-10-09 07:17:16.383 [INFO][4458] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4993516bb96051dc50916ae050d585675bc1db42747b47a3e9bc650bad96c0af" Namespace="calico-apiserver" Pod="calico-apiserver-74759c9dd8-dbsvj" WorkloadEndpoint="ci--3975--2--2--4--dcc5873578.novalocal-k8s-calico--apiserver--74759c9dd8--dbsvj-eth0"
Oct 9 07:17:16.750666 containerd[1457]: 2024-10-09 07:17:16.447 [INFO][4472] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4993516bb96051dc50916ae050d585675bc1db42747b47a3e9bc650bad96c0af" HandleID="k8s-pod-network.4993516bb96051dc50916ae050d585675bc1db42747b47a3e9bc650bad96c0af" Workload="ci--3975--2--2--4--dcc5873578.novalocal-k8s-calico--apiserver--74759c9dd8--dbsvj-eth0"
Oct 9 07:17:16.750666 containerd[1457]: 2024-10-09 07:17:16.461 [INFO][4472] ipam_plugin.go 270: Auto assigning IP ContainerID="4993516bb96051dc50916ae050d585675bc1db42747b47a3e9bc650bad96c0af" HandleID="k8s-pod-network.4993516bb96051dc50916ae050d585675bc1db42747b47a3e9bc650bad96c0af" Workload="ci--3975--2--2--4--dcc5873578.novalocal-k8s-calico--apiserver--74759c9dd8--dbsvj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000302470), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3975-2-2-4-dcc5873578.novalocal", "pod":"calico-apiserver-74759c9dd8-dbsvj", "timestamp":"2024-10-09 07:17:16.44714235 +0000 UTC"}, Hostname:"ci-3975-2-2-4-dcc5873578.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Oct 9 07:17:16.750666 containerd[1457]: 2024-10-09 07:17:16.462 [INFO][4472] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Oct 9 07:17:16.750666 containerd[1457]: 2024-10-09 07:17:16.557 [INFO][4472] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Oct 9 07:17:16.750666 containerd[1457]: 2024-10-09 07:17:16.557 [INFO][4472] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975-2-2-4-dcc5873578.novalocal'
Oct 9 07:17:16.750666 containerd[1457]: 2024-10-09 07:17:16.563 [INFO][4472] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4993516bb96051dc50916ae050d585675bc1db42747b47a3e9bc650bad96c0af" host="ci-3975-2-2-4-dcc5873578.novalocal"
Oct 9 07:17:16.750666 containerd[1457]: 2024-10-09 07:17:16.592 [INFO][4472] ipam.go 372: Looking up existing affinities for host host="ci-3975-2-2-4-dcc5873578.novalocal"
Oct 9 07:17:16.750666 containerd[1457]: 2024-10-09 07:17:16.629 [INFO][4472] ipam.go 489: Trying affinity for 192.168.101.0/26 host="ci-3975-2-2-4-dcc5873578.novalocal"
Oct 9 07:17:16.750666 containerd[1457]: 2024-10-09 07:17:16.635 [INFO][4472] ipam.go 155: Attempting to load block cidr=192.168.101.0/26 host="ci-3975-2-2-4-dcc5873578.novalocal"
Oct 9 07:17:16.750666 containerd[1457]: 2024-10-09 07:17:16.645 [INFO][4472] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.101.0/26 host="ci-3975-2-2-4-dcc5873578.novalocal"
Oct 9 07:17:16.750666 containerd[1457]: 2024-10-09 07:17:16.645 [INFO][4472] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.101.0/26 handle="k8s-pod-network.4993516bb96051dc50916ae050d585675bc1db42747b47a3e9bc650bad96c0af" host="ci-3975-2-2-4-dcc5873578.novalocal"
Oct 9 07:17:16.750666 containerd[1457]: 2024-10-09 07:17:16.649 [INFO][4472] ipam.go 1685: Creating new handle: k8s-pod-network.4993516bb96051dc50916ae050d585675bc1db42747b47a3e9bc650bad96c0af
Oct 9 07:17:16.750666 containerd[1457]: 2024-10-09 07:17:16.661 [INFO][4472] ipam.go 1203: Writing block in order to claim IPs block=192.168.101.0/26 handle="k8s-pod-network.4993516bb96051dc50916ae050d585675bc1db42747b47a3e9bc650bad96c0af" host="ci-3975-2-2-4-dcc5873578.novalocal"
Oct 9 07:17:16.750666 containerd[1457]: 2024-10-09 07:17:16.685 [INFO][4472] ipam.go 1216: Successfully claimed IPs: [192.168.101.6/26] block=192.168.101.0/26 handle="k8s-pod-network.4993516bb96051dc50916ae050d585675bc1db42747b47a3e9bc650bad96c0af" host="ci-3975-2-2-4-dcc5873578.novalocal"
Oct 9 07:17:16.750666 containerd[1457]: 2024-10-09 07:17:16.685 [INFO][4472] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.101.6/26] handle="k8s-pod-network.4993516bb96051dc50916ae050d585675bc1db42747b47a3e9bc650bad96c0af" host="ci-3975-2-2-4-dcc5873578.novalocal"
Oct 9 07:17:16.750666 containerd[1457]: 2024-10-09 07:17:16.685 [INFO][4472] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 9 07:17:16.750666 containerd[1457]: 2024-10-09 07:17:16.685 [INFO][4472] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.101.6/26] IPv6=[] ContainerID="4993516bb96051dc50916ae050d585675bc1db42747b47a3e9bc650bad96c0af" HandleID="k8s-pod-network.4993516bb96051dc50916ae050d585675bc1db42747b47a3e9bc650bad96c0af" Workload="ci--3975--2--2--4--dcc5873578.novalocal-k8s-calico--apiserver--74759c9dd8--dbsvj-eth0"
Oct 9 07:17:16.751704 containerd[1457]: 2024-10-09 07:17:16.689 [INFO][4458] k8s.go 386: Populated endpoint ContainerID="4993516bb96051dc50916ae050d585675bc1db42747b47a3e9bc650bad96c0af" Namespace="calico-apiserver" Pod="calico-apiserver-74759c9dd8-dbsvj" WorkloadEndpoint="ci--3975--2--2--4--dcc5873578.novalocal-k8s-calico--apiserver--74759c9dd8--dbsvj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--4--dcc5873578.novalocal-k8s-calico--apiserver--74759c9dd8--dbsvj-eth0", GenerateName:"calico-apiserver-74759c9dd8-", Namespace:"calico-apiserver", SelfLink:"", UID:"2427821b-fc27-4c33-89b4-e2a95dea5dad", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 17, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74759c9dd8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-4-dcc5873578.novalocal", ContainerID:"", Pod:"calico-apiserver-74759c9dd8-dbsvj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.101.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib1ae446f24e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 9 07:17:16.751704 containerd[1457]: 2024-10-09 07:17:16.690 [INFO][4458] k8s.go 387: Calico CNI using IPs: [192.168.101.6/32] ContainerID="4993516bb96051dc50916ae050d585675bc1db42747b47a3e9bc650bad96c0af" Namespace="calico-apiserver" Pod="calico-apiserver-74759c9dd8-dbsvj" WorkloadEndpoint="ci--3975--2--2--4--dcc5873578.novalocal-k8s-calico--apiserver--74759c9dd8--dbsvj-eth0"
Oct 9 07:17:16.751704 containerd[1457]: 2024-10-09 07:17:16.690 [INFO][4458] dataplane_linux.go 68: Setting the host side veth name to calib1ae446f24e ContainerID="4993516bb96051dc50916ae050d585675bc1db42747b47a3e9bc650bad96c0af" Namespace="calico-apiserver" Pod="calico-apiserver-74759c9dd8-dbsvj" WorkloadEndpoint="ci--3975--2--2--4--dcc5873578.novalocal-k8s-calico--apiserver--74759c9dd8--dbsvj-eth0"
Oct 9 07:17:16.751704 containerd[1457]: 2024-10-09 07:17:16.693 [INFO][4458] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="4993516bb96051dc50916ae050d585675bc1db42747b47a3e9bc650bad96c0af" Namespace="calico-apiserver" Pod="calico-apiserver-74759c9dd8-dbsvj" WorkloadEndpoint="ci--3975--2--2--4--dcc5873578.novalocal-k8s-calico--apiserver--74759c9dd8--dbsvj-eth0"
Oct 9 07:17:16.751704 containerd[1457]: 2024-10-09 07:17:16.693 [INFO][4458] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4993516bb96051dc50916ae050d585675bc1db42747b47a3e9bc650bad96c0af" Namespace="calico-apiserver" Pod="calico-apiserver-74759c9dd8-dbsvj" WorkloadEndpoint="ci--3975--2--2--4--dcc5873578.novalocal-k8s-calico--apiserver--74759c9dd8--dbsvj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--4--dcc5873578.novalocal-k8s-calico--apiserver--74759c9dd8--dbsvj-eth0", GenerateName:"calico-apiserver-74759c9dd8-", Namespace:"calico-apiserver", SelfLink:"", UID:"2427821b-fc27-4c33-89b4-e2a95dea5dad", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 17, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74759c9dd8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-4-dcc5873578.novalocal", ContainerID:"4993516bb96051dc50916ae050d585675bc1db42747b47a3e9bc650bad96c0af", Pod:"calico-apiserver-74759c9dd8-dbsvj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.101.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib1ae446f24e", MAC:"86:9f:6f:ba:b3:b2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 9 07:17:16.751704 containerd[1457]: 2024-10-09 07:17:16.723 [INFO][4458] k8s.go 500: Wrote updated endpoint to datastore ContainerID="4993516bb96051dc50916ae050d585675bc1db42747b47a3e9bc650bad96c0af" Namespace="calico-apiserver" Pod="calico-apiserver-74759c9dd8-dbsvj" WorkloadEndpoint="ci--3975--2--2--4--dcc5873578.novalocal-k8s-calico--apiserver--74759c9dd8--dbsvj-eth0"
Oct 9 07:17:16.788693 containerd[1457]: time="2024-10-09T07:17:16.788152314Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 9 07:17:16.788693 containerd[1457]: time="2024-10-09T07:17:16.788248576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 07:17:16.789436 containerd[1457]: time="2024-10-09T07:17:16.788406456Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 9 07:17:16.789701 containerd[1457]: time="2024-10-09T07:17:16.789569431Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 07:17:16.838247 systemd[1]: Started cri-containerd-4993516bb96051dc50916ae050d585675bc1db42747b47a3e9bc650bad96c0af.scope - libcontainer container 4993516bb96051dc50916ae050d585675bc1db42747b47a3e9bc650bad96c0af.
Oct 9 07:17:16.910439 containerd[1457]: time="2024-10-09T07:17:16.910399493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74759c9dd8-qc6jz,Uid:32a03057-86d8-4c72-85b3-3fc3dac16ad3,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"24cb9c23ab53bcb96c3a2cef300cfab94b3e5f9219be872e6f2003f411310788\"" Oct 9 07:17:16.914004 containerd[1457]: time="2024-10-09T07:17:16.913419508Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\"" Oct 9 07:17:16.935839 containerd[1457]: time="2024-10-09T07:17:16.934974192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74759c9dd8-dbsvj,Uid:2427821b-fc27-4c33-89b4-e2a95dea5dad,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"4993516bb96051dc50916ae050d585675bc1db42747b47a3e9bc650bad96c0af\"" Oct 9 07:17:17.470507 containerd[1457]: time="2024-10-09T07:17:17.470435711Z" level=info msg="StopPodSandbox for \"324dcf5dd5358bfe635df0110f9652c27d0f872ee67a8cd3ca15349a7c1ee308\"" Oct 9 07:17:17.591905 containerd[1457]: 2024-10-09 07:17:17.532 [WARNING][4608] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="324dcf5dd5358bfe635df0110f9652c27d0f872ee67a8cd3ca15349a7c1ee308" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--4--dcc5873578.novalocal-k8s-coredns--6f6b679f8f--pckdt-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"6e2f0e11-0f1c-419e-b192-8ef6ffe93a48", ResourceVersion:"756", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 16, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-4-dcc5873578.novalocal", ContainerID:"7c150999eff860cb6e415ca681733f8307396114b24a38ed6a288584c3028365", Pod:"coredns-6f6b679f8f-pckdt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.101.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali09e545f1519", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:17:17.591905 containerd[1457]: 2024-10-09 07:17:17.532 
[INFO][4608] k8s.go 608: Cleaning up netns ContainerID="324dcf5dd5358bfe635df0110f9652c27d0f872ee67a8cd3ca15349a7c1ee308" Oct 9 07:17:17.591905 containerd[1457]: 2024-10-09 07:17:17.532 [INFO][4608] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="324dcf5dd5358bfe635df0110f9652c27d0f872ee67a8cd3ca15349a7c1ee308" iface="eth0" netns="" Oct 9 07:17:17.591905 containerd[1457]: 2024-10-09 07:17:17.532 [INFO][4608] k8s.go 615: Releasing IP address(es) ContainerID="324dcf5dd5358bfe635df0110f9652c27d0f872ee67a8cd3ca15349a7c1ee308" Oct 9 07:17:17.591905 containerd[1457]: 2024-10-09 07:17:17.532 [INFO][4608] utils.go 188: Calico CNI releasing IP address ContainerID="324dcf5dd5358bfe635df0110f9652c27d0f872ee67a8cd3ca15349a7c1ee308" Oct 9 07:17:17.591905 containerd[1457]: 2024-10-09 07:17:17.568 [INFO][4614] ipam_plugin.go 417: Releasing address using handleID ContainerID="324dcf5dd5358bfe635df0110f9652c27d0f872ee67a8cd3ca15349a7c1ee308" HandleID="k8s-pod-network.324dcf5dd5358bfe635df0110f9652c27d0f872ee67a8cd3ca15349a7c1ee308" Workload="ci--3975--2--2--4--dcc5873578.novalocal-k8s-coredns--6f6b679f8f--pckdt-eth0" Oct 9 07:17:17.591905 containerd[1457]: 2024-10-09 07:17:17.568 [INFO][4614] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:17:17.591905 containerd[1457]: 2024-10-09 07:17:17.568 [INFO][4614] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:17:17.591905 containerd[1457]: 2024-10-09 07:17:17.586 [WARNING][4614] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="324dcf5dd5358bfe635df0110f9652c27d0f872ee67a8cd3ca15349a7c1ee308" HandleID="k8s-pod-network.324dcf5dd5358bfe635df0110f9652c27d0f872ee67a8cd3ca15349a7c1ee308" Workload="ci--3975--2--2--4--dcc5873578.novalocal-k8s-coredns--6f6b679f8f--pckdt-eth0" Oct 9 07:17:17.591905 containerd[1457]: 2024-10-09 07:17:17.586 [INFO][4614] ipam_plugin.go 445: Releasing address using workloadID ContainerID="324dcf5dd5358bfe635df0110f9652c27d0f872ee67a8cd3ca15349a7c1ee308" HandleID="k8s-pod-network.324dcf5dd5358bfe635df0110f9652c27d0f872ee67a8cd3ca15349a7c1ee308" Workload="ci--3975--2--2--4--dcc5873578.novalocal-k8s-coredns--6f6b679f8f--pckdt-eth0" Oct 9 07:17:17.591905 containerd[1457]: 2024-10-09 07:17:17.588 [INFO][4614] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:17:17.591905 containerd[1457]: 2024-10-09 07:17:17.590 [INFO][4608] k8s.go 621: Teardown processing complete. ContainerID="324dcf5dd5358bfe635df0110f9652c27d0f872ee67a8cd3ca15349a7c1ee308" Oct 9 07:17:17.591905 containerd[1457]: time="2024-10-09T07:17:17.591543923Z" level=info msg="TearDown network for sandbox \"324dcf5dd5358bfe635df0110f9652c27d0f872ee67a8cd3ca15349a7c1ee308\" successfully" Oct 9 07:17:17.591905 containerd[1457]: time="2024-10-09T07:17:17.591596072Z" level=info msg="StopPodSandbox for \"324dcf5dd5358bfe635df0110f9652c27d0f872ee67a8cd3ca15349a7c1ee308\" returns successfully" Oct 9 07:17:17.596556 containerd[1457]: time="2024-10-09T07:17:17.596508052Z" level=info msg="RemovePodSandbox for \"324dcf5dd5358bfe635df0110f9652c27d0f872ee67a8cd3ca15349a7c1ee308\"" Oct 9 07:17:17.596638 containerd[1457]: time="2024-10-09T07:17:17.596571493Z" level=info msg="Forcibly stopping sandbox \"324dcf5dd5358bfe635df0110f9652c27d0f872ee67a8cd3ca15349a7c1ee308\"" Oct 9 07:17:17.693457 containerd[1457]: 2024-10-09 07:17:17.648 [WARNING][4632] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="324dcf5dd5358bfe635df0110f9652c27d0f872ee67a8cd3ca15349a7c1ee308" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--4--dcc5873578.novalocal-k8s-coredns--6f6b679f8f--pckdt-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"6e2f0e11-0f1c-419e-b192-8ef6ffe93a48", ResourceVersion:"756", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 16, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-4-dcc5873578.novalocal", ContainerID:"7c150999eff860cb6e415ca681733f8307396114b24a38ed6a288584c3028365", Pod:"coredns-6f6b679f8f-pckdt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.101.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali09e545f1519", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:17:17.693457 containerd[1457]: 2024-10-09 07:17:17.649 
[INFO][4632] k8s.go 608: Cleaning up netns ContainerID="324dcf5dd5358bfe635df0110f9652c27d0f872ee67a8cd3ca15349a7c1ee308" Oct 9 07:17:17.693457 containerd[1457]: 2024-10-09 07:17:17.649 [INFO][4632] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="324dcf5dd5358bfe635df0110f9652c27d0f872ee67a8cd3ca15349a7c1ee308" iface="eth0" netns="" Oct 9 07:17:17.693457 containerd[1457]: 2024-10-09 07:17:17.649 [INFO][4632] k8s.go 615: Releasing IP address(es) ContainerID="324dcf5dd5358bfe635df0110f9652c27d0f872ee67a8cd3ca15349a7c1ee308" Oct 9 07:17:17.693457 containerd[1457]: 2024-10-09 07:17:17.649 [INFO][4632] utils.go 188: Calico CNI releasing IP address ContainerID="324dcf5dd5358bfe635df0110f9652c27d0f872ee67a8cd3ca15349a7c1ee308" Oct 9 07:17:17.693457 containerd[1457]: 2024-10-09 07:17:17.681 [INFO][4638] ipam_plugin.go 417: Releasing address using handleID ContainerID="324dcf5dd5358bfe635df0110f9652c27d0f872ee67a8cd3ca15349a7c1ee308" HandleID="k8s-pod-network.324dcf5dd5358bfe635df0110f9652c27d0f872ee67a8cd3ca15349a7c1ee308" Workload="ci--3975--2--2--4--dcc5873578.novalocal-k8s-coredns--6f6b679f8f--pckdt-eth0" Oct 9 07:17:17.693457 containerd[1457]: 2024-10-09 07:17:17.681 [INFO][4638] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:17:17.693457 containerd[1457]: 2024-10-09 07:17:17.681 [INFO][4638] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:17:17.693457 containerd[1457]: 2024-10-09 07:17:17.688 [WARNING][4638] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="324dcf5dd5358bfe635df0110f9652c27d0f872ee67a8cd3ca15349a7c1ee308" HandleID="k8s-pod-network.324dcf5dd5358bfe635df0110f9652c27d0f872ee67a8cd3ca15349a7c1ee308" Workload="ci--3975--2--2--4--dcc5873578.novalocal-k8s-coredns--6f6b679f8f--pckdt-eth0" Oct 9 07:17:17.693457 containerd[1457]: 2024-10-09 07:17:17.688 [INFO][4638] ipam_plugin.go 445: Releasing address using workloadID ContainerID="324dcf5dd5358bfe635df0110f9652c27d0f872ee67a8cd3ca15349a7c1ee308" HandleID="k8s-pod-network.324dcf5dd5358bfe635df0110f9652c27d0f872ee67a8cd3ca15349a7c1ee308" Workload="ci--3975--2--2--4--dcc5873578.novalocal-k8s-coredns--6f6b679f8f--pckdt-eth0" Oct 9 07:17:17.693457 containerd[1457]: 2024-10-09 07:17:17.690 [INFO][4638] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:17:17.693457 containerd[1457]: 2024-10-09 07:17:17.691 [INFO][4632] k8s.go 621: Teardown processing complete. ContainerID="324dcf5dd5358bfe635df0110f9652c27d0f872ee67a8cd3ca15349a7c1ee308" Oct 9 07:17:17.693457 containerd[1457]: time="2024-10-09T07:17:17.693449130Z" level=info msg="TearDown network for sandbox \"324dcf5dd5358bfe635df0110f9652c27d0f872ee67a8cd3ca15349a7c1ee308\" successfully" Oct 9 07:17:17.706978 containerd[1457]: time="2024-10-09T07:17:17.706917026Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"324dcf5dd5358bfe635df0110f9652c27d0f872ee67a8cd3ca15349a7c1ee308\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 9 07:17:17.707247 containerd[1457]: time="2024-10-09T07:17:17.707009040Z" level=info msg="RemovePodSandbox \"324dcf5dd5358bfe635df0110f9652c27d0f872ee67a8cd3ca15349a7c1ee308\" returns successfully" Oct 9 07:17:17.708146 containerd[1457]: time="2024-10-09T07:17:17.707698587Z" level=info msg="StopPodSandbox for \"1123b521fba55693557e7f6f72a87f9837e0f978126665e7e9e30a28d3c5313f\"" Oct 9 07:17:17.826100 containerd[1457]: 2024-10-09 07:17:17.757 [WARNING][4656] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1123b521fba55693557e7f6f72a87f9837e0f978126665e7e9e30a28d3c5313f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--4--dcc5873578.novalocal-k8s-csi--node--driver--cpr56-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5460cbf3-3220-44d8-92e5-2d3cb02a666f", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 16, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"779867c8f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-4-dcc5873578.novalocal", ContainerID:"4324a4fe1e390d993c84b2201aa5b7c694a7ffdfa52ac2c488684f9f03768405", Pod:"csi-node-driver-cpr56", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.101.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"caliab141e647bd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:17:17.826100 containerd[1457]: 2024-10-09 07:17:17.759 [INFO][4656] k8s.go 608: Cleaning up netns ContainerID="1123b521fba55693557e7f6f72a87f9837e0f978126665e7e9e30a28d3c5313f" Oct 9 07:17:17.826100 containerd[1457]: 2024-10-09 07:17:17.759 [INFO][4656] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="1123b521fba55693557e7f6f72a87f9837e0f978126665e7e9e30a28d3c5313f" iface="eth0" netns="" Oct 9 07:17:17.826100 containerd[1457]: 2024-10-09 07:17:17.759 [INFO][4656] k8s.go 615: Releasing IP address(es) ContainerID="1123b521fba55693557e7f6f72a87f9837e0f978126665e7e9e30a28d3c5313f" Oct 9 07:17:17.826100 containerd[1457]: 2024-10-09 07:17:17.759 [INFO][4656] utils.go 188: Calico CNI releasing IP address ContainerID="1123b521fba55693557e7f6f72a87f9837e0f978126665e7e9e30a28d3c5313f" Oct 9 07:17:17.826100 containerd[1457]: 2024-10-09 07:17:17.806 [INFO][4662] ipam_plugin.go 417: Releasing address using handleID ContainerID="1123b521fba55693557e7f6f72a87f9837e0f978126665e7e9e30a28d3c5313f" HandleID="k8s-pod-network.1123b521fba55693557e7f6f72a87f9837e0f978126665e7e9e30a28d3c5313f" Workload="ci--3975--2--2--4--dcc5873578.novalocal-k8s-csi--node--driver--cpr56-eth0" Oct 9 07:17:17.826100 containerd[1457]: 2024-10-09 07:17:17.806 [INFO][4662] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:17:17.826100 containerd[1457]: 2024-10-09 07:17:17.806 [INFO][4662] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:17:17.826100 containerd[1457]: 2024-10-09 07:17:17.819 [WARNING][4662] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1123b521fba55693557e7f6f72a87f9837e0f978126665e7e9e30a28d3c5313f" HandleID="k8s-pod-network.1123b521fba55693557e7f6f72a87f9837e0f978126665e7e9e30a28d3c5313f" Workload="ci--3975--2--2--4--dcc5873578.novalocal-k8s-csi--node--driver--cpr56-eth0" Oct 9 07:17:17.826100 containerd[1457]: 2024-10-09 07:17:17.820 [INFO][4662] ipam_plugin.go 445: Releasing address using workloadID ContainerID="1123b521fba55693557e7f6f72a87f9837e0f978126665e7e9e30a28d3c5313f" HandleID="k8s-pod-network.1123b521fba55693557e7f6f72a87f9837e0f978126665e7e9e30a28d3c5313f" Workload="ci--3975--2--2--4--dcc5873578.novalocal-k8s-csi--node--driver--cpr56-eth0" Oct 9 07:17:17.826100 containerd[1457]: 2024-10-09 07:17:17.821 [INFO][4662] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:17:17.826100 containerd[1457]: 2024-10-09 07:17:17.823 [INFO][4656] k8s.go 621: Teardown processing complete. ContainerID="1123b521fba55693557e7f6f72a87f9837e0f978126665e7e9e30a28d3c5313f" Oct 9 07:17:17.826100 containerd[1457]: time="2024-10-09T07:17:17.825347313Z" level=info msg="TearDown network for sandbox \"1123b521fba55693557e7f6f72a87f9837e0f978126665e7e9e30a28d3c5313f\" successfully" Oct 9 07:17:17.826100 containerd[1457]: time="2024-10-09T07:17:17.825372992Z" level=info msg="StopPodSandbox for \"1123b521fba55693557e7f6f72a87f9837e0f978126665e7e9e30a28d3c5313f\" returns successfully" Oct 9 07:17:17.827704 containerd[1457]: time="2024-10-09T07:17:17.827510192Z" level=info msg="RemovePodSandbox for \"1123b521fba55693557e7f6f72a87f9837e0f978126665e7e9e30a28d3c5313f\"" Oct 9 07:17:17.827704 containerd[1457]: time="2024-10-09T07:17:17.827541993Z" level=info msg="Forcibly stopping sandbox \"1123b521fba55693557e7f6f72a87f9837e0f978126665e7e9e30a28d3c5313f\"" Oct 9 07:17:17.946356 systemd-networkd[1373]: cali81497c55268: Gained IPv6LL Oct 9 07:17:17.955109 containerd[1457]: 2024-10-09 07:17:17.892 [WARNING][4680] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't 
delete WEP. ContainerID="1123b521fba55693557e7f6f72a87f9837e0f978126665e7e9e30a28d3c5313f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--4--dcc5873578.novalocal-k8s-csi--node--driver--cpr56-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5460cbf3-3220-44d8-92e5-2d3cb02a666f", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 16, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"779867c8f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-4-dcc5873578.novalocal", ContainerID:"4324a4fe1e390d993c84b2201aa5b7c694a7ffdfa52ac2c488684f9f03768405", Pod:"csi-node-driver-cpr56", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.101.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"caliab141e647bd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:17:17.955109 containerd[1457]: 2024-10-09 07:17:17.893 [INFO][4680] k8s.go 608: Cleaning up netns ContainerID="1123b521fba55693557e7f6f72a87f9837e0f978126665e7e9e30a28d3c5313f" Oct 9 07:17:17.955109 containerd[1457]: 2024-10-09 07:17:17.893 [INFO][4680] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="1123b521fba55693557e7f6f72a87f9837e0f978126665e7e9e30a28d3c5313f" iface="eth0" netns="" Oct 9 07:17:17.955109 containerd[1457]: 2024-10-09 07:17:17.893 [INFO][4680] k8s.go 615: Releasing IP address(es) ContainerID="1123b521fba55693557e7f6f72a87f9837e0f978126665e7e9e30a28d3c5313f" Oct 9 07:17:17.955109 containerd[1457]: 2024-10-09 07:17:17.893 [INFO][4680] utils.go 188: Calico CNI releasing IP address ContainerID="1123b521fba55693557e7f6f72a87f9837e0f978126665e7e9e30a28d3c5313f" Oct 9 07:17:17.955109 containerd[1457]: 2024-10-09 07:17:17.922 [INFO][4686] ipam_plugin.go 417: Releasing address using handleID ContainerID="1123b521fba55693557e7f6f72a87f9837e0f978126665e7e9e30a28d3c5313f" HandleID="k8s-pod-network.1123b521fba55693557e7f6f72a87f9837e0f978126665e7e9e30a28d3c5313f" Workload="ci--3975--2--2--4--dcc5873578.novalocal-k8s-csi--node--driver--cpr56-eth0" Oct 9 07:17:17.955109 containerd[1457]: 2024-10-09 07:17:17.923 [INFO][4686] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:17:17.955109 containerd[1457]: 2024-10-09 07:17:17.923 [INFO][4686] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:17:17.955109 containerd[1457]: 2024-10-09 07:17:17.936 [WARNING][4686] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1123b521fba55693557e7f6f72a87f9837e0f978126665e7e9e30a28d3c5313f" HandleID="k8s-pod-network.1123b521fba55693557e7f6f72a87f9837e0f978126665e7e9e30a28d3c5313f" Workload="ci--3975--2--2--4--dcc5873578.novalocal-k8s-csi--node--driver--cpr56-eth0" Oct 9 07:17:17.955109 containerd[1457]: 2024-10-09 07:17:17.937 [INFO][4686] ipam_plugin.go 445: Releasing address using workloadID ContainerID="1123b521fba55693557e7f6f72a87f9837e0f978126665e7e9e30a28d3c5313f" HandleID="k8s-pod-network.1123b521fba55693557e7f6f72a87f9837e0f978126665e7e9e30a28d3c5313f" Workload="ci--3975--2--2--4--dcc5873578.novalocal-k8s-csi--node--driver--cpr56-eth0" Oct 9 07:17:17.955109 containerd[1457]: 2024-10-09 07:17:17.941 [INFO][4686] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:17:17.955109 containerd[1457]: 2024-10-09 07:17:17.948 [INFO][4680] k8s.go 621: Teardown processing complete. ContainerID="1123b521fba55693557e7f6f72a87f9837e0f978126665e7e9e30a28d3c5313f" Oct 9 07:17:17.955109 containerd[1457]: time="2024-10-09T07:17:17.954864995Z" level=info msg="TearDown network for sandbox \"1123b521fba55693557e7f6f72a87f9837e0f978126665e7e9e30a28d3c5313f\" successfully" Oct 9 07:17:17.978592 containerd[1457]: time="2024-10-09T07:17:17.977145184Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1123b521fba55693557e7f6f72a87f9837e0f978126665e7e9e30a28d3c5313f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 9 07:17:17.978592 containerd[1457]: time="2024-10-09T07:17:17.977397172Z" level=info msg="RemovePodSandbox \"1123b521fba55693557e7f6f72a87f9837e0f978126665e7e9e30a28d3c5313f\" returns successfully" Oct 9 07:17:17.978592 containerd[1457]: time="2024-10-09T07:17:17.978239347Z" level=info msg="StopPodSandbox for \"c321338cfb2372acf78155d9ded94bb1ddc869b911bd0c92277895715d21688f\"" Oct 9 07:17:18.097735 containerd[1457]: 2024-10-09 07:17:18.039 [WARNING][4704] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c321338cfb2372acf78155d9ded94bb1ddc869b911bd0c92277895715d21688f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--4--dcc5873578.novalocal-k8s-coredns--6f6b679f8f--dl7ww-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"08f94e6c-424e-4778-9aa7-62a9cbd840ab", ResourceVersion:"753", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 16, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-4-dcc5873578.novalocal", ContainerID:"2dca9087ee76c0afc90eba00bb617b51e1c7375e77c95feee826e4df6c5caae3", Pod:"coredns-6f6b679f8f-dl7ww", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.101.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie064d81dc2b", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:17:18.097735 containerd[1457]: 2024-10-09 07:17:18.039 [INFO][4704] k8s.go 608: Cleaning up netns ContainerID="c321338cfb2372acf78155d9ded94bb1ddc869b911bd0c92277895715d21688f" Oct 9 07:17:18.097735 containerd[1457]: 2024-10-09 07:17:18.040 [INFO][4704] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="c321338cfb2372acf78155d9ded94bb1ddc869b911bd0c92277895715d21688f" iface="eth0" netns="" Oct 9 07:17:18.097735 containerd[1457]: 2024-10-09 07:17:18.040 [INFO][4704] k8s.go 615: Releasing IP address(es) ContainerID="c321338cfb2372acf78155d9ded94bb1ddc869b911bd0c92277895715d21688f" Oct 9 07:17:18.097735 containerd[1457]: 2024-10-09 07:17:18.040 [INFO][4704] utils.go 188: Calico CNI releasing IP address ContainerID="c321338cfb2372acf78155d9ded94bb1ddc869b911bd0c92277895715d21688f" Oct 9 07:17:18.097735 containerd[1457]: 2024-10-09 07:17:18.083 [INFO][4710] ipam_plugin.go 417: Releasing address using handleID ContainerID="c321338cfb2372acf78155d9ded94bb1ddc869b911bd0c92277895715d21688f" HandleID="k8s-pod-network.c321338cfb2372acf78155d9ded94bb1ddc869b911bd0c92277895715d21688f" Workload="ci--3975--2--2--4--dcc5873578.novalocal-k8s-coredns--6f6b679f8f--dl7ww-eth0" Oct 9 07:17:18.097735 containerd[1457]: 2024-10-09 07:17:18.083 [INFO][4710] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:17:18.097735 containerd[1457]: 2024-10-09 07:17:18.083 [INFO][4710] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 07:17:18.097735 containerd[1457]: 2024-10-09 07:17:18.092 [WARNING][4710] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="c321338cfb2372acf78155d9ded94bb1ddc869b911bd0c92277895715d21688f" HandleID="k8s-pod-network.c321338cfb2372acf78155d9ded94bb1ddc869b911bd0c92277895715d21688f" Workload="ci--3975--2--2--4--dcc5873578.novalocal-k8s-coredns--6f6b679f8f--dl7ww-eth0" Oct 9 07:17:18.097735 containerd[1457]: 2024-10-09 07:17:18.092 [INFO][4710] ipam_plugin.go 445: Releasing address using workloadID ContainerID="c321338cfb2372acf78155d9ded94bb1ddc869b911bd0c92277895715d21688f" HandleID="k8s-pod-network.c321338cfb2372acf78155d9ded94bb1ddc869b911bd0c92277895715d21688f" Workload="ci--3975--2--2--4--dcc5873578.novalocal-k8s-coredns--6f6b679f8f--dl7ww-eth0" Oct 9 07:17:18.097735 containerd[1457]: 2024-10-09 07:17:18.093 [INFO][4710] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:17:18.097735 containerd[1457]: 2024-10-09 07:17:18.095 [INFO][4704] k8s.go 621: Teardown processing complete. 
ContainerID="c321338cfb2372acf78155d9ded94bb1ddc869b911bd0c92277895715d21688f" Oct 9 07:17:18.099995 containerd[1457]: time="2024-10-09T07:17:18.098238224Z" level=info msg="TearDown network for sandbox \"c321338cfb2372acf78155d9ded94bb1ddc869b911bd0c92277895715d21688f\" successfully" Oct 9 07:17:18.099995 containerd[1457]: time="2024-10-09T07:17:18.098272719Z" level=info msg="StopPodSandbox for \"c321338cfb2372acf78155d9ded94bb1ddc869b911bd0c92277895715d21688f\" returns successfully" Oct 9 07:17:18.099995 containerd[1457]: time="2024-10-09T07:17:18.099870076Z" level=info msg="RemovePodSandbox for \"c321338cfb2372acf78155d9ded94bb1ddc869b911bd0c92277895715d21688f\"" Oct 9 07:17:18.099995 containerd[1457]: time="2024-10-09T07:17:18.099919380Z" level=info msg="Forcibly stopping sandbox \"c321338cfb2372acf78155d9ded94bb1ddc869b911bd0c92277895715d21688f\"" Oct 9 07:17:18.202072 containerd[1457]: 2024-10-09 07:17:18.159 [WARNING][4728] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c321338cfb2372acf78155d9ded94bb1ddc869b911bd0c92277895715d21688f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--4--dcc5873578.novalocal-k8s-coredns--6f6b679f8f--dl7ww-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"08f94e6c-424e-4778-9aa7-62a9cbd840ab", ResourceVersion:"753", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 16, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-4-dcc5873578.novalocal", ContainerID:"2dca9087ee76c0afc90eba00bb617b51e1c7375e77c95feee826e4df6c5caae3", Pod:"coredns-6f6b679f8f-dl7ww", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.101.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie064d81dc2b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:17:18.202072 containerd[1457]: 2024-10-09 07:17:18.160 
[INFO][4728] k8s.go 608: Cleaning up netns ContainerID="c321338cfb2372acf78155d9ded94bb1ddc869b911bd0c92277895715d21688f" Oct 9 07:17:18.202072 containerd[1457]: 2024-10-09 07:17:18.160 [INFO][4728] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="c321338cfb2372acf78155d9ded94bb1ddc869b911bd0c92277895715d21688f" iface="eth0" netns="" Oct 9 07:17:18.202072 containerd[1457]: 2024-10-09 07:17:18.161 [INFO][4728] k8s.go 615: Releasing IP address(es) ContainerID="c321338cfb2372acf78155d9ded94bb1ddc869b911bd0c92277895715d21688f" Oct 9 07:17:18.202072 containerd[1457]: 2024-10-09 07:17:18.161 [INFO][4728] utils.go 188: Calico CNI releasing IP address ContainerID="c321338cfb2372acf78155d9ded94bb1ddc869b911bd0c92277895715d21688f" Oct 9 07:17:18.202072 containerd[1457]: 2024-10-09 07:17:18.187 [INFO][4735] ipam_plugin.go 417: Releasing address using handleID ContainerID="c321338cfb2372acf78155d9ded94bb1ddc869b911bd0c92277895715d21688f" HandleID="k8s-pod-network.c321338cfb2372acf78155d9ded94bb1ddc869b911bd0c92277895715d21688f" Workload="ci--3975--2--2--4--dcc5873578.novalocal-k8s-coredns--6f6b679f8f--dl7ww-eth0" Oct 9 07:17:18.202072 containerd[1457]: 2024-10-09 07:17:18.187 [INFO][4735] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:17:18.202072 containerd[1457]: 2024-10-09 07:17:18.187 [INFO][4735] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:17:18.202072 containerd[1457]: 2024-10-09 07:17:18.194 [WARNING][4735] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c321338cfb2372acf78155d9ded94bb1ddc869b911bd0c92277895715d21688f" HandleID="k8s-pod-network.c321338cfb2372acf78155d9ded94bb1ddc869b911bd0c92277895715d21688f" Workload="ci--3975--2--2--4--dcc5873578.novalocal-k8s-coredns--6f6b679f8f--dl7ww-eth0" Oct 9 07:17:18.202072 containerd[1457]: 2024-10-09 07:17:18.194 [INFO][4735] ipam_plugin.go 445: Releasing address using workloadID ContainerID="c321338cfb2372acf78155d9ded94bb1ddc869b911bd0c92277895715d21688f" HandleID="k8s-pod-network.c321338cfb2372acf78155d9ded94bb1ddc869b911bd0c92277895715d21688f" Workload="ci--3975--2--2--4--dcc5873578.novalocal-k8s-coredns--6f6b679f8f--dl7ww-eth0" Oct 9 07:17:18.202072 containerd[1457]: 2024-10-09 07:17:18.197 [INFO][4735] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:17:18.202072 containerd[1457]: 2024-10-09 07:17:18.199 [INFO][4728] k8s.go 621: Teardown processing complete. ContainerID="c321338cfb2372acf78155d9ded94bb1ddc869b911bd0c92277895715d21688f" Oct 9 07:17:18.202733 containerd[1457]: time="2024-10-09T07:17:18.202094842Z" level=info msg="TearDown network for sandbox \"c321338cfb2372acf78155d9ded94bb1ddc869b911bd0c92277895715d21688f\" successfully" Oct 9 07:17:18.209031 containerd[1457]: time="2024-10-09T07:17:18.208947016Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c321338cfb2372acf78155d9ded94bb1ddc869b911bd0c92277895715d21688f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 9 07:17:18.209161 containerd[1457]: time="2024-10-09T07:17:18.209044421Z" level=info msg="RemovePodSandbox \"c321338cfb2372acf78155d9ded94bb1ddc869b911bd0c92277895715d21688f\" returns successfully" Oct 9 07:17:18.210046 containerd[1457]: time="2024-10-09T07:17:18.209623989Z" level=info msg="StopPodSandbox for \"40ec5ec3486a4649fc04440b65de0f26c8102ca6372e49d2fa0988c122aa271d\"" Oct 9 07:17:18.305618 containerd[1457]: 2024-10-09 07:17:18.261 [WARNING][4753] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="40ec5ec3486a4649fc04440b65de0f26c8102ca6372e49d2fa0988c122aa271d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--4--dcc5873578.novalocal-k8s-calico--kube--controllers--c4bcf989c--7nvgq-eth0", GenerateName:"calico-kube-controllers-c4bcf989c-", Namespace:"calico-system", SelfLink:"", UID:"d2215775-413f-4eb4-8f14-bbae43713b31", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 16, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c4bcf989c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-4-dcc5873578.novalocal", ContainerID:"407ec4cfe465c1ba02d142d61d08506d2b32fb609694a9e8e7fedb4fdc582e6b", Pod:"calico-kube-controllers-c4bcf989c-7nvgq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.101.4/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali76d5f15d083", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:17:18.305618 containerd[1457]: 2024-10-09 07:17:18.262 [INFO][4753] k8s.go 608: Cleaning up netns ContainerID="40ec5ec3486a4649fc04440b65de0f26c8102ca6372e49d2fa0988c122aa271d" Oct 9 07:17:18.305618 containerd[1457]: 2024-10-09 07:17:18.262 [INFO][4753] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="40ec5ec3486a4649fc04440b65de0f26c8102ca6372e49d2fa0988c122aa271d" iface="eth0" netns="" Oct 9 07:17:18.305618 containerd[1457]: 2024-10-09 07:17:18.262 [INFO][4753] k8s.go 615: Releasing IP address(es) ContainerID="40ec5ec3486a4649fc04440b65de0f26c8102ca6372e49d2fa0988c122aa271d" Oct 9 07:17:18.305618 containerd[1457]: 2024-10-09 07:17:18.262 [INFO][4753] utils.go 188: Calico CNI releasing IP address ContainerID="40ec5ec3486a4649fc04440b65de0f26c8102ca6372e49d2fa0988c122aa271d" Oct 9 07:17:18.305618 containerd[1457]: 2024-10-09 07:17:18.289 [INFO][4759] ipam_plugin.go 417: Releasing address using handleID ContainerID="40ec5ec3486a4649fc04440b65de0f26c8102ca6372e49d2fa0988c122aa271d" HandleID="k8s-pod-network.40ec5ec3486a4649fc04440b65de0f26c8102ca6372e49d2fa0988c122aa271d" Workload="ci--3975--2--2--4--dcc5873578.novalocal-k8s-calico--kube--controllers--c4bcf989c--7nvgq-eth0" Oct 9 07:17:18.305618 containerd[1457]: 2024-10-09 07:17:18.290 [INFO][4759] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:17:18.305618 containerd[1457]: 2024-10-09 07:17:18.290 [INFO][4759] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:17:18.305618 containerd[1457]: 2024-10-09 07:17:18.298 [WARNING][4759] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="40ec5ec3486a4649fc04440b65de0f26c8102ca6372e49d2fa0988c122aa271d" HandleID="k8s-pod-network.40ec5ec3486a4649fc04440b65de0f26c8102ca6372e49d2fa0988c122aa271d" Workload="ci--3975--2--2--4--dcc5873578.novalocal-k8s-calico--kube--controllers--c4bcf989c--7nvgq-eth0" Oct 9 07:17:18.305618 containerd[1457]: 2024-10-09 07:17:18.298 [INFO][4759] ipam_plugin.go 445: Releasing address using workloadID ContainerID="40ec5ec3486a4649fc04440b65de0f26c8102ca6372e49d2fa0988c122aa271d" HandleID="k8s-pod-network.40ec5ec3486a4649fc04440b65de0f26c8102ca6372e49d2fa0988c122aa271d" Workload="ci--3975--2--2--4--dcc5873578.novalocal-k8s-calico--kube--controllers--c4bcf989c--7nvgq-eth0" Oct 9 07:17:18.305618 containerd[1457]: 2024-10-09 07:17:18.300 [INFO][4759] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:17:18.305618 containerd[1457]: 2024-10-09 07:17:18.303 [INFO][4753] k8s.go 621: Teardown processing complete. ContainerID="40ec5ec3486a4649fc04440b65de0f26c8102ca6372e49d2fa0988c122aa271d" Oct 9 07:17:18.306590 containerd[1457]: time="2024-10-09T07:17:18.306364412Z" level=info msg="TearDown network for sandbox \"40ec5ec3486a4649fc04440b65de0f26c8102ca6372e49d2fa0988c122aa271d\" successfully" Oct 9 07:17:18.306590 containerd[1457]: time="2024-10-09T07:17:18.306396493Z" level=info msg="StopPodSandbox for \"40ec5ec3486a4649fc04440b65de0f26c8102ca6372e49d2fa0988c122aa271d\" returns successfully" Oct 9 07:17:18.307367 containerd[1457]: time="2024-10-09T07:17:18.307266772Z" level=info msg="RemovePodSandbox for \"40ec5ec3486a4649fc04440b65de0f26c8102ca6372e49d2fa0988c122aa271d\"" Oct 9 07:17:18.307811 containerd[1457]: time="2024-10-09T07:17:18.307495325Z" level=info msg="Forcibly stopping sandbox \"40ec5ec3486a4649fc04440b65de0f26c8102ca6372e49d2fa0988c122aa271d\"" Oct 9 07:17:18.403813 containerd[1457]: 2024-10-09 07:17:18.355 [WARNING][4777] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="40ec5ec3486a4649fc04440b65de0f26c8102ca6372e49d2fa0988c122aa271d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--4--dcc5873578.novalocal-k8s-calico--kube--controllers--c4bcf989c--7nvgq-eth0", GenerateName:"calico-kube-controllers-c4bcf989c-", Namespace:"calico-system", SelfLink:"", UID:"d2215775-413f-4eb4-8f14-bbae43713b31", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 16, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c4bcf989c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-4-dcc5873578.novalocal", ContainerID:"407ec4cfe465c1ba02d142d61d08506d2b32fb609694a9e8e7fedb4fdc582e6b", Pod:"calico-kube-controllers-c4bcf989c-7nvgq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.101.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali76d5f15d083", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:17:18.403813 containerd[1457]: 2024-10-09 07:17:18.355 [INFO][4777] k8s.go 608: Cleaning up netns ContainerID="40ec5ec3486a4649fc04440b65de0f26c8102ca6372e49d2fa0988c122aa271d" Oct 9 07:17:18.403813 containerd[1457]: 2024-10-09 07:17:18.355 [INFO][4777] dataplane_linux.go 526: CleanUpNamespace called with 
no netns name, ignoring. ContainerID="40ec5ec3486a4649fc04440b65de0f26c8102ca6372e49d2fa0988c122aa271d" iface="eth0" netns="" Oct 9 07:17:18.403813 containerd[1457]: 2024-10-09 07:17:18.355 [INFO][4777] k8s.go 615: Releasing IP address(es) ContainerID="40ec5ec3486a4649fc04440b65de0f26c8102ca6372e49d2fa0988c122aa271d" Oct 9 07:17:18.403813 containerd[1457]: 2024-10-09 07:17:18.355 [INFO][4777] utils.go 188: Calico CNI releasing IP address ContainerID="40ec5ec3486a4649fc04440b65de0f26c8102ca6372e49d2fa0988c122aa271d" Oct 9 07:17:18.403813 containerd[1457]: 2024-10-09 07:17:18.385 [INFO][4783] ipam_plugin.go 417: Releasing address using handleID ContainerID="40ec5ec3486a4649fc04440b65de0f26c8102ca6372e49d2fa0988c122aa271d" HandleID="k8s-pod-network.40ec5ec3486a4649fc04440b65de0f26c8102ca6372e49d2fa0988c122aa271d" Workload="ci--3975--2--2--4--dcc5873578.novalocal-k8s-calico--kube--controllers--c4bcf989c--7nvgq-eth0" Oct 9 07:17:18.403813 containerd[1457]: 2024-10-09 07:17:18.385 [INFO][4783] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:17:18.403813 containerd[1457]: 2024-10-09 07:17:18.385 [INFO][4783] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:17:18.403813 containerd[1457]: 2024-10-09 07:17:18.393 [WARNING][4783] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="40ec5ec3486a4649fc04440b65de0f26c8102ca6372e49d2fa0988c122aa271d" HandleID="k8s-pod-network.40ec5ec3486a4649fc04440b65de0f26c8102ca6372e49d2fa0988c122aa271d" Workload="ci--3975--2--2--4--dcc5873578.novalocal-k8s-calico--kube--controllers--c4bcf989c--7nvgq-eth0" Oct 9 07:17:18.403813 containerd[1457]: 2024-10-09 07:17:18.393 [INFO][4783] ipam_plugin.go 445: Releasing address using workloadID ContainerID="40ec5ec3486a4649fc04440b65de0f26c8102ca6372e49d2fa0988c122aa271d" HandleID="k8s-pod-network.40ec5ec3486a4649fc04440b65de0f26c8102ca6372e49d2fa0988c122aa271d" Workload="ci--3975--2--2--4--dcc5873578.novalocal-k8s-calico--kube--controllers--c4bcf989c--7nvgq-eth0" Oct 9 07:17:18.403813 containerd[1457]: 2024-10-09 07:17:18.396 [INFO][4783] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:17:18.403813 containerd[1457]: 2024-10-09 07:17:18.397 [INFO][4777] k8s.go 621: Teardown processing complete. ContainerID="40ec5ec3486a4649fc04440b65de0f26c8102ca6372e49d2fa0988c122aa271d" Oct 9 07:17:18.405190 containerd[1457]: time="2024-10-09T07:17:18.405125545Z" level=info msg="TearDown network for sandbox \"40ec5ec3486a4649fc04440b65de0f26c8102ca6372e49d2fa0988c122aa271d\" successfully" Oct 9 07:17:18.409603 containerd[1457]: time="2024-10-09T07:17:18.409561881Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"40ec5ec3486a4649fc04440b65de0f26c8102ca6372e49d2fa0988c122aa271d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 9 07:17:18.409677 containerd[1457]: time="2024-10-09T07:17:18.409644717Z" level=info msg="RemovePodSandbox \"40ec5ec3486a4649fc04440b65de0f26c8102ca6372e49d2fa0988c122aa271d\" returns successfully" Oct 9 07:17:18.522185 systemd-networkd[1373]: calib1ae446f24e: Gained IPv6LL Oct 9 07:17:20.668097 containerd[1457]: time="2024-10-09T07:17:20.667977094Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:17:20.670879 containerd[1457]: time="2024-10-09T07:17:20.670803437Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=40419849" Oct 9 07:17:20.676699 containerd[1457]: time="2024-10-09T07:17:20.676651213Z" level=info msg="ImageCreate event name:\"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:17:20.679967 containerd[1457]: time="2024-10-09T07:17:20.679900286Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:17:20.680930 containerd[1457]: time="2024-10-09T07:17:20.680707595Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"41912266\" in 3.767251608s" Oct 9 07:17:20.680930 containerd[1457]: time="2024-10-09T07:17:20.680748623Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\"" Oct 9 07:17:20.684049 containerd[1457]: 
time="2024-10-09T07:17:20.682095274Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\"" Oct 9 07:17:20.685462 containerd[1457]: time="2024-10-09T07:17:20.685342564Z" level=info msg="CreateContainer within sandbox \"24cb9c23ab53bcb96c3a2cef300cfab94b3e5f9219be872e6f2003f411310788\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 9 07:17:20.725625 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1755501380.mount: Deactivated successfully. Oct 9 07:17:20.728193 containerd[1457]: time="2024-10-09T07:17:20.728153641Z" level=info msg="CreateContainer within sandbox \"24cb9c23ab53bcb96c3a2cef300cfab94b3e5f9219be872e6f2003f411310788\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b86880f6876693a227825acaec585552a8ec0ff0708ad3a0f4207facea3e7e2a\"" Oct 9 07:17:20.729367 containerd[1457]: time="2024-10-09T07:17:20.729315831Z" level=info msg="StartContainer for \"b86880f6876693a227825acaec585552a8ec0ff0708ad3a0f4207facea3e7e2a\"" Oct 9 07:17:20.779188 systemd[1]: Started cri-containerd-b86880f6876693a227825acaec585552a8ec0ff0708ad3a0f4207facea3e7e2a.scope - libcontainer container b86880f6876693a227825acaec585552a8ec0ff0708ad3a0f4207facea3e7e2a. 
Oct 9 07:17:20.844515 containerd[1457]: time="2024-10-09T07:17:20.844452394Z" level=info msg="StartContainer for \"b86880f6876693a227825acaec585552a8ec0ff0708ad3a0f4207facea3e7e2a\" returns successfully" Oct 9 07:17:21.102835 containerd[1457]: time="2024-10-09T07:17:21.097772892Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:17:21.105323 containerd[1457]: time="2024-10-09T07:17:21.104814786Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=77" Oct 9 07:17:21.122151 containerd[1457]: time="2024-10-09T07:17:21.122079690Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"41912266\" in 439.588646ms" Oct 9 07:17:21.122528 containerd[1457]: time="2024-10-09T07:17:21.122486660Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\"" Oct 9 07:17:21.128434 containerd[1457]: time="2024-10-09T07:17:21.128230617Z" level=info msg="CreateContainer within sandbox \"4993516bb96051dc50916ae050d585675bc1db42747b47a3e9bc650bad96c0af\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 9 07:17:21.169280 containerd[1457]: time="2024-10-09T07:17:21.169163538Z" level=info msg="CreateContainer within sandbox \"4993516bb96051dc50916ae050d585675bc1db42747b47a3e9bc650bad96c0af\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"03bf06c09d8fc6c5aca2475366744e84824887c5c190e3d123cad86e8502ff4a\"" Oct 9 07:17:21.171960 containerd[1457]: 
time="2024-10-09T07:17:21.169929920Z" level=info msg="StartContainer for \"03bf06c09d8fc6c5aca2475366744e84824887c5c190e3d123cad86e8502ff4a\"" Oct 9 07:17:21.204869 systemd[1]: Started cri-containerd-03bf06c09d8fc6c5aca2475366744e84824887c5c190e3d123cad86e8502ff4a.scope - libcontainer container 03bf06c09d8fc6c5aca2475366744e84824887c5c190e3d123cad86e8502ff4a. Oct 9 07:17:21.299434 containerd[1457]: time="2024-10-09T07:17:21.299373738Z" level=info msg="StartContainer for \"03bf06c09d8fc6c5aca2475366744e84824887c5c190e3d123cad86e8502ff4a\" returns successfully" Oct 9 07:17:21.391852 kubelet[2567]: I1009 07:17:21.391745 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-74759c9dd8-dbsvj" podStartSLOduration=2.202711318 podStartE2EDuration="6.391726018s" podCreationTimestamp="2024-10-09 07:17:15 +0000 UTC" firstStartedPulling="2024-10-09 07:17:16.936457243 +0000 UTC m=+59.733022683" lastFinishedPulling="2024-10-09 07:17:21.125471933 +0000 UTC m=+63.922037383" observedRunningTime="2024-10-09 07:17:21.364250098 +0000 UTC m=+64.160815639" watchObservedRunningTime="2024-10-09 07:17:21.391726018 +0000 UTC m=+64.188291468" Oct 9 07:17:22.347078 kubelet[2567]: I1009 07:17:22.344986 2567 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 9 07:17:22.348537 kubelet[2567]: I1009 07:17:22.347689 2567 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 9 07:17:33.912402 systemd[1]: Started sshd@9-172.24.4.220:22-172.24.4.1:37430.service - OpenSSH per-connection server daemon (172.24.4.1:37430). Oct 9 07:17:35.448367 sshd[4924]: Accepted publickey for core from 172.24.4.1 port 37430 ssh2: RSA SHA256:iTqmmSA9RcIkWmF7myyFAqWL8kdaKMdVpBUk8UaNQPM Oct 9 07:17:35.455956 sshd[4924]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:17:35.468677 systemd-logind[1438]: New session 12 of user core. 
Oct 9 07:17:35.480804 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 9 07:17:36.514482 systemd[1]: run-containerd-runc-k8s.io-e26d3cb097854562d3e448bcca77161b54717202f0bd14286184bfa183370003-runc.AOhFgA.mount: Deactivated successfully. Oct 9 07:17:37.912436 sshd[4924]: pam_unix(sshd:session): session closed for user core Oct 9 07:17:37.923407 systemd[1]: sshd@9-172.24.4.220:22-172.24.4.1:37430.service: Deactivated successfully. Oct 9 07:17:37.931268 systemd[1]: session-12.scope: Deactivated successfully. Oct 9 07:17:37.934170 systemd-logind[1438]: Session 12 logged out. Waiting for processes to exit. Oct 9 07:17:37.938975 systemd-logind[1438]: Removed session 12. Oct 9 07:17:42.937635 systemd[1]: Started sshd@10-172.24.4.220:22-172.24.4.1:43182.service - OpenSSH per-connection server daemon (172.24.4.1:43182). Oct 9 07:17:44.317324 sshd[4967]: Accepted publickey for core from 172.24.4.1 port 43182 ssh2: RSA SHA256:iTqmmSA9RcIkWmF7myyFAqWL8kdaKMdVpBUk8UaNQPM Oct 9 07:17:44.320697 sshd[4967]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:17:44.333666 systemd-logind[1438]: New session 13 of user core. Oct 9 07:17:44.340361 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 9 07:17:45.735733 sshd[4967]: pam_unix(sshd:session): session closed for user core Oct 9 07:17:45.744426 systemd[1]: sshd@10-172.24.4.220:22-172.24.4.1:43182.service: Deactivated successfully. Oct 9 07:17:45.748007 systemd[1]: session-13.scope: Deactivated successfully. Oct 9 07:17:45.749875 systemd-logind[1438]: Session 13 logged out. Waiting for processes to exit. Oct 9 07:17:45.752762 systemd-logind[1438]: Removed session 13. Oct 9 07:17:50.762588 systemd[1]: Started sshd@11-172.24.4.220:22-172.24.4.1:49178.service - OpenSSH per-connection server daemon (172.24.4.1:49178). 
Oct 9 07:17:52.405305 sshd[5007]: Accepted publickey for core from 172.24.4.1 port 49178 ssh2: RSA SHA256:iTqmmSA9RcIkWmF7myyFAqWL8kdaKMdVpBUk8UaNQPM Oct 9 07:17:52.408606 sshd[5007]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:17:52.421833 systemd-logind[1438]: New session 14 of user core. Oct 9 07:17:52.428408 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 9 07:17:53.466068 sshd[5007]: pam_unix(sshd:session): session closed for user core Oct 9 07:17:53.481576 systemd[1]: sshd@11-172.24.4.220:22-172.24.4.1:49178.service: Deactivated successfully. Oct 9 07:17:53.487607 systemd[1]: session-14.scope: Deactivated successfully. Oct 9 07:17:53.489891 systemd-logind[1438]: Session 14 logged out. Waiting for processes to exit. Oct 9 07:17:53.499688 systemd[1]: Started sshd@12-172.24.4.220:22-172.24.4.1:49192.service - OpenSSH per-connection server daemon (172.24.4.1:49192). Oct 9 07:17:53.504127 systemd-logind[1438]: Removed session 14. Oct 9 07:17:54.842682 sshd[5021]: Accepted publickey for core from 172.24.4.1 port 49192 ssh2: RSA SHA256:iTqmmSA9RcIkWmF7myyFAqWL8kdaKMdVpBUk8UaNQPM Oct 9 07:17:54.849142 sshd[5021]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:17:54.864111 systemd-logind[1438]: New session 15 of user core. Oct 9 07:17:54.874388 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 9 07:17:55.623419 sshd[5021]: pam_unix(sshd:session): session closed for user core Oct 9 07:17:55.640071 systemd[1]: sshd@12-172.24.4.220:22-172.24.4.1:49192.service: Deactivated successfully. Oct 9 07:17:55.649499 systemd[1]: session-15.scope: Deactivated successfully. Oct 9 07:17:55.656243 systemd-logind[1438]: Session 15 logged out. Waiting for processes to exit. Oct 9 07:17:55.665518 systemd[1]: Started sshd@13-172.24.4.220:22-172.24.4.1:38372.service - OpenSSH per-connection server daemon (172.24.4.1:38372). 
Oct 9 07:17:55.675414 systemd-logind[1438]: Removed session 15. Oct 9 07:17:57.028721 sshd[5039]: Accepted publickey for core from 172.24.4.1 port 38372 ssh2: RSA SHA256:iTqmmSA9RcIkWmF7myyFAqWL8kdaKMdVpBUk8UaNQPM Oct 9 07:17:57.032166 sshd[5039]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:17:57.043329 systemd-logind[1438]: New session 16 of user core. Oct 9 07:17:57.056359 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 9 07:17:57.832399 sshd[5039]: pam_unix(sshd:session): session closed for user core Oct 9 07:17:57.839409 systemd[1]: sshd@13-172.24.4.220:22-172.24.4.1:38372.service: Deactivated successfully. Oct 9 07:17:57.845103 systemd[1]: session-16.scope: Deactivated successfully. Oct 9 07:17:57.849832 systemd-logind[1438]: Session 16 logged out. Waiting for processes to exit. Oct 9 07:17:57.852469 systemd-logind[1438]: Removed session 16. Oct 9 07:18:02.859265 systemd[1]: Started sshd@14-172.24.4.220:22-172.24.4.1:38380.service - OpenSSH per-connection server daemon (172.24.4.1:38380). 
Oct 9 07:18:03.155720 kubelet[2567]: I1009 07:18:03.155439 2567 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 9 07:18:03.265941 kubelet[2567]: I1009 07:18:03.263611 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-74759c9dd8-qc6jz" podStartSLOduration=44.494387488 podStartE2EDuration="48.263575216s" podCreationTimestamp="2024-10-09 07:17:15 +0000 UTC" firstStartedPulling="2024-10-09 07:17:16.91268244 +0000 UTC m=+59.709247880" lastFinishedPulling="2024-10-09 07:17:20.681870157 +0000 UTC m=+63.478435608" observedRunningTime="2024-10-09 07:17:21.391174323 +0000 UTC m=+64.187739763" watchObservedRunningTime="2024-10-09 07:18:03.263575216 +0000 UTC m=+106.060140706" Oct 9 07:18:04.071788 sshd[5055]: Accepted publickey for core from 172.24.4.1 port 38380 ssh2: RSA SHA256:iTqmmSA9RcIkWmF7myyFAqWL8kdaKMdVpBUk8UaNQPM Oct 9 07:18:04.082695 sshd[5055]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:18:04.099478 systemd-logind[1438]: New session 17 of user core. Oct 9 07:18:04.106383 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 9 07:18:04.919227 sshd[5055]: pam_unix(sshd:session): session closed for user core Oct 9 07:18:04.931439 systemd[1]: sshd@14-172.24.4.220:22-172.24.4.1:38380.service: Deactivated successfully. Oct 9 07:18:04.934275 systemd[1]: session-17.scope: Deactivated successfully. Oct 9 07:18:04.935954 systemd-logind[1438]: Session 17 logged out. Waiting for processes to exit. Oct 9 07:18:04.939653 systemd-logind[1438]: Removed session 17. Oct 9 07:18:09.233605 kubelet[2567]: I1009 07:18:09.232716 2567 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 9 07:18:09.950572 systemd[1]: Started sshd@15-172.24.4.220:22-172.24.4.1:53144.service - OpenSSH per-connection server daemon (172.24.4.1:53144). 
Oct 9 07:18:11.314283 sshd[5122]: Accepted publickey for core from 172.24.4.1 port 53144 ssh2: RSA SHA256:iTqmmSA9RcIkWmF7myyFAqWL8kdaKMdVpBUk8UaNQPM Oct 9 07:18:11.322561 sshd[5122]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:18:11.334487 systemd-logind[1438]: New session 18 of user core. Oct 9 07:18:11.341246 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 9 07:18:12.458556 sshd[5122]: pam_unix(sshd:session): session closed for user core Oct 9 07:18:12.464601 systemd[1]: sshd@15-172.24.4.220:22-172.24.4.1:53144.service: Deactivated successfully. Oct 9 07:18:12.468257 systemd[1]: session-18.scope: Deactivated successfully. Oct 9 07:18:12.471537 systemd-logind[1438]: Session 18 logged out. Waiting for processes to exit. Oct 9 07:18:12.475136 systemd-logind[1438]: Removed session 18. Oct 9 07:18:17.483671 systemd[1]: Started sshd@16-172.24.4.220:22-172.24.4.1:53796.service - OpenSSH per-connection server daemon (172.24.4.1:53796). Oct 9 07:18:18.865687 sshd[5142]: Accepted publickey for core from 172.24.4.1 port 53796 ssh2: RSA SHA256:iTqmmSA9RcIkWmF7myyFAqWL8kdaKMdVpBUk8UaNQPM Oct 9 07:18:18.869478 sshd[5142]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:18:18.880725 systemd-logind[1438]: New session 19 of user core. Oct 9 07:18:18.887242 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 9 07:18:19.792013 sshd[5142]: pam_unix(sshd:session): session closed for user core Oct 9 07:18:19.798833 systemd[1]: sshd@16-172.24.4.220:22-172.24.4.1:53796.service: Deactivated successfully. Oct 9 07:18:19.800891 systemd[1]: session-19.scope: Deactivated successfully. Oct 9 07:18:19.803219 systemd-logind[1438]: Session 19 logged out. Waiting for processes to exit. Oct 9 07:18:19.810420 systemd[1]: Started sshd@17-172.24.4.220:22-172.24.4.1:53804.service - OpenSSH per-connection server daemon (172.24.4.1:53804). 
Oct 9 07:18:19.815239 systemd-logind[1438]: Removed session 19.
Oct 9 07:18:21.005087 sshd[5173]: Accepted publickey for core from 172.24.4.1 port 53804 ssh2: RSA SHA256:iTqmmSA9RcIkWmF7myyFAqWL8kdaKMdVpBUk8UaNQPM
Oct 9 07:18:21.008578 sshd[5173]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 9 07:18:21.021487 systemd-logind[1438]: New session 20 of user core.
Oct 9 07:18:21.033719 systemd[1]: Started session-20.scope - Session 20 of User core.
Oct 9 07:18:22.410427 sshd[5173]: pam_unix(sshd:session): session closed for user core
Oct 9 07:18:22.422680 systemd[1]: sshd@17-172.24.4.220:22-172.24.4.1:53804.service: Deactivated successfully.
Oct 9 07:18:22.428794 systemd[1]: session-20.scope: Deactivated successfully.
Oct 9 07:18:22.430891 systemd-logind[1438]: Session 20 logged out. Waiting for processes to exit.
Oct 9 07:18:22.441719 systemd[1]: Started sshd@18-172.24.4.220:22-172.24.4.1:53812.service - OpenSSH per-connection server daemon (172.24.4.1:53812).
Oct 9 07:18:22.444603 systemd-logind[1438]: Removed session 20.
Oct 9 07:18:23.582055 sshd[5184]: Accepted publickey for core from 172.24.4.1 port 53812 ssh2: RSA SHA256:iTqmmSA9RcIkWmF7myyFAqWL8kdaKMdVpBUk8UaNQPM
Oct 9 07:18:23.583307 sshd[5184]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 9 07:18:23.592103 systemd-logind[1438]: New session 21 of user core.
Oct 9 07:18:23.597623 systemd[1]: Started session-21.scope - Session 21 of User core.
Oct 9 07:18:27.357646 sshd[5184]: pam_unix(sshd:session): session closed for user core
Oct 9 07:18:27.377677 systemd[1]: Started sshd@19-172.24.4.220:22-172.24.4.1:49434.service - OpenSSH per-connection server daemon (172.24.4.1:49434).
Oct 9 07:18:27.386625 systemd[1]: sshd@18-172.24.4.220:22-172.24.4.1:53812.service: Deactivated successfully.
Oct 9 07:18:27.392335 systemd[1]: session-21.scope: Deactivated successfully.
Oct 9 07:18:27.400456 systemd-logind[1438]: Session 21 logged out. Waiting for processes to exit.
Oct 9 07:18:27.409396 systemd-logind[1438]: Removed session 21.
Oct 9 07:18:28.649703 sshd[5215]: Accepted publickey for core from 172.24.4.1 port 49434 ssh2: RSA SHA256:iTqmmSA9RcIkWmF7myyFAqWL8kdaKMdVpBUk8UaNQPM
Oct 9 07:18:28.668184 sshd[5215]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 9 07:18:28.684146 systemd-logind[1438]: New session 22 of user core.
Oct 9 07:18:28.692341 systemd[1]: Started session-22.scope - Session 22 of User core.
Oct 9 07:18:30.750817 sshd[5215]: pam_unix(sshd:session): session closed for user core
Oct 9 07:18:30.757563 systemd[1]: sshd@19-172.24.4.220:22-172.24.4.1:49434.service: Deactivated successfully.
Oct 9 07:18:30.760365 systemd[1]: session-22.scope: Deactivated successfully.
Oct 9 07:18:30.763339 systemd-logind[1438]: Session 22 logged out. Waiting for processes to exit.
Oct 9 07:18:30.770391 systemd[1]: Started sshd@20-172.24.4.220:22-172.24.4.1:49448.service - OpenSSH per-connection server daemon (172.24.4.1:49448).
Oct 9 07:18:30.773708 systemd-logind[1438]: Removed session 22.
Oct 9 07:18:32.091967 sshd[5229]: Accepted publickey for core from 172.24.4.1 port 49448 ssh2: RSA SHA256:iTqmmSA9RcIkWmF7myyFAqWL8kdaKMdVpBUk8UaNQPM
Oct 9 07:18:32.097891 sshd[5229]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 9 07:18:32.111180 systemd-logind[1438]: New session 23 of user core.
Oct 9 07:18:32.122883 systemd[1]: Started session-23.scope - Session 23 of User core.
Oct 9 07:18:32.991391 sshd[5229]: pam_unix(sshd:session): session closed for user core
Oct 9 07:18:32.999633 systemd[1]: sshd@20-172.24.4.220:22-172.24.4.1:49448.service: Deactivated successfully.
Oct 9 07:18:33.005781 systemd[1]: session-23.scope: Deactivated successfully.
Oct 9 07:18:33.008108 systemd-logind[1438]: Session 23 logged out. Waiting for processes to exit.
Oct 9 07:18:33.010266 systemd-logind[1438]: Removed session 23.
Oct 9 07:18:37.990481 systemd[1]: Started sshd@21-172.24.4.220:22-172.24.4.1:35268.service - OpenSSH per-connection server daemon (172.24.4.1:35268).
Oct 9 07:18:39.185940 sshd[5274]: Accepted publickey for core from 172.24.4.1 port 35268 ssh2: RSA SHA256:iTqmmSA9RcIkWmF7myyFAqWL8kdaKMdVpBUk8UaNQPM
Oct 9 07:18:39.187158 sshd[5274]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 9 07:18:39.192645 systemd-logind[1438]: New session 24 of user core.
Oct 9 07:18:39.198187 systemd[1]: Started session-24.scope - Session 24 of User core.
Oct 9 07:18:39.917280 sshd[5274]: pam_unix(sshd:session): session closed for user core
Oct 9 07:18:39.927994 systemd[1]: sshd@21-172.24.4.220:22-172.24.4.1:35268.service: Deactivated successfully.
Oct 9 07:18:39.930815 systemd[1]: session-24.scope: Deactivated successfully.
Oct 9 07:18:39.933478 systemd-logind[1438]: Session 24 logged out. Waiting for processes to exit.
Oct 9 07:18:39.935873 systemd-logind[1438]: Removed session 24.
Oct 9 07:18:44.935698 systemd[1]: Started sshd@22-172.24.4.220:22-172.24.4.1:36948.service - OpenSSH per-connection server daemon (172.24.4.1:36948).
Oct 9 07:18:46.320332 sshd[5297]: Accepted publickey for core from 172.24.4.1 port 36948 ssh2: RSA SHA256:iTqmmSA9RcIkWmF7myyFAqWL8kdaKMdVpBUk8UaNQPM
Oct 9 07:18:46.322936 sshd[5297]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 9 07:18:46.339014 systemd-logind[1438]: New session 25 of user core.
Oct 9 07:18:46.351509 systemd[1]: Started session-25.scope - Session 25 of User core.
Oct 9 07:18:47.078344 sshd[5297]: pam_unix(sshd:session): session closed for user core
Oct 9 07:18:47.086304 systemd[1]: sshd@22-172.24.4.220:22-172.24.4.1:36948.service: Deactivated successfully.
Oct 9 07:18:47.090304 systemd[1]: session-25.scope: Deactivated successfully.
Oct 9 07:18:47.094116 systemd-logind[1438]: Session 25 logged out. Waiting for processes to exit.
Oct 9 07:18:47.096951 systemd-logind[1438]: Removed session 25.
Oct 9 07:18:52.103746 systemd[1]: Started sshd@23-172.24.4.220:22-172.24.4.1:36964.service - OpenSSH per-connection server daemon (172.24.4.1:36964).
Oct 9 07:18:53.637407 sshd[5335]: Accepted publickey for core from 172.24.4.1 port 36964 ssh2: RSA SHA256:iTqmmSA9RcIkWmF7myyFAqWL8kdaKMdVpBUk8UaNQPM
Oct 9 07:18:53.640890 sshd[5335]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 9 07:18:53.651996 systemd-logind[1438]: New session 26 of user core.
Oct 9 07:18:53.660392 systemd[1]: Started session-26.scope - Session 26 of User core.
Oct 9 07:18:54.327615 sshd[5335]: pam_unix(sshd:session): session closed for user core
Oct 9 07:18:54.337784 systemd-logind[1438]: Session 26 logged out. Waiting for processes to exit.
Oct 9 07:18:54.338572 systemd[1]: sshd@23-172.24.4.220:22-172.24.4.1:36964.service: Deactivated successfully.
Oct 9 07:18:54.344063 systemd[1]: session-26.scope: Deactivated successfully.
Oct 9 07:18:54.348615 systemd-logind[1438]: Removed session 26.
Oct 9 07:18:59.348624 systemd[1]: Started sshd@24-172.24.4.220:22-172.24.4.1:36196.service - OpenSSH per-connection server daemon (172.24.4.1:36196).
Oct 9 07:19:00.681291 sshd[5354]: Accepted publickey for core from 172.24.4.1 port 36196 ssh2: RSA SHA256:iTqmmSA9RcIkWmF7myyFAqWL8kdaKMdVpBUk8UaNQPM
Oct 9 07:19:00.684629 sshd[5354]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 9 07:19:00.698182 systemd-logind[1438]: New session 27 of user core.
Oct 9 07:19:00.703480 systemd[1]: Started session-27.scope - Session 27 of User core.
Oct 9 07:19:01.458454 sshd[5354]: pam_unix(sshd:session): session closed for user core
Oct 9 07:19:01.470965 systemd[1]: sshd@24-172.24.4.220:22-172.24.4.1:36196.service: Deactivated successfully.
Oct 9 07:19:01.477730 systemd[1]: session-27.scope: Deactivated successfully.
Oct 9 07:19:01.483110 systemd-logind[1438]: Session 27 logged out. Waiting for processes to exit.
Oct 9 07:19:01.486828 systemd-logind[1438]: Removed session 27.
Oct 9 07:19:06.482363 systemd[1]: Started sshd@25-172.24.4.220:22-172.24.4.1:42362.service - OpenSSH per-connection server daemon (172.24.4.1:42362).
Oct 9 07:19:07.763888 sshd[5393]: Accepted publickey for core from 172.24.4.1 port 42362 ssh2: RSA SHA256:iTqmmSA9RcIkWmF7myyFAqWL8kdaKMdVpBUk8UaNQPM
Oct 9 07:19:07.768352 sshd[5393]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 9 07:19:07.782167 systemd-logind[1438]: New session 28 of user core.
Oct 9 07:19:07.790351 systemd[1]: Started session-28.scope - Session 28 of User core.
Oct 9 07:19:09.062793 sshd[5393]: pam_unix(sshd:session): session closed for user core
Oct 9 07:19:09.072134 systemd[1]: sshd@25-172.24.4.220:22-172.24.4.1:42362.service: Deactivated successfully.
Oct 9 07:19:09.076978 systemd[1]: session-28.scope: Deactivated successfully.
Oct 9 07:19:09.079990 systemd-logind[1438]: Session 28 logged out. Waiting for processes to exit.
Oct 9 07:19:09.083209 systemd-logind[1438]: Removed session 28.