Sep 4 20:27:46.978028 kernel: Linux version 6.6.48-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Wed Sep 4 15:49:08 -00 2024
Sep 4 20:27:46.978176 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=6662bd39fec77da4c9a5c59d2cba257325976309ed96904c83697df1825085bf
Sep 4 20:27:46.978199 kernel: BIOS-provided physical RAM map:
Sep 4 20:27:46.978209 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 4 20:27:46.978218 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 4 20:27:46.978227 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 4 20:27:46.978239 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Sep 4 20:27:46.978249 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Sep 4 20:27:46.978258 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 4 20:27:46.978271 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 4 20:27:46.978283 kernel: NX (Execute Disable) protection: active
Sep 4 20:27:46.978292 kernel: APIC: Static calls initialized
Sep 4 20:27:46.978301 kernel: SMBIOS 2.8 present.
Sep 4 20:27:46.978311 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Sep 4 20:27:46.978324 kernel: Hypervisor detected: KVM
Sep 4 20:27:46.978338 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 4 20:27:46.978348 kernel: kvm-clock: using sched offset of 3412950827 cycles
Sep 4 20:27:46.978368 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 4 20:27:46.978379 kernel: tsc: Detected 1999.997 MHz processor
Sep 4 20:27:46.978390 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 4 20:27:46.978407 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 4 20:27:46.978418 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Sep 4 20:27:46.978429 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Sep 4 20:27:46.978440 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 4 20:27:46.978454 kernel: ACPI: Early table checksum verification disabled
Sep 4 20:27:46.978465 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Sep 4 20:27:46.978476 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 20:27:46.978486 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 20:27:46.978497 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 20:27:46.978507 kernel: ACPI: FACS 0x000000007FFE0000 000040
Sep 4 20:27:46.978518 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 20:27:46.978528 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 20:27:46.978538 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 20:27:46.978553 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 20:27:46.978563 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Sep 4 20:27:46.978574 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Sep 4 20:27:46.978584 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Sep 4 20:27:46.978594 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Sep 4 20:27:46.978605 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Sep 4 20:27:46.978616 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Sep 4 20:27:46.978636 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Sep 4 20:27:46.978647 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Sep 4 20:27:46.978658 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Sep 4 20:27:46.978670 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Sep 4 20:27:46.978681 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Sep 4 20:27:46.978693 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Sep 4 20:27:46.978704 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Sep 4 20:27:46.978719 kernel: Zone ranges:
Sep 4 20:27:46.978731 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 4 20:27:46.978742 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Sep 4 20:27:46.978753 kernel: Normal empty
Sep 4 20:27:46.978764 kernel: Movable zone start for each node
Sep 4 20:27:46.978775 kernel: Early memory node ranges
Sep 4 20:27:46.978786 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 4 20:27:46.978798 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Sep 4 20:27:46.978810 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Sep 4 20:27:46.978826 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 4 20:27:46.978837 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 4 20:27:46.978848 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Sep 4 20:27:46.978859 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 4 20:27:46.978870 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 4 20:27:46.978881 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 4 20:27:46.978893 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 4 20:27:46.978904 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 4 20:27:46.978915 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 4 20:27:46.978929 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 4 20:27:46.978940 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 4 20:27:46.978957 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 4 20:27:46.978968 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 4 20:27:46.978980 kernel: TSC deadline timer available
Sep 4 20:27:46.978991 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Sep 4 20:27:46.979002 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 4 20:27:46.979013 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Sep 4 20:27:46.979023 kernel: Booting paravirtualized kernel on KVM
Sep 4 20:27:46.979039 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 4 20:27:46.979054 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Sep 4 20:27:46.982349 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576
Sep 4 20:27:46.982379 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152
Sep 4 20:27:46.982395 kernel: pcpu-alloc: [0] 0 1
Sep 4 20:27:46.982409 kernel: kvm-guest: PV spinlocks disabled, no host support
Sep 4 20:27:46.982426 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=6662bd39fec77da4c9a5c59d2cba257325976309ed96904c83697df1825085bf
Sep 4 20:27:46.982441 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 4 20:27:46.982467 kernel: random: crng init done
Sep 4 20:27:46.982480 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 4 20:27:46.982493 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 4 20:27:46.982506 kernel: Fallback order for Node 0: 0
Sep 4 20:27:46.982520 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
Sep 4 20:27:46.982532 kernel: Policy zone: DMA32
Sep 4 20:27:46.982544 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 4 20:27:46.982558 kernel: Memory: 1965060K/2096612K available (12288K kernel code, 2303K rwdata, 22640K rodata, 49336K init, 2008K bss, 131292K reserved, 0K cma-reserved)
Sep 4 20:27:46.982584 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 4 20:27:46.982602 kernel: Kernel/User page tables isolation: enabled
Sep 4 20:27:46.982615 kernel: ftrace: allocating 37670 entries in 148 pages
Sep 4 20:27:46.982628 kernel: ftrace: allocated 148 pages with 3 groups
Sep 4 20:27:46.982641 kernel: Dynamic Preempt: voluntary
Sep 4 20:27:46.982662 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 4 20:27:46.982677 kernel: rcu: RCU event tracing is enabled.
Sep 4 20:27:46.982690 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 4 20:27:46.982703 kernel: Trampoline variant of Tasks RCU enabled.
Sep 4 20:27:46.982715 kernel: Rude variant of Tasks RCU enabled.
Sep 4 20:27:46.982728 kernel: Tracing variant of Tasks RCU enabled.
Sep 4 20:27:46.982745 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 4 20:27:46.982757 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 4 20:27:46.982770 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Sep 4 20:27:46.982781 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 4 20:27:46.982794 kernel: Console: colour VGA+ 80x25
Sep 4 20:27:46.982806 kernel: printk: console [tty0] enabled
Sep 4 20:27:46.982819 kernel: printk: console [ttyS0] enabled
Sep 4 20:27:46.982831 kernel: ACPI: Core revision 20230628
Sep 4 20:27:46.982845 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 4 20:27:46.982862 kernel: APIC: Switch to symmetric I/O mode setup
Sep 4 20:27:46.982875 kernel: x2apic enabled
Sep 4 20:27:46.982888 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 4 20:27:46.982914 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 4 20:27:46.982926 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a856ed927, max_idle_ns: 881590446804 ns
Sep 4 20:27:46.982939 kernel: Calibrating delay loop (skipped) preset value.. 3999.99 BogoMIPS (lpj=1999997)
Sep 4 20:27:46.982951 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Sep 4 20:27:46.982963 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Sep 4 20:27:46.982992 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 4 20:27:46.983006 kernel: Spectre V2 : Mitigation: Retpolines
Sep 4 20:27:46.983019 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Sep 4 20:27:46.983036 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Sep 4 20:27:46.983049 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Sep 4 20:27:46.983084 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 4 20:27:46.983098 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 4 20:27:46.983111 kernel: MDS: Mitigation: Clear CPU buffers
Sep 4 20:27:46.983124 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 4 20:27:46.983148 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 4 20:27:46.983161 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 4 20:27:46.983174 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 4 20:27:46.983188 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 4 20:27:46.983202 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Sep 4 20:27:46.983215 kernel: Freeing SMP alternatives memory: 32K
Sep 4 20:27:46.983243 kernel: pid_max: default: 32768 minimum: 301
Sep 4 20:27:46.983265 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Sep 4 20:27:46.983284 kernel: SELinux: Initializing.
Sep 4 20:27:46.983297 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 4 20:27:46.983310 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 4 20:27:46.983324 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Sep 4 20:27:46.983337 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Sep 4 20:27:46.983350 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Sep 4 20:27:46.983363 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Sep 4 20:27:46.983376 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Sep 4 20:27:46.983390 kernel: signal: max sigframe size: 1776
Sep 4 20:27:46.983408 kernel: rcu: Hierarchical SRCU implementation.
Sep 4 20:27:46.983421 kernel: rcu: Max phase no-delay instances is 400.
Sep 4 20:27:46.983445 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 4 20:27:46.983458 kernel: smp: Bringing up secondary CPUs ...
Sep 4 20:27:46.983472 kernel: smpboot: x86: Booting SMP configuration:
Sep 4 20:27:46.983485 kernel: .... node #0, CPUs: #1
Sep 4 20:27:46.983498 kernel: smp: Brought up 1 node, 2 CPUs
Sep 4 20:27:46.983511 kernel: smpboot: Max logical packages: 1
Sep 4 20:27:46.983524 kernel: smpboot: Total of 2 processors activated (7999.98 BogoMIPS)
Sep 4 20:27:46.983542 kernel: devtmpfs: initialized
Sep 4 20:27:46.983554 kernel: x86/mm: Memory block size: 128MB
Sep 4 20:27:46.983568 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 4 20:27:46.983582 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 4 20:27:46.983595 kernel: pinctrl core: initialized pinctrl subsystem
Sep 4 20:27:46.983620 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 4 20:27:46.983634 kernel: audit: initializing netlink subsys (disabled)
Sep 4 20:27:46.983654 kernel: audit: type=2000 audit(1725481665.046:1): state=initialized audit_enabled=0 res=1
Sep 4 20:27:46.983667 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 4 20:27:46.983684 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 4 20:27:46.983697 kernel: cpuidle: using governor menu
Sep 4 20:27:46.983710 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 4 20:27:46.983723 kernel: dca service started, version 1.12.1
Sep 4 20:27:46.983736 kernel: PCI: Using configuration type 1 for base access
Sep 4 20:27:46.983749 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 4 20:27:46.983762 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 4 20:27:46.983776 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 4 20:27:46.983789 kernel: ACPI: Added _OSI(Module Device)
Sep 4 20:27:46.983807 kernel: ACPI: Added _OSI(Processor Device)
Sep 4 20:27:46.983821 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Sep 4 20:27:46.983834 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 4 20:27:46.983848 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 4 20:27:46.983862 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 4 20:27:46.983875 kernel: ACPI: Interpreter enabled
Sep 4 20:27:46.983888 kernel: ACPI: PM: (supports S0 S5)
Sep 4 20:27:46.983901 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 4 20:27:46.983914 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 4 20:27:46.983931 kernel: PCI: Using E820 reservations for host bridge windows
Sep 4 20:27:46.983945 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Sep 4 20:27:46.983958 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 4 20:27:46.985674 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Sep 4 20:27:46.985874 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Sep 4 20:27:46.986019 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Sep 4 20:27:46.986038 kernel: acpiphp: Slot [3] registered
Sep 4 20:27:46.987585 kernel: acpiphp: Slot [4] registered
Sep 4 20:27:46.987616 kernel: acpiphp: Slot [5] registered
Sep 4 20:27:46.987630 kernel: acpiphp: Slot [6] registered
Sep 4 20:27:46.987644 kernel: acpiphp: Slot [7] registered
Sep 4 20:27:46.987657 kernel: acpiphp: Slot [8] registered
Sep 4 20:27:46.987669 kernel: acpiphp: Slot [9] registered
Sep 4 20:27:46.987682 kernel: acpiphp: Slot [10] registered
Sep 4 20:27:46.987695 kernel: acpiphp: Slot [11] registered
Sep 4 20:27:46.987708 kernel: acpiphp: Slot [12] registered
Sep 4 20:27:46.987720 kernel: acpiphp: Slot [13] registered
Sep 4 20:27:46.987743 kernel: acpiphp: Slot [14] registered
Sep 4 20:27:46.987755 kernel: acpiphp: Slot [15] registered
Sep 4 20:27:46.987767 kernel: acpiphp: Slot [16] registered
Sep 4 20:27:46.987779 kernel: acpiphp: Slot [17] registered
Sep 4 20:27:46.987791 kernel: acpiphp: Slot [18] registered
Sep 4 20:27:46.987803 kernel: acpiphp: Slot [19] registered
Sep 4 20:27:46.987815 kernel: acpiphp: Slot [20] registered
Sep 4 20:27:46.987827 kernel: acpiphp: Slot [21] registered
Sep 4 20:27:46.987839 kernel: acpiphp: Slot [22] registered
Sep 4 20:27:46.987855 kernel: acpiphp: Slot [23] registered
Sep 4 20:27:46.987867 kernel: acpiphp: Slot [24] registered
Sep 4 20:27:46.987879 kernel: acpiphp: Slot [25] registered
Sep 4 20:27:46.987891 kernel: acpiphp: Slot [26] registered
Sep 4 20:27:46.987903 kernel: acpiphp: Slot [27] registered
Sep 4 20:27:46.987916 kernel: acpiphp: Slot [28] registered
Sep 4 20:27:46.987930 kernel: acpiphp: Slot [29] registered
Sep 4 20:27:46.987943 kernel: acpiphp: Slot [30] registered
Sep 4 20:27:46.987957 kernel: acpiphp: Slot [31] registered
Sep 4 20:27:46.987975 kernel: PCI host bridge to bus 0000:00
Sep 4 20:27:46.988294 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 4 20:27:46.988432 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 4 20:27:46.988551 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 4 20:27:46.988673 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Sep 4 20:27:46.988795 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Sep 4 20:27:46.988948 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 4 20:27:46.991219 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Sep 4 20:27:46.991415 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Sep 4 20:27:46.991575 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Sep 4 20:27:46.991709 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Sep 4 20:27:46.991842 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Sep 4 20:27:46.991971 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Sep 4 20:27:46.992261 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Sep 4 20:27:46.992425 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Sep 4 20:27:46.992586 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Sep 4 20:27:46.992724 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Sep 4 20:27:46.992876 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Sep 4 20:27:46.993010 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Sep 4 20:27:46.993278 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Sep 4 20:27:46.993459 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Sep 4 20:27:46.993687 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Sep 4 20:27:46.993830 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Sep 4 20:27:46.993967 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Sep 4 20:27:46.994135 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Sep 4 20:27:46.994299 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 4 20:27:46.994474 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Sep 4 20:27:46.994615 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Sep 4 20:27:46.994744 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Sep 4 20:27:46.994875 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Sep 4 20:27:46.995059 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Sep 4 20:27:46.995213 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Sep 4 20:27:46.995348 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Sep 4 20:27:46.995485 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Sep 4 20:27:46.995647 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Sep 4 20:27:46.995777 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Sep 4 20:27:46.995910 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Sep 4 20:27:46.996043 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Sep 4 20:27:46.996216 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Sep 4 20:27:46.996364 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Sep 4 20:27:46.996506 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Sep 4 20:27:46.996636 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Sep 4 20:27:46.996791 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Sep 4 20:27:46.996923 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Sep 4 20:27:46.997053 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Sep 4 20:27:46.999521 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Sep 4 20:27:46.999691 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Sep 4 20:27:46.999879 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Sep 4 20:27:47.000029 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Sep 4 20:27:47.000047 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 4 20:27:47.000078 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 4 20:27:47.000091 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 4 20:27:47.000104 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 4 20:27:47.000117 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Sep 4 20:27:47.000129 kernel: iommu: Default domain type: Translated
Sep 4 20:27:47.000148 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 4 20:27:47.000160 kernel: PCI: Using ACPI for IRQ routing
Sep 4 20:27:47.000173 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 4 20:27:47.000186 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 4 20:27:47.000199 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Sep 4 20:27:47.000349 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Sep 4 20:27:47.000485 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Sep 4 20:27:47.000628 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 4 20:27:47.000652 kernel: vgaarb: loaded
Sep 4 20:27:47.000681 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 4 20:27:47.000694 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 4 20:27:47.000717 kernel: clocksource: Switched to clocksource kvm-clock
Sep 4 20:27:47.000730 kernel: VFS: Disk quotas dquot_6.6.0
Sep 4 20:27:47.000744 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 4 20:27:47.000758 kernel: pnp: PnP ACPI init
Sep 4 20:27:47.000772 kernel: pnp: PnP ACPI: found 4 devices
Sep 4 20:27:47.000786 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 4 20:27:47.000805 kernel: NET: Registered PF_INET protocol family
Sep 4 20:27:47.000819 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 4 20:27:47.000833 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Sep 4 20:27:47.000846 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 4 20:27:47.000859 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 4 20:27:47.000872 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Sep 4 20:27:47.000885 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Sep 4 20:27:47.000898 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 4 20:27:47.000910 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 4 20:27:47.000928 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 4 20:27:47.000941 kernel: NET: Registered PF_XDP protocol family
Sep 4 20:27:47.002392 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 4 20:27:47.002567 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 4 20:27:47.002690 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 4 20:27:47.002810 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Sep 4 20:27:47.002927 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Sep 4 20:27:47.003100 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Sep 4 20:27:47.003259 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Sep 4 20:27:47.003279 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Sep 4 20:27:47.003447 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7b0 took 50623 usecs
Sep 4 20:27:47.003466 kernel: PCI: CLS 0 bytes, default 64
Sep 4 20:27:47.003479 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Sep 4 20:27:47.003492 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a856ed927, max_idle_ns: 881590446804 ns
Sep 4 20:27:47.003506 kernel: Initialise system trusted keyrings
Sep 4 20:27:47.003519 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Sep 4 20:27:47.003538 kernel: Key type asymmetric registered
Sep 4 20:27:47.003551 kernel: Asymmetric key parser 'x509' registered
Sep 4 20:27:47.003565 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep 4 20:27:47.003579 kernel: io scheduler mq-deadline registered
Sep 4 20:27:47.003593 kernel: io scheduler kyber registered
Sep 4 20:27:47.003608 kernel: io scheduler bfq registered
Sep 4 20:27:47.003621 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 4 20:27:47.003635 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Sep 4 20:27:47.003649 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Sep 4 20:27:47.003664 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Sep 4 20:27:47.003681 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 4 20:27:47.003693 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 4 20:27:47.003706 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 4 20:27:47.003720 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 4 20:27:47.003732 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 4 20:27:47.003927 kernel: rtc_cmos 00:03: RTC can wake from S4
Sep 4 20:27:47.003951 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 4 20:27:47.004458 kernel: rtc_cmos 00:03: registered as rtc0
Sep 4 20:27:47.004622 kernel: rtc_cmos 00:03: setting system clock to 2024-09-04T20:27:46 UTC (1725481666)
Sep 4 20:27:47.004754 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Sep 4 20:27:47.004772 kernel: intel_pstate: CPU model not supported
Sep 4 20:27:47.004787 kernel: NET: Registered PF_INET6 protocol family
Sep 4 20:27:47.004801 kernel: Segment Routing with IPv6
Sep 4 20:27:47.004814 kernel: In-situ OAM (IOAM) with IPv6
Sep 4 20:27:47.004828 kernel: NET: Registered PF_PACKET protocol family
Sep 4 20:27:47.004842 kernel: Key type dns_resolver registered
Sep 4 20:27:47.004862 kernel: IPI shorthand broadcast: enabled
Sep 4 20:27:47.004875 kernel: sched_clock: Marking stable (1369004758, 151778712)->(1559836819, -39053349)
Sep 4 20:27:47.004888 kernel: registered taskstats version 1
Sep 4 20:27:47.004901 kernel: Loading compiled-in X.509 certificates
Sep 4 20:27:47.004914 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.48-flatcar: a53bb4e7e3319f75620f709d8a6c7aef0adb3b02'
Sep 4 20:27:47.004928 kernel: Key type .fscrypt registered
Sep 4 20:27:47.004942 kernel: Key type fscrypt-provisioning registered
Sep 4 20:27:47.004957 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 4 20:27:47.004970 kernel: ima: Allocated hash algorithm: sha1
Sep 4 20:27:47.004988 kernel: ima: No architecture policies found
Sep 4 20:27:47.005002 kernel: clk: Disabling unused clocks
Sep 4 20:27:47.005015 kernel: Freeing unused kernel image (initmem) memory: 49336K
Sep 4 20:27:47.005029 kernel: Write protecting the kernel read-only data: 36864k
Sep 4 20:27:47.005043 kernel: Freeing unused kernel image (rodata/data gap) memory: 1936K
Sep 4 20:27:47.005104 kernel: Run /init as init process
Sep 4 20:27:47.005123 kernel: with arguments:
Sep 4 20:27:47.005137 kernel: /init
Sep 4 20:27:47.005151 kernel: with environment:
Sep 4 20:27:47.005168 kernel: HOME=/
Sep 4 20:27:47.005182 kernel: TERM=linux
Sep 4 20:27:47.005197 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 4 20:27:47.005215 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 4 20:27:47.005234 systemd[1]: Detected virtualization kvm.
Sep 4 20:27:47.005253 systemd[1]: Detected architecture x86-64.
Sep 4 20:27:47.005266 systemd[1]: Running in initrd.
Sep 4 20:27:47.005279 systemd[1]: No hostname configured, using default hostname.
Sep 4 20:27:47.005295 systemd[1]: Hostname set to .
Sep 4 20:27:47.005309 systemd[1]: Initializing machine ID from VM UUID.
Sep 4 20:27:47.005323 systemd[1]: Queued start job for default target initrd.target.
Sep 4 20:27:47.005337 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 20:27:47.005351 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 20:27:47.005367 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 4 20:27:47.005383 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 20:27:47.005398 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 4 20:27:47.005417 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 4 20:27:47.005435 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 4 20:27:47.005450 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 4 20:27:47.005464 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 20:27:47.005479 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 20:27:47.005493 systemd[1]: Reached target paths.target - Path Units.
Sep 4 20:27:47.005512 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 20:27:47.005526 systemd[1]: Reached target swap.target - Swaps.
Sep 4 20:27:47.005542 systemd[1]: Reached target timers.target - Timer Units.
Sep 4 20:27:47.005560 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 20:27:47.005595 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 20:27:47.005610 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 4 20:27:47.005629 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 4 20:27:47.005644 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 20:27:47.005659 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 20:27:47.005674 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 20:27:47.005688 systemd[1]: Reached target sockets.target - Socket Units.
Sep 4 20:27:47.005702 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 4 20:27:47.005716 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 20:27:47.005747 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 4 20:27:47.005766 systemd[1]: Starting systemd-fsck-usr.service...
Sep 4 20:27:47.005781 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 20:27:47.005795 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 20:27:47.005810 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 20:27:47.005870 systemd-journald[184]: Collecting audit messages is disabled.
Sep 4 20:27:47.005914 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 4 20:27:47.005929 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 20:27:47.005943 systemd[1]: Finished systemd-fsck-usr.service.
Sep 4 20:27:47.005959 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 4 20:27:47.005980 systemd-journald[184]: Journal started
Sep 4 20:27:47.006015 systemd-journald[184]: Runtime Journal (/run/log/journal/41058eb7ade0420db098fc5e2b6d1a25) is 4.9M, max 39.3M, 34.4M free.
Sep 4 20:27:47.007407 systemd-modules-load[185]: Inserted module 'overlay'
Sep 4 20:27:47.056621 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 20:27:47.056675 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 4 20:27:47.063038 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 20:27:47.064758 kernel: Bridge firewalling registered
Sep 4 20:27:47.063776 systemd-modules-load[185]: Inserted module 'br_netfilter'
Sep 4 20:27:47.066723 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 20:27:47.070106 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 20:27:47.082545 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 20:27:47.086383 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 20:27:47.090518 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 4 20:27:47.094097 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Sep 4 20:27:47.113390 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 20:27:47.118051 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 20:27:47.130182 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 20:27:47.142101 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 4 20:27:47.143217 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Sep 4 20:27:47.153557 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 4 20:27:47.158010 dracut-cmdline[217]: dracut-dracut-053
Sep 4 20:27:47.158010 dracut-cmdline[217]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=6662bd39fec77da4c9a5c59d2cba257325976309ed96904c83697df1825085bf
Sep 4 20:27:47.206315 systemd-resolved[224]: Positive Trust Anchors:
Sep 4 20:27:47.207410 systemd-resolved[224]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 4 20:27:47.207468 systemd-resolved[224]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Sep 4 20:27:47.215991 systemd-resolved[224]: Defaulting to hostname 'linux'.
Sep 4 20:27:47.219266 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 4 20:27:47.221149 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 4 20:27:47.286149 kernel: SCSI subsystem initialized
Sep 4 20:27:47.304102 kernel: Loading iSCSI transport class v2.0-870.
Sep 4 20:27:47.327099 kernel: iscsi: registered transport (tcp)
Sep 4 20:27:47.360135 kernel: iscsi: registered transport (qla4xxx)
Sep 4 20:27:47.360230 kernel: QLogic iSCSI HBA Driver
Sep 4 20:27:47.422005 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 4 20:27:47.428389 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 4 20:27:47.479117 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 4 20:27:47.481839 kernel: device-mapper: uevent: version 1.0.3
Sep 4 20:27:47.481922 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 4 20:27:47.544216 kernel: raid6: avx2x4 gen() 14564 MB/s
Sep 4 20:27:47.562156 kernel: raid6: avx2x2 gen() 14811 MB/s
Sep 4 20:27:47.580534 kernel: raid6: avx2x1 gen() 12958 MB/s
Sep 4 20:27:47.580704 kernel: raid6: using algorithm avx2x2 gen() 14811 MB/s
Sep 4 20:27:47.599351 kernel: raid6: .... xor() 14857 MB/s, rmw enabled
Sep 4 20:27:47.599497 kernel: raid6: using avx2x2 recovery algorithm
Sep 4 20:27:47.635121 kernel: xor: automatically using best checksumming function avx
Sep 4 20:27:47.840158 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 4 20:27:47.855502 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 20:27:47.862390 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 20:27:47.892898 systemd-udevd[403]: Using default interface naming scheme 'v255'.
Sep 4 20:27:47.898774 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 20:27:47.907500 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 4 20:27:47.935414 dracut-pre-trigger[408]: rd.md=0: removing MD RAID activation
Sep 4 20:27:47.983000 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 20:27:47.991449 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 20:27:48.069506 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 20:27:48.077613 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 4 20:27:48.102377 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 4 20:27:48.106112 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 20:27:48.107713 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 20:27:48.110528 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 20:27:48.117543 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 4 20:27:48.147774 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 20:27:48.187112 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Sep 4 20:27:48.200129 kernel: scsi host0: Virtio SCSI HBA
Sep 4 20:27:48.209103 kernel: cryptd: max_cpu_qlen set to 1000
Sep 4 20:27:48.213108 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Sep 4 20:27:48.234900 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 4 20:27:48.235013 kernel: GPT:9289727 != 125829119
Sep 4 20:27:48.235027 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 4 20:27:48.237783 kernel: GPT:9289727 != 125829119
Sep 4 20:27:48.237857 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 4 20:27:48.240447 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 4 20:27:48.246161 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 4 20:27:48.246387 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 20:27:48.249322 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 20:27:48.250440 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 20:27:48.255432 kernel: AVX2 version of gcm_enc/dec engaged.
Sep 4 20:27:48.255473 kernel: AES CTR mode by8 optimization enabled
Sep 4 20:27:48.250688 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 20:27:48.254492 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 20:27:48.266531 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 20:27:48.298100 kernel: libata version 3.00 loaded.
Sep 4 20:27:48.304097 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Sep 4 20:27:48.308171 kernel: virtio_blk virtio5: [vdb] 968 512-byte logical blocks (496 kB/484 KiB)
Sep 4 20:27:48.310447 kernel: ata_piix 0000:00:01.1: version 2.13
Sep 4 20:27:48.327935 kernel: scsi host1: ata_piix
Sep 4 20:27:48.331123 kernel: scsi host2: ata_piix
Sep 4 20:27:48.331513 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Sep 4 20:27:48.331538 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Sep 4 20:27:48.381116 kernel: ACPI: bus type USB registered
Sep 4 20:27:48.384097 kernel: usbcore: registered new interface driver usbfs
Sep 4 20:27:48.384177 kernel: usbcore: registered new interface driver hub
Sep 4 20:27:48.384190 kernel: usbcore: registered new device driver usb
Sep 4 20:27:48.396688 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 4 20:27:48.422646 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (461)
Sep 4 20:27:48.422686 kernel: BTRFS: device fsid d110be6f-93a3-451a-b365-11b5d04e0602 devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (455)
Sep 4 20:27:48.427717 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 20:27:48.440332 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 4 20:27:48.447700 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 4 20:27:48.454232 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 4 20:27:48.455119 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 4 20:27:48.466358 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 4 20:27:48.470333 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 20:27:48.478202 disk-uuid[530]: Primary Header is updated.
Sep 4 20:27:48.478202 disk-uuid[530]: Secondary Entries is updated.
Sep 4 20:27:48.478202 disk-uuid[530]: Secondary Header is updated.
Sep 4 20:27:48.486263 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 4 20:27:48.497101 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 4 20:27:48.521972 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 20:27:48.603326 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Sep 4 20:27:48.603676 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Sep 4 20:27:48.604725 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Sep 4 20:27:48.607152 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Sep 4 20:27:48.608396 kernel: hub 1-0:1.0: USB hub found
Sep 4 20:27:48.609434 kernel: hub 1-0:1.0: 2 ports detected
Sep 4 20:27:49.514109 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 4 20:27:49.514865 disk-uuid[532]: The operation has completed successfully.
Sep 4 20:27:49.568544 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 4 20:27:49.568735 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 4 20:27:49.578352 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 4 20:27:49.597025 sh[562]: Success
Sep 4 20:27:49.619118 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Sep 4 20:27:49.697035 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 4 20:27:49.705355 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 4 20:27:49.718049 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 4 20:27:49.735126 kernel: BTRFS info (device dm-0): first mount of filesystem d110be6f-93a3-451a-b365-11b5d04e0602
Sep 4 20:27:49.735240 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 4 20:27:49.737244 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 4 20:27:49.737377 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 4 20:27:49.738639 kernel: BTRFS info (device dm-0): using free space tree
Sep 4 20:27:49.747929 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 4 20:27:49.749650 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 4 20:27:49.756417 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 4 20:27:49.760339 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 4 20:27:49.773097 kernel: BTRFS info (device vda6): first mount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b
Sep 4 20:27:49.775428 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 4 20:27:49.775529 kernel: BTRFS info (device vda6): using free space tree
Sep 4 20:27:49.781095 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 4 20:27:49.795863 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 4 20:27:49.799890 kernel: BTRFS info (device vda6): last unmount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b
Sep 4 20:27:49.808035 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 4 20:27:49.817500 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 4 20:27:49.945002 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 20:27:49.957537 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 4 20:27:49.980808 ignition[652]: Ignition 2.18.0
Sep 4 20:27:49.980829 ignition[652]: Stage: fetch-offline
Sep 4 20:27:49.980910 ignition[652]: no configs at "/usr/lib/ignition/base.d"
Sep 4 20:27:49.980926 ignition[652]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 4 20:27:49.984199 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 20:27:49.981136 ignition[652]: parsed url from cmdline: ""
Sep 4 20:27:49.981141 ignition[652]: no config URL provided
Sep 4 20:27:49.981148 ignition[652]: reading system config file "/usr/lib/ignition/user.ign"
Sep 4 20:27:49.981160 ignition[652]: no config at "/usr/lib/ignition/user.ign"
Sep 4 20:27:49.981169 ignition[652]: failed to fetch config: resource requires networking
Sep 4 20:27:49.981466 ignition[652]: Ignition finished successfully
Sep 4 20:27:49.995156 systemd-networkd[749]: lo: Link UP
Sep 4 20:27:49.995170 systemd-networkd[749]: lo: Gained carrier
Sep 4 20:27:49.997708 systemd-networkd[749]: Enumeration completed
Sep 4 20:27:49.998236 systemd-networkd[749]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Sep 4 20:27:49.998241 systemd-networkd[749]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Sep 4 20:27:49.998403 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 4 20:27:49.999551 systemd-networkd[749]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 20:27:49.999555 systemd-networkd[749]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 4 20:27:49.999876 systemd[1]: Reached target network.target - Network.
Sep 4 20:27:50.000619 systemd-networkd[749]: eth0: Link UP
Sep 4 20:27:50.000626 systemd-networkd[749]: eth0: Gained carrier
Sep 4 20:27:50.000643 systemd-networkd[749]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Sep 4 20:27:50.004188 systemd-networkd[749]: eth1: Link UP
Sep 4 20:27:50.004193 systemd-networkd[749]: eth1: Gained carrier
Sep 4 20:27:50.004208 systemd-networkd[749]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 20:27:50.008746 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 4 20:27:50.022892 systemd-networkd[749]: eth1: DHCPv4 address 10.124.0.2/20 acquired from 169.254.169.253
Sep 4 20:27:50.027290 systemd-networkd[749]: eth0: DHCPv4 address 143.198.146.52/20, gateway 143.198.144.1 acquired from 169.254.169.253
Sep 4 20:27:50.045994 ignition[754]: Ignition 2.18.0
Sep 4 20:27:50.046012 ignition[754]: Stage: fetch
Sep 4 20:27:50.049358 ignition[754]: no configs at "/usr/lib/ignition/base.d"
Sep 4 20:27:50.049383 ignition[754]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 4 20:27:50.049648 ignition[754]: parsed url from cmdline: ""
Sep 4 20:27:50.049656 ignition[754]: no config URL provided
Sep 4 20:27:50.049668 ignition[754]: reading system config file "/usr/lib/ignition/user.ign"
Sep 4 20:27:50.049685 ignition[754]: no config at "/usr/lib/ignition/user.ign"
Sep 4 20:27:50.049721 ignition[754]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Sep 4 20:27:50.066239 ignition[754]: GET result: OK
Sep 4 20:27:50.067348 ignition[754]: parsing config with SHA512: c3c5390504e432a571764f921b2d524886657fb9678604f29db390fd2e15771aa426c6ce41de00536fa22e2f82df7f0db9ce0834da89bf300aaf75af48b1d501
Sep 4 20:27:50.075088 unknown[754]: fetched base config from "system"
Sep 4 20:27:50.075105 unknown[754]: fetched base config from "system"
Sep 4 20:27:50.075665 ignition[754]: fetch: fetch complete
Sep 4 20:27:50.075116 unknown[754]: fetched user config from "digitalocean"
Sep 4 20:27:50.075672 ignition[754]: fetch: fetch passed
Sep 4 20:27:50.078050 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 4 20:27:50.075742 ignition[754]: Ignition finished successfully
Sep 4 20:27:50.085397 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 4 20:27:50.104395 ignition[762]: Ignition 2.18.0
Sep 4 20:27:50.104419 ignition[762]: Stage: kargs
Sep 4 20:27:50.104729 ignition[762]: no configs at "/usr/lib/ignition/base.d"
Sep 4 20:27:50.104749 ignition[762]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 4 20:27:50.107464 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 4 20:27:50.105930 ignition[762]: kargs: kargs passed
Sep 4 20:27:50.106026 ignition[762]: Ignition finished successfully
Sep 4 20:27:50.116496 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 4 20:27:50.136934 ignition[769]: Ignition 2.18.0
Sep 4 20:27:50.136948 ignition[769]: Stage: disks
Sep 4 20:27:50.137199 ignition[769]: no configs at "/usr/lib/ignition/base.d"
Sep 4 20:27:50.139789 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 4 20:27:50.137212 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 4 20:27:50.138398 ignition[769]: disks: disks passed
Sep 4 20:27:50.141409 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 4 20:27:50.138466 ignition[769]: Ignition finished successfully
Sep 4 20:27:50.142690 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 4 20:27:50.148446 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 20:27:50.149432 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 4 20:27:50.150634 systemd[1]: Reached target basic.target - Basic System.
Sep 4 20:27:50.158482 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 4 20:27:50.187693 systemd-fsck[779]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 4 20:27:50.191541 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 4 20:27:50.200416 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 4 20:27:50.337247 kernel: EXT4-fs (vda9): mounted filesystem 84a5cefa-c3c7-47d7-9305-7e6877f73628 r/w with ordered data mode. Quota mode: none.
Sep 4 20:27:50.338490 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 4 20:27:50.339946 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 4 20:27:50.346289 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 20:27:50.349229 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 4 20:27:50.359409 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent...
Sep 4 20:27:50.362054 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (787)
Sep 4 20:27:50.366129 kernel: BTRFS info (device vda6): first mount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b
Sep 4 20:27:50.368100 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 4 20:27:50.368175 kernel: BTRFS info (device vda6): using free space tree
Sep 4 20:27:50.368466 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Sep 4 20:27:50.372942 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 4 20:27:50.375346 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 20:27:50.379369 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 4 20:27:50.385501 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 4 20:27:50.393289 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 4 20:27:50.395644 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 20:27:50.462763 coreos-metadata[789]: Sep 04 20:27:50.462 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Sep 4 20:27:50.475793 coreos-metadata[790]: Sep 04 20:27:50.475 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Sep 4 20:27:50.487972 initrd-setup-root[817]: cut: /sysroot/etc/passwd: No such file or directory
Sep 4 20:27:50.494601 initrd-setup-root[824]: cut: /sysroot/etc/group: No such file or directory
Sep 4 20:27:50.498439 coreos-metadata[790]: Sep 04 20:27:50.498 INFO Fetch successful
Sep 4 20:27:50.500986 coreos-metadata[789]: Sep 04 20:27:50.499 INFO Fetch successful
Sep 4 20:27:50.508610 initrd-setup-root[831]: cut: /sysroot/etc/shadow: No such file or directory
Sep 4 20:27:50.509462 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully.
Sep 4 20:27:50.509648 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent.
Sep 4 20:27:50.515792 coreos-metadata[790]: Sep 04 20:27:50.512 INFO wrote hostname ci-3975.2.1-5-b3ba9b7107 to /sysroot/etc/hostname
Sep 4 20:27:50.518524 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 4 20:27:50.521487 initrd-setup-root[839]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 4 20:27:50.664499 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 4 20:27:50.671341 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 4 20:27:50.680684 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 4 20:27:50.694168 kernel: BTRFS info (device vda6): last unmount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b
Sep 4 20:27:50.715931 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 4 20:27:50.726423 ignition[908]: INFO : Ignition 2.18.0
Sep 4 20:27:50.726423 ignition[908]: INFO : Stage: mount
Sep 4 20:27:50.728262 ignition[908]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 20:27:50.728262 ignition[908]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 4 20:27:50.730259 ignition[908]: INFO : mount: mount passed
Sep 4 20:27:50.730259 ignition[908]: INFO : Ignition finished successfully
Sep 4 20:27:50.729789 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 4 20:27:50.733007 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 4 20:27:50.739354 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 4 20:27:50.754480 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 20:27:50.775159 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (921)
Sep 4 20:27:50.777535 kernel: BTRFS info (device vda6): first mount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b
Sep 4 20:27:50.777708 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 4 20:27:50.779541 kernel: BTRFS info (device vda6): using free space tree
Sep 4 20:27:50.783137 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 4 20:27:50.786115 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 20:27:50.822101 ignition[938]: INFO : Ignition 2.18.0
Sep 4 20:27:50.822101 ignition[938]: INFO : Stage: files
Sep 4 20:27:50.822101 ignition[938]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 20:27:50.822101 ignition[938]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 4 20:27:50.825103 ignition[938]: DEBUG : files: compiled without relabeling support, skipping
Sep 4 20:27:50.826439 ignition[938]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 4 20:27:50.826439 ignition[938]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 4 20:27:50.830462 ignition[938]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 4 20:27:50.831522 ignition[938]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 4 20:27:50.831522 ignition[938]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 4 20:27:50.831189 unknown[938]: wrote ssh authorized keys file for user: core
Sep 4 20:27:50.835128 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 4 20:27:50.835128 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Sep 4 20:27:51.002884 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 4 20:27:51.057978 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 4 20:27:51.059387 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 4 20:27:51.059387 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 4 20:27:51.059387 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 20:27:51.059387 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 20:27:51.059387 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 20:27:51.059387 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 20:27:51.059387 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 20:27:51.059387 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 20:27:51.067841 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 20:27:51.067841 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 20:27:51.067841 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Sep 4 20:27:51.067841 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Sep 4 20:27:51.067841 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Sep 4 20:27:51.067841 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-x86-64.raw: attempt #1
Sep 4 20:27:51.244546 systemd-networkd[749]: eth0: Gained IPv6LL
Sep 4 20:27:51.245122 systemd-networkd[749]: eth1: Gained IPv6LL
Sep 4 20:27:51.331476 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 4 20:27:51.636793 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Sep 4 20:27:51.636793 ignition[938]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 4 20:27:51.640372 ignition[938]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 20:27:51.640372 ignition[938]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 20:27:51.640372 ignition[938]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 4 20:27:51.640372 ignition[938]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Sep 4 20:27:51.640372 ignition[938]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Sep 4 20:27:51.646006 ignition[938]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 20:27:51.646006 ignition[938]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 20:27:51.646006 ignition[938]: INFO : files: files passed
Sep 4 20:27:51.646006 ignition[938]: INFO : Ignition finished successfully
Sep 4 20:27:51.642641 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 4 20:27:51.649405 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 4 20:27:51.652756 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 4 20:27:51.661295 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 4 20:27:51.662450 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 4 20:27:51.672011 initrd-setup-root-after-ignition[967]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 20:27:51.672011 initrd-setup-root-after-ignition[967]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 20:27:51.674557 initrd-setup-root-after-ignition[971]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 20:27:51.675659 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 20:27:51.676996 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 4 20:27:51.684527 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 4 20:27:51.732195 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 4 20:27:51.732399 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 4 20:27:51.734798 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 4 20:27:51.735928 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 4 20:27:51.737528 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 4 20:27:51.748820 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 4 20:27:51.766852 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 20:27:51.773435 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 4 20:27:51.801296 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 4 20:27:51.802354 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 20:27:51.804267 systemd[1]: Stopped target timers.target - Timer Units.
Sep 4 20:27:51.805814 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 4 20:27:51.806034 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 20:27:51.807919 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 4 20:27:51.808887 systemd[1]: Stopped target basic.target - Basic System.
Sep 4 20:27:51.810418 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 4 20:27:51.811943 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 20:27:51.813290 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 4 20:27:51.814747 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 4 20:27:51.816092 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 20:27:51.817740 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 4 20:27:51.819112 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 4 20:27:51.820730 systemd[1]: Stopped target swap.target - Swaps.
Sep 4 20:27:51.821879 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 4 20:27:51.822022 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 20:27:51.823605 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 4 20:27:51.824390 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 20:27:51.825505 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 4 20:27:51.825716 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 20:27:51.827044 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 4 20:27:51.827226 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 4 20:27:51.828906 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 4 20:27:51.829176 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 20:27:51.830665 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 4 20:27:51.830877 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 4 20:27:51.831910 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Sep 4 20:27:51.832051 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 4 20:27:51.847050 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 4 20:27:51.850424 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 4 20:27:51.850986 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 4 20:27:51.851208 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 20:27:51.854352 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 4 20:27:51.854493 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 20:27:51.861394 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 4 20:27:51.861879 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 4 20:27:51.877559 ignition[991]: INFO : Ignition 2.18.0
Sep 4 20:27:51.877559 ignition[991]: INFO : Stage: umount
Sep 4 20:27:51.877559 ignition[991]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 20:27:51.877559 ignition[991]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 4 20:27:51.880838 ignition[991]: INFO : umount: umount passed
Sep 4 20:27:51.881713 ignition[991]: INFO : Ignition finished successfully
Sep 4 20:27:51.883780 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 4 20:27:51.883889 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 4 20:27:51.887358 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 4 20:27:51.887517 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 4 20:27:51.888199 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 4 20:27:51.888244 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 4 20:27:51.889594 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 4 20:27:51.889663 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 4 20:27:51.892267 systemd[1]: Stopped target network.target - Network.
Sep 4 20:27:51.900289 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 4 20:27:51.900401 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 20:27:51.901252 systemd[1]: Stopped target paths.target - Path Units.
Sep 4 20:27:51.902792 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 4 20:27:51.906195 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 20:27:51.907574 systemd[1]: Stopped target slices.target - Slice Units.
Sep 4 20:27:51.908835 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 4 20:27:51.910088 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 4 20:27:51.910146 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 20:27:51.911253 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 4 20:27:51.911316 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 20:27:51.912567 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 4 20:27:51.912647 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 4 20:27:51.913905 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 4 20:27:51.913978 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 4 20:27:51.915198 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 4 20:27:51.916549 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 4 20:27:51.919186 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 4 20:27:51.919953 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 4 20:27:51.920104 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 4 20:27:51.920193 systemd-networkd[749]: eth0: DHCPv6 lease lost
Sep 4 20:27:51.922821 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 4 20:27:51.922986 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 4 20:27:51.924179 systemd-networkd[749]: eth1: DHCPv6 lease lost
Sep 4 20:27:51.927729 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 4 20:27:51.927939 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 4 20:27:51.931730 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 4 20:27:51.932286 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 4 20:27:51.936004 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 4 20:27:51.936124 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 20:27:51.943426 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 4 20:27:51.944255 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 4 20:27:51.944385 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 20:27:51.945454 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 4 20:27:51.945658 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 4 20:27:51.948572 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 4 20:27:51.948666 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 4 20:27:51.950008 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 4 20:27:51.950555 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Sep 4 20:27:51.951698 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 20:27:51.968349 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 4 20:27:51.968516 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 4 20:27:51.976794 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 4 20:27:51.977037 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 20:27:51.979431 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 4 20:27:51.979543 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 4 20:27:51.981099 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 4 20:27:51.981165 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 20:27:51.982752 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 4 20:27:51.982846 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 20:27:51.984681 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 4 20:27:51.984749 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 4 20:27:51.986001 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 4 20:27:51.986154 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 20:27:51.994489 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 4 20:27:51.995508 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 4 20:27:51.995642 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 20:27:52.001195 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 4 20:27:52.001588 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 20:27:52.005302 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 4 20:27:52.005391 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 20:27:52.006599 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 20:27:52.006855 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 20:27:52.099228 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 4 20:27:52.101231 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 4 20:27:52.108210 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 4 20:27:52.137468 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 4 20:27:52.384116 systemd[1]: Switching root.
Sep 4 20:27:52.474273 systemd-journald[184]: Journal stopped
Sep 4 20:27:53.898929 systemd-journald[184]: Received SIGTERM from PID 1 (systemd).
Sep 4 20:27:53.899036 kernel: SELinux: policy capability network_peer_controls=1
Sep 4 20:27:53.899165 kernel: SELinux: policy capability open_perms=1
Sep 4 20:27:53.899189 kernel: SELinux: policy capability extended_socket_class=1
Sep 4 20:27:53.899207 kernel: SELinux: policy capability always_check_network=0
Sep 4 20:27:53.899250 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 4 20:27:53.899273 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 4 20:27:53.899284 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 4 20:27:53.899300 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 4 20:27:53.899329 kernel: audit: type=1403 audit(1725481672.730:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 4 20:27:53.899347 systemd[1]: Successfully loaded SELinux policy in 53.668ms.
Sep 4 20:27:53.899381 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 15.892ms.
Sep 4 20:27:53.899404 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 4 20:27:53.899424 systemd[1]: Detected virtualization kvm.
Sep 4 20:27:53.899450 systemd[1]: Detected architecture x86-64.
Sep 4 20:27:53.899463 systemd[1]: Detected first boot.
Sep 4 20:27:53.899483 systemd[1]: Hostname set to .
Sep 4 20:27:53.899502 systemd[1]: Initializing machine ID from VM UUID.
Sep 4 20:27:53.899514 zram_generator::config[1034]: No configuration found.
Sep 4 20:27:53.899538 systemd[1]: Populated /etc with preset unit settings.
Sep 4 20:27:53.899567 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 4 20:27:53.899597 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 4 20:27:53.899621 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 4 20:27:53.899634 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 4 20:27:53.899661 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 4 20:27:53.899681 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 4 20:27:53.899696 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 4 20:27:53.899714 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 4 20:27:53.899739 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 4 20:27:53.899759 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 4 20:27:53.899798 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 4 20:27:53.899811 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 20:27:53.899822 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 20:27:53.899834 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 4 20:27:53.899845 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 4 20:27:53.899857 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 4 20:27:53.899869 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 20:27:53.899884 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 4 20:27:53.899895 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 20:27:53.899915 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 4 20:27:53.899927 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 4 20:27:53.899943 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 4 20:27:53.899954 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 4 20:27:53.899965 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 20:27:53.899976 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 20:27:53.899994 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 20:27:53.900005 systemd[1]: Reached target swap.target - Swaps.
Sep 4 20:27:53.900016 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 4 20:27:53.900031 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 4 20:27:53.900050 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 20:27:53.902152 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 20:27:53.902190 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 20:27:53.902202 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 4 20:27:53.902213 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 4 20:27:53.902224 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 4 20:27:53.902247 systemd[1]: Mounting media.mount - External Media Directory...
Sep 4 20:27:53.902258 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 20:27:53.902270 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 4 20:27:53.902288 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 4 20:27:53.902338 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 4 20:27:53.902377 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 4 20:27:53.902398 systemd[1]: Reached target machines.target - Containers.
Sep 4 20:27:53.902416 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 4 20:27:53.902447 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 20:27:53.902460 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 20:27:53.902472 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 4 20:27:53.902489 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 20:27:53.902509 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 4 20:27:53.902529 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 20:27:53.902545 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 4 20:27:53.902565 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 20:27:53.902587 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 4 20:27:53.902605 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 4 20:27:53.902631 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 4 20:27:53.902643 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 4 20:27:53.902663 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 4 20:27:53.902682 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 20:27:53.902694 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 20:27:53.902705 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 4 20:27:53.902727 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 4 20:27:53.902749 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 20:27:53.902767 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 4 20:27:53.902778 systemd[1]: Stopped verity-setup.service.
Sep 4 20:27:53.902790 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 20:27:53.902801 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 4 20:27:53.902812 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 4 20:27:53.902823 systemd[1]: Mounted media.mount - External Media Directory.
Sep 4 20:27:53.902834 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 4 20:27:53.902851 kernel: fuse: init (API version 7.39)
Sep 4 20:27:53.902864 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 4 20:27:53.902879 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 4 20:27:53.902890 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 20:27:53.902909 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 4 20:27:53.902920 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 4 20:27:53.902931 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 20:27:53.902942 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 20:27:53.902953 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 20:27:53.902969 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 20:27:53.902980 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 4 20:27:53.902998 kernel: loop: module loaded
Sep 4 20:27:53.903015 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 4 20:27:53.903026 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 20:27:53.903045 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 20:27:53.903056 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 20:27:53.910569 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 4 20:27:53.910667 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 4 20:27:53.910681 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 4 20:27:53.910744 systemd-journald[1113]: Collecting audit messages is disabled.
Sep 4 20:27:53.910799 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 4 20:27:53.910820 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 4 20:27:53.910836 systemd-journald[1113]: Journal started
Sep 4 20:27:53.910878 systemd-journald[1113]: Runtime Journal (/run/log/journal/41058eb7ade0420db098fc5e2b6d1a25) is 4.9M, max 39.3M, 34.4M free.
Sep 4 20:27:53.441552 systemd[1]: Queued start job for default target multi-user.target.
Sep 4 20:27:53.464793 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 4 20:27:53.465305 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 4 20:27:53.933089 kernel: ACPI: bus type drm_connector registered
Sep 4 20:27:53.935131 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 20:27:53.949113 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 4 20:27:53.953105 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 20:27:53.960366 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 4 20:27:53.962827 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 4 20:27:53.963144 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 4 20:27:53.964964 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 4 20:27:53.967522 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 4 20:27:53.968544 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 4 20:27:53.979352 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 20:27:53.999007 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 4 20:27:54.000010 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 20:27:54.002894 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Sep 4 20:27:54.010886 systemd-tmpfiles[1131]: ACLs are not supported, ignoring.
Sep 4 20:27:54.010914 systemd-tmpfiles[1131]: ACLs are not supported, ignoring.
Sep 4 20:27:54.011336 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 4 20:27:54.023464 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 4 20:27:54.024334 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 20:27:54.027284 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 4 20:27:54.030383 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 4 20:27:54.033221 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 4 20:27:54.041345 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 4 20:27:54.044340 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 4 20:27:54.049316 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 4 20:27:54.052021 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 20:27:54.053210 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 20:27:54.055795 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 4 20:27:54.078440 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 4 20:27:54.089206 systemd-journald[1113]: Time spent on flushing to /var/log/journal/41058eb7ade0420db098fc5e2b6d1a25 is 117.682ms for 994 entries.
Sep 4 20:27:54.089206 systemd-journald[1113]: System Journal (/var/log/journal/41058eb7ade0420db098fc5e2b6d1a25) is 8.0M, max 195.6M, 187.6M free.
Sep 4 20:27:54.245486 systemd-journald[1113]: Received client request to flush runtime journal.
Sep 4 20:27:54.245579 kernel: loop0: detected capacity change from 0 to 80568
Sep 4 20:27:54.245605 kernel: block loop0: the capability attribute has been deprecated.
Sep 4 20:27:54.245722 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 4 20:27:54.245747 kernel: loop1: detected capacity change from 0 to 139904
Sep 4 20:27:54.245777 kernel: loop2: detected capacity change from 0 to 8
Sep 4 20:27:54.123637 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 4 20:27:54.125621 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 4 20:27:54.140573 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Sep 4 20:27:54.193749 udevadm[1161]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Sep 4 20:27:54.241033 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 4 20:27:54.241903 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Sep 4 20:27:54.253438 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 4 20:27:54.258696 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 4 20:27:54.269151 kernel: loop3: detected capacity change from 0 to 209816
Sep 4 20:27:54.272291 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 4 20:27:54.306113 kernel: loop4: detected capacity change from 0 to 80568
Sep 4 20:27:54.333668 kernel: loop5: detected capacity change from 0 to 139904
Sep 4 20:27:54.352894 systemd-tmpfiles[1177]: ACLs are not supported, ignoring.
Sep 4 20:27:54.352917 systemd-tmpfiles[1177]: ACLs are not supported, ignoring.
Sep 4 20:27:54.359198 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 20:27:54.369138 kernel: loop6: detected capacity change from 0 to 8
Sep 4 20:27:54.373172 kernel: loop7: detected capacity change from 0 to 209816
Sep 4 20:27:54.413881 (sd-merge)[1179]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
Sep 4 20:27:54.414670 (sd-merge)[1179]: Merged extensions into '/usr'.
Sep 4 20:27:54.426243 systemd[1]: Reloading requested from client PID 1160 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 4 20:27:54.426261 systemd[1]: Reloading...
Sep 4 20:27:54.545115 zram_generator::config[1203]: No configuration found.
Sep 4 20:27:54.857703 ldconfig[1156]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 4 20:27:54.936386 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 20:27:54.986745 systemd[1]: Reloading finished in 559 ms.
Sep 4 20:27:55.012665 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 4 20:27:55.014053 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 4 20:27:55.026518 systemd[1]: Starting ensure-sysext.service...
Sep 4 20:27:55.029978 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Sep 4 20:27:55.048303 systemd[1]: Reloading requested from client PID 1248 ('systemctl') (unit ensure-sysext.service)...
Sep 4 20:27:55.048330 systemd[1]: Reloading...
Sep 4 20:27:55.072383 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 4 20:27:55.073642 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 4 20:27:55.075035 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 4 20:27:55.075430 systemd-tmpfiles[1249]: ACLs are not supported, ignoring.
Sep 4 20:27:55.075545 systemd-tmpfiles[1249]: ACLs are not supported, ignoring.
Sep 4 20:27:55.079167 systemd-tmpfiles[1249]: Detected autofs mount point /boot during canonicalization of boot.
Sep 4 20:27:55.079180 systemd-tmpfiles[1249]: Skipping /boot
Sep 4 20:27:55.095575 systemd-tmpfiles[1249]: Detected autofs mount point /boot during canonicalization of boot.
Sep 4 20:27:55.095786 systemd-tmpfiles[1249]: Skipping /boot
Sep 4 20:27:55.152108 zram_generator::config[1274]: No configuration found.
Sep 4 20:27:55.283224 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 20:27:55.344286 systemd[1]: Reloading finished in 295 ms.
Sep 4 20:27:55.365832 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 4 20:27:55.371918 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Sep 4 20:27:55.387351 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 4 20:27:55.390269 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 4 20:27:55.398500 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 4 20:27:55.407374 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 4 20:27:55.419745 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 20:27:55.423267 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 4 20:27:55.432318 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 20:27:55.432531 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 20:27:55.440488 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 20:27:55.450492 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 20:27:55.454403 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 20:27:55.455269 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 20:27:55.458945 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 4 20:27:55.461170 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 20:27:55.467403 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 20:27:55.469038 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 20:27:55.469296 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 20:27:55.469379 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 20:27:55.481001 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 4 20:27:55.485966 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 20:27:55.486524 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 20:27:55.491478 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 4 20:27:55.493269 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 20:27:55.505380 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 4 20:27:55.506024 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 20:27:55.508152 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 4 20:27:55.509606 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 20:27:55.510236 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 20:27:55.512466 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 20:27:55.512620 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 20:27:55.518451 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 20:27:55.518861 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 20:27:55.520983 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 4 20:27:55.521168 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 4 20:27:55.538599 systemd[1]: Finished ensure-sysext.service.
Sep 4 20:27:55.542727 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 4 20:27:55.542893 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 4 20:27:55.549329 systemd-udevd[1326]: Using default interface naming scheme 'v255'.
Sep 4 20:27:55.553434 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 4 20:27:55.554458 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 4 20:27:55.555926 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 4 20:27:55.562216 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 4 20:27:55.591389 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 4 20:27:55.594223 augenrules[1356]: No rules
Sep 4 20:27:55.597218 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 4 20:27:55.604166 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 20:27:55.614244 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 4 20:27:55.701427 systemd-networkd[1370]: lo: Link UP
Sep 4 20:27:55.701853 systemd-networkd[1370]: lo: Gained carrier
Sep 4 20:27:55.702718 systemd-networkd[1370]: Enumeration completed
Sep 4 20:27:55.703168 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 4 20:27:55.713359 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 4 20:27:55.773737 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 4 20:27:55.775549 systemd[1]: Reached target time-set.target - System Time Set.
Sep 4 20:27:55.781928 systemd-resolved[1323]: Positive Trust Anchors:
Sep 4 20:27:55.781945 systemd-resolved[1323]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 4 20:27:55.781981 systemd-resolved[1323]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Sep 4 20:27:55.788174 systemd-resolved[1323]: Using system hostname 'ci-3975.2.1-5-b3ba9b7107'.
Sep 4 20:27:55.790411 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 4 20:27:55.791610 systemd[1]: Reached target network.target - Network.
Sep 4 20:27:55.792676 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 4 20:27:55.795345 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 4 20:27:55.816382 systemd[1]: Mounting media-configdrive.mount - /media/configdrive...
Sep 4 20:27:55.816950 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 20:27:55.817143 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 20:27:55.821107 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1385)
Sep 4 20:27:55.823290 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 20:27:55.829212 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 20:27:55.834287 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 20:27:55.834959 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 20:27:55.835000 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 4 20:27:55.835016 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 20:27:55.855479 systemd-networkd[1370]: eth1: Configuring with /run/systemd/network/10-46:5a:6b:5d:9a:8a.network.
Sep 4 20:27:55.865424 systemd-networkd[1370]: eth1: Link UP
Sep 4 20:27:55.865585 systemd-networkd[1370]: eth1: Gained carrier
Sep 4 20:27:55.868280 kernel: ISO 9660 Extensions: RRIP_1991A
Sep 4 20:27:55.870144 systemd-networkd[1370]: eth0: Configuring with /run/systemd/network/10-82:d6:7a:c4:56:e3.network.
Sep 4 20:27:55.871324 systemd-networkd[1370]: eth0: Link UP
Sep 4 20:27:55.871427 systemd-networkd[1370]: eth0: Gained carrier
Sep 4 20:27:55.871688 systemd[1]: Mounted media-configdrive.mount - /media/configdrive.
Sep 4 20:27:55.877220 systemd-timesyncd[1350]: Network configuration changed, trying to establish connection.
Sep 4 20:27:55.894486 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 20:27:55.894669 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 20:27:55.907088 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1372)
Sep 4 20:27:55.912753 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 20:27:55.913560 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 20:27:55.917393 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 20:27:55.917670 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 20:27:55.921746 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 4 20:27:55.921830 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 4 20:27:55.944100 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Sep 4 20:27:55.951220 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Sep 4 20:27:55.953084 kernel: ACPI: button: Power Button [PWRF]
Sep 4 20:27:55.958833 systemd-timesyncd[1350]: Contacted time server 5.161.184.148:123 (0.flatcar.pool.ntp.org).
Sep 4 20:27:55.959214 systemd-timesyncd[1350]: Initial clock synchronization to Wed 2024-09-04 20:27:55.916413 UTC.
Sep 4 20:27:55.979636 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 4 20:27:55.987957 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Sep 4 20:27:55.992385 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 4 20:27:56.012427 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 4 20:27:56.048130 kernel: mousedev: PS/2 mouse device common for all mice
Sep 4 20:27:56.070608 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 20:27:56.128108 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Sep 4 20:27:56.133717 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Sep 4 20:27:56.140097 kernel: Console: switching to colour dummy device 80x25
Sep 4 20:27:56.140214 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Sep 4 20:27:56.140282 kernel: [drm] features: -context_init
Sep 4 20:27:56.144111 kernel: [drm] number of scanouts: 1
Sep 4 20:27:56.148330 kernel: [drm] number of cap sets: 0
Sep 4 20:27:56.148412 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Sep 4 20:27:56.166596 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Sep 4 20:27:56.166704 kernel: Console: switching to colour frame buffer device 128x48
Sep 4 20:27:56.180101 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Sep 4 20:27:56.184950 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 20:27:56.185264 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 20:27:56.193451 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 20:27:56.208047 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 20:27:56.208292 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 20:27:56.226272 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 20:27:56.309519 kernel: EDAC MC: Ver: 3.0.0
Sep 4 20:27:56.330338 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 20:27:56.337013 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep 4 20:27:56.343389 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Sep 4 20:27:56.372097 lvm[1430]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 4 20:27:56.405815 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Sep 4 20:27:56.407120 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 20:27:56.407255 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 4 20:27:56.407502 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 4 20:27:56.407652 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 4 20:27:56.408080 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 4 20:27:56.408645 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 4 20:27:56.410210 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 4 20:27:56.410291 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 4 20:27:56.410319 systemd[1]: Reached target paths.target - Path Units.
Sep 4 20:27:56.410371 systemd[1]: Reached target timers.target - Timer Units.
Sep 4 20:27:56.411816 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 4 20:27:56.413783 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 4 20:27:56.424473 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 4 20:27:56.428537 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep 4 20:27:56.432036 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 4 20:27:56.434156 systemd[1]: Reached target sockets.target - Socket Units.
Sep 4 20:27:56.434923 systemd[1]: Reached target basic.target - Basic System.
Sep 4 20:27:56.437759 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 4 20:27:56.437809 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 4 20:27:56.444231 lvm[1434]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 4 20:27:56.444332 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 4 20:27:56.450992 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Sep 4 20:27:56.461929 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 4 20:27:56.468295 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 4 20:27:56.475344 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 4 20:27:56.477570 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 4 20:27:56.485337 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 4 20:27:56.492405 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 4 20:27:56.506315 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 4 20:27:56.511268 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 4 20:27:56.513446 jq[1438]: false
Sep 4 20:27:56.544170 extend-filesystems[1439]: Found loop4
Sep 4 20:27:56.544170 extend-filesystems[1439]: Found loop5
Sep 4 20:27:56.544170 extend-filesystems[1439]: Found loop6
Sep 4 20:27:56.544170 extend-filesystems[1439]: Found loop7
Sep 4 20:27:56.544170 extend-filesystems[1439]: Found vda
Sep 4 20:27:56.544170 extend-filesystems[1439]: Found vda1
Sep 4 20:27:56.544170 extend-filesystems[1439]: Found vda2
Sep 4 20:27:56.544170 extend-filesystems[1439]: Found vda3
Sep 4 20:27:56.544170 extend-filesystems[1439]: Found usr
Sep 4 20:27:56.544170 extend-filesystems[1439]: Found vda4
Sep 4 20:27:56.544170 extend-filesystems[1439]: Found vda6
Sep 4 20:27:56.544170 extend-filesystems[1439]: Found vda7
Sep 4 20:27:56.544170 extend-filesystems[1439]: Found vda9
Sep 4 20:27:56.544170 extend-filesystems[1439]: Checking size of /dev/vda9
Sep 4 20:27:56.685731 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1384)
Sep 4 20:27:56.685819 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks
Sep 4 20:27:56.537342 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 4 20:27:56.687317 coreos-metadata[1436]: Sep 04 20:27:56.629 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Sep 4 20:27:56.687317 coreos-metadata[1436]: Sep 04 20:27:56.655 INFO Fetch successful
Sep 4 20:27:56.691456 extend-filesystems[1439]: Resized partition /dev/vda9
Sep 4 20:27:56.624022 dbus-daemon[1437]: [system] SELinux support is enabled
Sep 4 20:27:56.542136 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 4 20:27:56.700913 extend-filesystems[1462]: resize2fs 1.47.0 (5-Feb-2023)
Sep 4 20:27:56.542793 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 4 20:27:56.553294 systemd[1]: Starting update-engine.service - Update Engine...
Sep 4 20:27:56.723212 update_engine[1451]: I0904 20:27:56.711511 1451 main.cc:92] Flatcar Update Engine starting
Sep 4 20:27:56.561296 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 4 20:27:56.574195 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Sep 4 20:27:56.620340 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 4 20:27:56.733774 jq[1453]: true
Sep 4 20:27:56.620620 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 4 20:27:56.624675 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 4 20:27:56.739465 update_engine[1451]: I0904 20:27:56.737623 1451 update_check_scheduler.cc:74] Next update check in 8m42s
Sep 4 20:27:56.626098 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 4 20:27:56.650121 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 4 20:27:56.659575 systemd[1]: motdgen.service: Deactivated successfully.
Sep 4 20:27:56.660189 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 4 20:27:56.732809 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 4 20:27:56.732861 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 4 20:27:56.736012 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 4 20:27:56.736120 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean).
Sep 4 20:27:56.736145 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 4 20:27:56.742057 systemd[1]: Started update-engine.service - Update Engine.
Sep 4 20:27:56.767615 (ntainerd)[1468]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 4 20:27:56.786117 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Sep 4 20:27:56.789181 tar[1461]: linux-amd64/helm
Sep 4 20:27:56.800349 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 4 20:27:56.805039 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Sep 4 20:27:56.806406 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 4 20:27:56.816183 extend-filesystems[1462]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 4 20:27:56.816183 extend-filesystems[1462]: old_desc_blocks = 1, new_desc_blocks = 8
Sep 4 20:27:56.816183 extend-filesystems[1462]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Sep 4 20:27:56.840111 extend-filesystems[1439]: Resized filesystem in /dev/vda9
Sep 4 20:27:56.840111 extend-filesystems[1439]: Found vdb
Sep 4 20:27:56.840928 jq[1467]: true
Sep 4 20:27:56.818480 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 4 20:27:56.819047 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 4 20:27:56.897733 systemd-logind[1447]: New seat seat0.
Sep 4 20:27:56.902623 systemd-logind[1447]: Watching system buttons on /dev/input/event1 (Power Button)
Sep 4 20:27:56.902649 systemd-logind[1447]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 4 20:27:56.904195 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 4 20:27:56.930395 bash[1501]: Updated "/home/core/.ssh/authorized_keys"
Sep 4 20:27:56.932524 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 4 20:27:56.946346 systemd[1]: Starting sshkeys.service...
Sep 4 20:27:56.985388 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Sep 4 20:27:56.999053 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Sep 4 20:27:57.078372 sshd_keygen[1472]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 4 20:27:57.083834 coreos-metadata[1504]: Sep 04 20:27:57.081 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Sep 4 20:27:57.132129 locksmithd[1482]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 4 20:27:57.136528 coreos-metadata[1504]: Sep 04 20:27:57.135 INFO Fetch successful
Sep 4 20:27:57.159883 unknown[1504]: wrote ssh authorized keys file for user: core
Sep 4 20:27:57.166157 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 4 20:27:57.189731 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 4 20:27:57.243705 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Sep 4 20:27:57.245793 update-ssh-keys[1524]: Updated "/home/core/.ssh/authorized_keys"
Sep 4 20:27:57.246488 systemd[1]: issuegen.service: Deactivated successfully.
Sep 4 20:27:57.246748 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 4 20:27:57.253467 systemd[1]: Finished sshkeys.service.
Sep 4 20:27:57.274231 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 4 20:27:57.338265 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 4 20:27:57.352833 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 4 20:27:57.359477 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 4 20:27:57.361754 systemd[1]: Reached target getty.target - Login Prompts.
Sep 4 20:27:57.394473 containerd[1468]: time="2024-09-04T20:27:57.394231398Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17
Sep 4 20:27:57.453185 containerd[1468]: time="2024-09-04T20:27:57.452141530Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Sep 4 20:27:57.453185 containerd[1468]: time="2024-09-04T20:27:57.452298798Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 4 20:27:57.458948 containerd[1468]: time="2024-09-04T20:27:57.456827828Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.48-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 4 20:27:57.458948 containerd[1468]: time="2024-09-04T20:27:57.456910952Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 4 20:27:57.462099 containerd[1468]: time="2024-09-04T20:27:57.459594818Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 4 20:27:57.462099 containerd[1468]: time="2024-09-04T20:27:57.459639998Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 4 20:27:57.462099 containerd[1468]: time="2024-09-04T20:27:57.459802658Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Sep 4 20:27:57.462099 containerd[1468]: time="2024-09-04T20:27:57.459855893Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Sep 4 20:27:57.462099 containerd[1468]: time="2024-09-04T20:27:57.459868929Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 4 20:27:57.462099 containerd[1468]: time="2024-09-04T20:27:57.459932358Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 4 20:27:57.463107 containerd[1468]: time="2024-09-04T20:27:57.462611059Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 4 20:27:57.463107 containerd[1468]: time="2024-09-04T20:27:57.462652600Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Sep 4 20:27:57.463107 containerd[1468]: time="2024-09-04T20:27:57.462664391Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 4 20:27:57.463107 containerd[1468]: time="2024-09-04T20:27:57.462816608Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 4 20:27:57.463107 containerd[1468]: time="2024-09-04T20:27:57.462831125Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 4 20:27:57.463107 containerd[1468]: time="2024-09-04T20:27:57.462901530Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Sep 4 20:27:57.463107 containerd[1468]: time="2024-09-04T20:27:57.462913085Z" level=info msg="metadata content store policy set" policy=shared
Sep 4 20:27:57.470363 containerd[1468]: time="2024-09-04T20:27:57.470315331Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 4 20:27:57.470737 containerd[1468]: time="2024-09-04T20:27:57.470713539Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 4 20:27:57.470805 containerd[1468]: time="2024-09-04T20:27:57.470794448Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 4 20:27:57.470892 containerd[1468]: time="2024-09-04T20:27:57.470880103Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Sep 4 20:27:57.471350 containerd[1468]: time="2024-09-04T20:27:57.471329285Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Sep 4 20:27:57.471423 containerd[1468]: time="2024-09-04T20:27:57.471412766Z" level=info msg="NRI interface is disabled by configuration."
Sep 4 20:27:57.471466 containerd[1468]: time="2024-09-04T20:27:57.471456247Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 4 20:27:57.471881 containerd[1468]: time="2024-09-04T20:27:57.471856331Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Sep 4 20:27:57.472364 containerd[1468]: time="2024-09-04T20:27:57.472344314Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Sep 4 20:27:57.472458 containerd[1468]: time="2024-09-04T20:27:57.472432440Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Sep 4 20:27:57.472603 containerd[1468]: time="2024-09-04T20:27:57.472587470Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Sep 4 20:27:57.474538 containerd[1468]: time="2024-09-04T20:27:57.472963459Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 4 20:27:57.474538 containerd[1468]: time="2024-09-04T20:27:57.472999679Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 4 20:27:57.474538 containerd[1468]: time="2024-09-04T20:27:57.473025242Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 4 20:27:57.474538 containerd[1468]: time="2024-09-04T20:27:57.473042937Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 4 20:27:57.474538 containerd[1468]: time="2024-09-04T20:27:57.473075552Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 4 20:27:57.474538 containerd[1468]: time="2024-09-04T20:27:57.473090798Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 4 20:27:57.474538 containerd[1468]: time="2024-09-04T20:27:57.473104325Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 4 20:27:57.474538 containerd[1468]: time="2024-09-04T20:27:57.473117288Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 4 20:27:57.474538 containerd[1468]: time="2024-09-04T20:27:57.473293147Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 4 20:27:57.474538 containerd[1468]: time="2024-09-04T20:27:57.473576818Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 4 20:27:57.474538 containerd[1468]: time="2024-09-04T20:27:57.473607599Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 4 20:27:57.474538 containerd[1468]: time="2024-09-04T20:27:57.473620869Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Sep 4 20:27:57.474538 containerd[1468]: time="2024-09-04T20:27:57.473646657Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep 4 20:27:57.474538 containerd[1468]: time="2024-09-04T20:27:57.473734552Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep 4 20:27:57.475138 containerd[1468]: time="2024-09-04T20:27:57.473750892Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 4 20:27:57.475138 containerd[1468]: time="2024-09-04T20:27:57.473762743Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 4 20:27:57.475138 containerd[1468]: time="2024-09-04T20:27:57.473773909Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 4 20:27:57.475138 containerd[1468]: time="2024-09-04T20:27:57.473788568Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep 4 20:27:57.475138 containerd[1468]: time="2024-09-04T20:27:57.473802095Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 4 20:27:57.475138 containerd[1468]: time="2024-09-04T20:27:57.473818463Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 4 20:27:57.475138 containerd[1468]: time="2024-09-04T20:27:57.473830872Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 4 20:27:57.475138 containerd[1468]: time="2024-09-04T20:27:57.473844305Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 4 20:27:57.475138 containerd[1468]: time="2024-09-04T20:27:57.474000096Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Sep 4 20:27:57.475138 containerd[1468]: time="2024-09-04T20:27:57.474016074Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Sep 4 20:27:57.475138 containerd[1468]: time="2024-09-04T20:27:57.474044602Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep 4 20:27:57.478090 containerd[1468]: time="2024-09-04T20:27:57.476375337Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Sep 4 20:27:57.478090 containerd[1468]: time="2024-09-04T20:27:57.476437474Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep 4 20:27:57.478090 containerd[1468]: time="2024-09-04T20:27:57.476467324Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..."
type=io.containerd.grpc.v1 Sep 4 20:27:57.478090 containerd[1468]: time="2024-09-04T20:27:57.476490060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 4 20:27:57.478090 containerd[1468]: time="2024-09-04T20:27:57.476509781Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 4 20:27:57.478356 containerd[1468]: time="2024-09-04T20:27:57.476955334Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s 
EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 4 20:27:57.478356 containerd[1468]: time="2024-09-04T20:27:57.477097544Z" level=info msg="Connect containerd service" Sep 4 20:27:57.478356 containerd[1468]: time="2024-09-04T20:27:57.477152635Z" level=info msg="using legacy CRI server" Sep 4 20:27:57.478356 containerd[1468]: time="2024-09-04T20:27:57.477163688Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 4 20:27:57.478356 containerd[1468]: time="2024-09-04T20:27:57.477304733Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 4 20:27:57.478977 containerd[1468]: time="2024-09-04T20:27:57.478938143Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 20:27:57.479213 containerd[1468]: time="2024-09-04T20:27:57.479188491Z" level=info msg="loading plugin 
\"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 4 20:27:57.479334 containerd[1468]: time="2024-09-04T20:27:57.479311151Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 4 20:27:57.479411 containerd[1468]: time="2024-09-04T20:27:57.479394717Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 4 20:27:57.479483 containerd[1468]: time="2024-09-04T20:27:57.479466488Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 4 20:27:57.480211 containerd[1468]: time="2024-09-04T20:27:57.480154085Z" level=info msg="Start subscribing containerd event" Sep 4 20:27:57.481107 containerd[1468]: time="2024-09-04T20:27:57.481055848Z" level=info msg="Start recovering state" Sep 4 20:27:57.481348 containerd[1468]: time="2024-09-04T20:27:57.481329662Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 4 20:27:57.481546 containerd[1468]: time="2024-09-04T20:27:57.481413249Z" level=info msg="Start event monitor" Sep 4 20:27:57.481546 containerd[1468]: time="2024-09-04T20:27:57.481533090Z" level=info msg="Start snapshots syncer" Sep 4 20:27:57.481630 containerd[1468]: time="2024-09-04T20:27:57.481550632Z" level=info msg="Start cni network conf syncer for default" Sep 4 20:27:57.481630 containerd[1468]: time="2024-09-04T20:27:57.481563429Z" level=info msg="Start streaming server" Sep 4 20:27:57.481767 containerd[1468]: time="2024-09-04T20:27:57.481740790Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Sep 4 20:27:57.481930 containerd[1468]: time="2024-09-04T20:27:57.481911274Z" level=info msg="containerd successfully booted in 0.090525s" Sep 4 20:27:57.483638 systemd[1]: Started containerd.service - containerd container runtime. Sep 4 20:27:57.485964 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 4 20:27:57.495398 systemd[1]: Started sshd@0-143.198.146.52:22-139.178.68.195:50808.service - OpenSSH per-connection server daemon (139.178.68.195:50808). Sep 4 20:27:57.516271 systemd-networkd[1370]: eth1: Gained IPv6LL Sep 4 20:27:57.521133 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 4 20:27:57.525941 systemd[1]: Reached target network-online.target - Network is Online. Sep 4 20:27:57.538424 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 20:27:57.549546 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 4 20:27:57.581010 systemd-networkd[1370]: eth0: Gained IPv6LL Sep 4 20:27:57.612076 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 4 20:27:57.622769 sshd[1541]: Accepted publickey for core from 139.178.68.195 port 50808 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY Sep 4 20:27:57.625792 sshd[1541]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 20:27:57.644675 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 4 20:27:57.655479 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 4 20:27:57.667096 systemd-logind[1447]: New session 1 of user core. Sep 4 20:27:57.705310 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 4 20:27:57.720611 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Sep 4 20:27:57.733761 (systemd)[1556]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 4 20:27:57.912172 systemd[1556]: Queued start job for default target default.target. Sep 4 20:27:57.919860 systemd[1556]: Created slice app.slice - User Application Slice. Sep 4 20:27:57.919897 systemd[1556]: Reached target paths.target - Paths. Sep 4 20:27:57.919912 systemd[1556]: Reached target timers.target - Timers. Sep 4 20:27:57.923489 systemd[1556]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 4 20:27:57.969287 systemd[1556]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 4 20:27:57.969456 systemd[1556]: Reached target sockets.target - Sockets. Sep 4 20:27:57.969478 systemd[1556]: Reached target basic.target - Basic System. Sep 4 20:27:57.969545 systemd[1556]: Reached target default.target - Main User Target. Sep 4 20:27:57.969587 systemd[1556]: Startup finished in 225ms. Sep 4 20:27:57.970257 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 4 20:27:57.979339 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 4 20:27:58.003923 tar[1461]: linux-amd64/LICENSE Sep 4 20:27:58.003923 tar[1461]: linux-amd64/README.md Sep 4 20:27:58.020112 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 4 20:27:58.070580 systemd[1]: Started sshd@1-143.198.146.52:22-139.178.68.195:50810.service - OpenSSH per-connection server daemon (139.178.68.195:50810). Sep 4 20:27:58.151095 sshd[1570]: Accepted publickey for core from 139.178.68.195 port 50810 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY Sep 4 20:27:58.153374 sshd[1570]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 20:27:58.160622 systemd-logind[1447]: New session 2 of user core. Sep 4 20:27:58.169422 systemd[1]: Started session-2.scope - Session 2 of User core. 
Sep 4 20:27:58.241414 sshd[1570]: pam_unix(sshd:session): session closed for user core Sep 4 20:27:58.252184 systemd[1]: sshd@1-143.198.146.52:22-139.178.68.195:50810.service: Deactivated successfully. Sep 4 20:27:58.256376 systemd[1]: session-2.scope: Deactivated successfully. Sep 4 20:27:58.258774 systemd-logind[1447]: Session 2 logged out. Waiting for processes to exit. Sep 4 20:27:58.266559 systemd[1]: Started sshd@2-143.198.146.52:22-139.178.68.195:50818.service - OpenSSH per-connection server daemon (139.178.68.195:50818). Sep 4 20:27:58.271116 systemd-logind[1447]: Removed session 2. Sep 4 20:27:58.336478 sshd[1577]: Accepted publickey for core from 139.178.68.195 port 50818 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY Sep 4 20:27:58.339321 sshd[1577]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 20:27:58.345516 systemd-logind[1447]: New session 3 of user core. Sep 4 20:27:58.351358 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 4 20:27:58.421810 sshd[1577]: pam_unix(sshd:session): session closed for user core Sep 4 20:27:58.425268 systemd[1]: sshd@2-143.198.146.52:22-139.178.68.195:50818.service: Deactivated successfully. Sep 4 20:27:58.427833 systemd[1]: session-3.scope: Deactivated successfully. Sep 4 20:27:58.429865 systemd-logind[1447]: Session 3 logged out. Waiting for processes to exit. Sep 4 20:27:58.431186 systemd-logind[1447]: Removed session 3. Sep 4 20:27:58.796474 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 20:27:58.796847 (kubelet)[1588]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 20:27:58.799022 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 4 20:27:58.802997 systemd[1]: Startup finished in 1.527s (kernel) + 5.990s (initrd) + 6.125s (userspace) = 13.642s. 
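The "Startup finished" record above breaks boot time into kernel, initrd, and userspace phases. A quick sketch (figures copied from the log record; the dict layout is just for illustration) confirming that the phases sum to the reported total:

```python
# Per-phase boot timings copied from the systemd "Startup finished" record above.
phases = {"kernel": 1.527, "initrd": 5.990, "userspace": 6.125}

# systemd reports the total as the sum of the three phases: 13.642s.
total = round(sum(phases.values()), 3)  # round() clears float noise
print(f"{total}s")  # 13.642s
```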
Sep 4 20:27:59.616540 kubelet[1588]: E0904 20:27:59.616392 1588 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 20:27:59.619498 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 20:27:59.619660 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 20:27:59.620005 systemd[1]: kubelet.service: Consumed 1.516s CPU time. Sep 4 20:28:08.432553 systemd[1]: Started sshd@3-143.198.146.52:22-139.178.68.195:49678.service - OpenSSH per-connection server daemon (139.178.68.195:49678). Sep 4 20:28:08.479215 sshd[1601]: Accepted publickey for core from 139.178.68.195 port 49678 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY Sep 4 20:28:08.480989 sshd[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 20:28:08.487575 systemd-logind[1447]: New session 4 of user core. Sep 4 20:28:08.493445 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 4 20:28:08.559055 sshd[1601]: pam_unix(sshd:session): session closed for user core Sep 4 20:28:08.575684 systemd[1]: sshd@3-143.198.146.52:22-139.178.68.195:49678.service: Deactivated successfully. Sep 4 20:28:08.579816 systemd[1]: session-4.scope: Deactivated successfully. Sep 4 20:28:08.581555 systemd-logind[1447]: Session 4 logged out. Waiting for processes to exit. Sep 4 20:28:08.589717 systemd[1]: Started sshd@4-143.198.146.52:22-139.178.68.195:49692.service - OpenSSH per-connection server daemon (139.178.68.195:49692). Sep 4 20:28:08.591207 systemd-logind[1447]: Removed session 4. 
Sep 4 20:28:08.648199 sshd[1608]: Accepted publickey for core from 139.178.68.195 port 49692 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY Sep 4 20:28:08.650764 sshd[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 20:28:08.658178 systemd-logind[1447]: New session 5 of user core. Sep 4 20:28:08.666446 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 4 20:28:08.730412 sshd[1608]: pam_unix(sshd:session): session closed for user core Sep 4 20:28:08.740961 systemd[1]: sshd@4-143.198.146.52:22-139.178.68.195:49692.service: Deactivated successfully. Sep 4 20:28:08.743542 systemd[1]: session-5.scope: Deactivated successfully. Sep 4 20:28:08.746389 systemd-logind[1447]: Session 5 logged out. Waiting for processes to exit. Sep 4 20:28:08.752878 systemd[1]: Started sshd@5-143.198.146.52:22-139.178.68.195:49702.service - OpenSSH per-connection server daemon (139.178.68.195:49702). Sep 4 20:28:08.755376 systemd-logind[1447]: Removed session 5. Sep 4 20:28:08.806897 sshd[1615]: Accepted publickey for core from 139.178.68.195 port 49702 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY Sep 4 20:28:08.809856 sshd[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 20:28:08.818203 systemd-logind[1447]: New session 6 of user core. Sep 4 20:28:08.825463 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 4 20:28:08.895319 sshd[1615]: pam_unix(sshd:session): session closed for user core Sep 4 20:28:08.904630 systemd[1]: sshd@5-143.198.146.52:22-139.178.68.195:49702.service: Deactivated successfully. Sep 4 20:28:08.906789 systemd[1]: session-6.scope: Deactivated successfully. Sep 4 20:28:08.909428 systemd-logind[1447]: Session 6 logged out. Waiting for processes to exit. Sep 4 20:28:08.914524 systemd[1]: Started sshd@6-143.198.146.52:22-139.178.68.195:49712.service - OpenSSH per-connection server daemon (139.178.68.195:49712). 
Sep 4 20:28:08.916310 systemd-logind[1447]: Removed session 6. Sep 4 20:28:08.971596 sshd[1622]: Accepted publickey for core from 139.178.68.195 port 49712 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY Sep 4 20:28:08.973892 sshd[1622]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 20:28:08.984948 systemd-logind[1447]: New session 7 of user core. Sep 4 20:28:08.990458 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 4 20:28:09.068618 sudo[1625]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 4 20:28:09.069045 sudo[1625]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 4 20:28:09.083163 sudo[1625]: pam_unix(sudo:session): session closed for user root Sep 4 20:28:09.087655 sshd[1622]: pam_unix(sshd:session): session closed for user core Sep 4 20:28:09.100743 systemd[1]: sshd@6-143.198.146.52:22-139.178.68.195:49712.service: Deactivated successfully. Sep 4 20:28:09.103358 systemd[1]: session-7.scope: Deactivated successfully. Sep 4 20:28:09.105324 systemd-logind[1447]: Session 7 logged out. Waiting for processes to exit. Sep 4 20:28:09.111560 systemd[1]: Started sshd@7-143.198.146.52:22-139.178.68.195:49718.service - OpenSSH per-connection server daemon (139.178.68.195:49718). Sep 4 20:28:09.113348 systemd-logind[1447]: Removed session 7. Sep 4 20:28:09.177219 sshd[1630]: Accepted publickey for core from 139.178.68.195 port 49718 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY Sep 4 20:28:09.179361 sshd[1630]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 20:28:09.186547 systemd-logind[1447]: New session 8 of user core. Sep 4 20:28:09.196430 systemd[1]: Started session-8.scope - Session 8 of User core. 
Sep 4 20:28:09.262280 sudo[1634]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 4 20:28:09.262764 sudo[1634]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 4 20:28:09.269150 sudo[1634]: pam_unix(sudo:session): session closed for user root Sep 4 20:28:09.278499 sudo[1633]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 4 20:28:09.278917 sudo[1633]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 4 20:28:09.298591 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 4 20:28:09.312738 auditctl[1637]: No rules Sep 4 20:28:09.313384 systemd[1]: audit-rules.service: Deactivated successfully. Sep 4 20:28:09.313662 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 4 20:28:09.320563 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 4 20:28:09.356564 augenrules[1655]: No rules Sep 4 20:28:09.358671 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 4 20:28:09.360401 sudo[1633]: pam_unix(sudo:session): session closed for user root Sep 4 20:28:09.365459 sshd[1630]: pam_unix(sshd:session): session closed for user core Sep 4 20:28:09.379507 systemd[1]: sshd@7-143.198.146.52:22-139.178.68.195:49718.service: Deactivated successfully. Sep 4 20:28:09.382203 systemd[1]: session-8.scope: Deactivated successfully. Sep 4 20:28:09.383458 systemd-logind[1447]: Session 8 logged out. Waiting for processes to exit. Sep 4 20:28:09.390649 systemd[1]: Started sshd@8-143.198.146.52:22-139.178.68.195:49722.service - OpenSSH per-connection server daemon (139.178.68.195:49722). Sep 4 20:28:09.393296 systemd-logind[1447]: Removed session 8. 
Sep 4 20:28:09.454132 sshd[1663]: Accepted publickey for core from 139.178.68.195 port 49722 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY Sep 4 20:28:09.456406 sshd[1663]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 20:28:09.463498 systemd-logind[1447]: New session 9 of user core. Sep 4 20:28:09.474431 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 4 20:28:09.537683 sudo[1666]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 4 20:28:09.537968 sudo[1666]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 4 20:28:09.701728 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 4 20:28:09.714647 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 4 20:28:09.714817 (dockerd)[1676]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 4 20:28:09.721656 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 20:28:09.939475 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 20:28:09.943712 (kubelet)[1685]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 20:28:10.044913 kubelet[1685]: E0904 20:28:10.044842 1685 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 20:28:10.052994 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 20:28:10.053239 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Sep 4 20:28:10.281622 dockerd[1676]: time="2024-09-04T20:28:10.281036760Z" level=info msg="Starting up" Sep 4 20:28:10.349339 dockerd[1676]: time="2024-09-04T20:28:10.348795556Z" level=info msg="Loading containers: start." Sep 4 20:28:10.517139 kernel: Initializing XFRM netlink socket Sep 4 20:28:10.648305 systemd-networkd[1370]: docker0: Link UP Sep 4 20:28:10.674860 dockerd[1676]: time="2024-09-04T20:28:10.674734697Z" level=info msg="Loading containers: done." Sep 4 20:28:10.785935 dockerd[1676]: time="2024-09-04T20:28:10.785863187Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 4 20:28:10.786589 dockerd[1676]: time="2024-09-04T20:28:10.786228101Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Sep 4 20:28:10.786589 dockerd[1676]: time="2024-09-04T20:28:10.786398168Z" level=info msg="Daemon has completed initialization" Sep 4 20:28:10.832377 dockerd[1676]: time="2024-09-04T20:28:10.832203468Z" level=info msg="API listen on /run/docker.sock" Sep 4 20:28:10.835607 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 4 20:28:12.015292 containerd[1468]: time="2024-09-04T20:28:12.015224177Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.13\"" Sep 4 20:28:12.672027 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3495471349.mount: Deactivated successfully. 
Sep 4 20:28:15.419134 containerd[1468]: time="2024-09-04T20:28:15.418196366Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.28.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 20:28:15.419679 containerd[1468]: time="2024-09-04T20:28:15.419425857Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.28.13: active requests=0, bytes read=34530735" Sep 4 20:28:15.420389 containerd[1468]: time="2024-09-04T20:28:15.420300276Z" level=info msg="ImageCreate event name:\"sha256:5447bb21fa283749e558782cbef636f1991732f1b8f345296a5204ccf0b5f7b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 20:28:15.424676 containerd[1468]: time="2024-09-04T20:28:15.424055533Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:7d2c9256ad576a0b3745b749efe7f4fa8b276ec7ef448fc0f45794ca78eb8625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 20:28:15.425695 containerd[1468]: time="2024-09-04T20:28:15.425643935Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.28.13\" with image id \"sha256:5447bb21fa283749e558782cbef636f1991732f1b8f345296a5204ccf0b5f7b7\", repo tag \"registry.k8s.io/kube-apiserver:v1.28.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:7d2c9256ad576a0b3745b749efe7f4fa8b276ec7ef448fc0f45794ca78eb8625\", size \"34527535\" in 3.410356945s" Sep 4 20:28:15.425872 containerd[1468]: time="2024-09-04T20:28:15.425842472Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.13\" returns image reference \"sha256:5447bb21fa283749e558782cbef636f1991732f1b8f345296a5204ccf0b5f7b7\"" Sep 4 20:28:15.457214 containerd[1468]: time="2024-09-04T20:28:15.457167352Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.13\"" Sep 4 20:28:18.114398 containerd[1468]: time="2024-09-04T20:28:18.114282974Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.28.13\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 20:28:18.115917 containerd[1468]: time="2024-09-04T20:28:18.115641073Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.28.13: active requests=0, bytes read=31849709" Sep 4 20:28:18.116892 containerd[1468]: time="2024-09-04T20:28:18.116836225Z" level=info msg="ImageCreate event name:\"sha256:f1a0a396058d414b391ade9dba6e95d7a71ee665b09fc0fc420126ac21c155a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 20:28:18.123604 containerd[1468]: time="2024-09-04T20:28:18.123493949Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e7b44c1741fe1802d159ffdbd0d1f78d48a4185d7fb1cdf8a112fbb50696f7e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 20:28:18.126513 containerd[1468]: time="2024-09-04T20:28:18.126438273Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.28.13\" with image id \"sha256:f1a0a396058d414b391ade9dba6e95d7a71ee665b09fc0fc420126ac21c155a5\", repo tag \"registry.k8s.io/kube-controller-manager:v1.28.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e7b44c1741fe1802d159ffdbd0d1f78d48a4185d7fb1cdf8a112fbb50696f7e1\", size \"33399655\" in 2.668911175s" Sep 4 20:28:18.126513 containerd[1468]: time="2024-09-04T20:28:18.126505414Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.13\" returns image reference \"sha256:f1a0a396058d414b391ade9dba6e95d7a71ee665b09fc0fc420126ac21c155a5\"" Sep 4 20:28:18.167487 containerd[1468]: time="2024-09-04T20:28:18.167024891Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.13\"" Sep 4 20:28:19.821172 containerd[1468]: time="2024-09-04T20:28:19.819859314Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.28.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 20:28:19.822466 containerd[1468]: time="2024-09-04T20:28:19.821994603Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.28.13: active requests=0, bytes read=17097777" Sep 4 20:28:19.823231 containerd[1468]: time="2024-09-04T20:28:19.823187272Z" level=info msg="ImageCreate event name:\"sha256:a60f64c0f37d085a5fcafef1b2a7adc9be95184dae7d8a5d1dbf6ca4681d328a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 20:28:19.826579 containerd[1468]: time="2024-09-04T20:28:19.826521477Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:efeb791718f4b9c62bd683f5b403da520f3651cb36ad9f800e0f98b595beafa4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 20:28:19.828120 containerd[1468]: time="2024-09-04T20:28:19.828003378Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.28.13\" with image id \"sha256:a60f64c0f37d085a5fcafef1b2a7adc9be95184dae7d8a5d1dbf6ca4681d328a\", repo tag \"registry.k8s.io/kube-scheduler:v1.28.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:efeb791718f4b9c62bd683f5b403da520f3651cb36ad9f800e0f98b595beafa4\", size \"18647741\" in 1.660814853s" Sep 4 20:28:19.828328 containerd[1468]: time="2024-09-04T20:28:19.828295578Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.13\" returns image reference \"sha256:a60f64c0f37d085a5fcafef1b2a7adc9be95184dae7d8a5d1dbf6ca4681d328a\"" Sep 4 20:28:19.857660 containerd[1468]: time="2024-09-04T20:28:19.857602712Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.13\"" Sep 4 20:28:20.303524 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 4 20:28:20.311551 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 20:28:20.483504 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 4 20:28:20.484421 (kubelet)[1913]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 20:28:20.624661 kubelet[1913]: E0904 20:28:20.624381 1913 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 20:28:20.630777 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 20:28:20.631351 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 20:28:21.235449 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3040838372.mount: Deactivated successfully. Sep 4 20:28:21.861316 containerd[1468]: time="2024-09-04T20:28:21.861226645Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.28.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 20:28:21.862490 containerd[1468]: time="2024-09-04T20:28:21.862411310Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.28.13: active requests=0, bytes read=28303449" Sep 4 20:28:21.863518 containerd[1468]: time="2024-09-04T20:28:21.863444942Z" level=info msg="ImageCreate event name:\"sha256:31fde28e72a31599555ab5aba850caa90b9254b760b1007bfb662d086bb672fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 20:28:21.866996 containerd[1468]: time="2024-09-04T20:28:21.866855303Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:537633f399f87ce85d44fc8471ece97a83632198f99b3f7e08770beca95e9fa1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 20:28:21.868031 containerd[1468]: time="2024-09-04T20:28:21.867845432Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.28.13\" with image id 
\"sha256:31fde28e72a31599555ab5aba850caa90b9254b760b1007bfb662d086bb672fc\", repo tag \"registry.k8s.io/kube-proxy:v1.28.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:537633f399f87ce85d44fc8471ece97a83632198f99b3f7e08770beca95e9fa1\", size \"28302468\" in 2.010180004s" Sep 4 20:28:21.868031 containerd[1468]: time="2024-09-04T20:28:21.867901605Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.13\" returns image reference \"sha256:31fde28e72a31599555ab5aba850caa90b9254b760b1007bfb662d086bb672fc\"" Sep 4 20:28:21.907660 containerd[1468]: time="2024-09-04T20:28:21.907588583Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Sep 4 20:28:22.434585 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3298338647.mount: Deactivated successfully. Sep 4 20:28:22.441137 containerd[1468]: time="2024-09-04T20:28:22.440791899Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 20:28:22.442092 containerd[1468]: time="2024-09-04T20:28:22.441846156Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Sep 4 20:28:22.442868 containerd[1468]: time="2024-09-04T20:28:22.442812603Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 20:28:22.445416 containerd[1468]: time="2024-09-04T20:28:22.445175640Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 20:28:22.446498 containerd[1468]: time="2024-09-04T20:28:22.446453692Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest 
\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 538.805714ms" Sep 4 20:28:22.446707 containerd[1468]: time="2024-09-04T20:28:22.446678116Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Sep 4 20:28:22.472399 containerd[1468]: time="2024-09-04T20:28:22.472357292Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Sep 4 20:28:23.072450 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1000056416.mount: Deactivated successfully. Sep 4 20:28:25.470945 containerd[1468]: time="2024-09-04T20:28:25.470862578Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 20:28:25.472130 containerd[1468]: time="2024-09-04T20:28:25.472045524Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Sep 4 20:28:25.472856 containerd[1468]: time="2024-09-04T20:28:25.472466752Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 20:28:25.475676 containerd[1468]: time="2024-09-04T20:28:25.475624632Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 20:28:25.477864 containerd[1468]: time="2024-09-04T20:28:25.476758030Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 3.004360025s" Sep 4 20:28:25.477864 
containerd[1468]: time="2024-09-04T20:28:25.476796091Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Sep 4 20:28:25.503287 containerd[1468]: time="2024-09-04T20:28:25.503238857Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Sep 4 20:28:26.109028 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2598071339.mount: Deactivated successfully. Sep 4 20:28:26.714185 containerd[1468]: time="2024-09-04T20:28:26.713174365Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 20:28:26.715302 containerd[1468]: time="2024-09-04T20:28:26.715204945Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.10.1: active requests=0, bytes read=16191749" Sep 4 20:28:26.716178 containerd[1468]: time="2024-09-04T20:28:26.716102537Z" level=info msg="ImageCreate event name:\"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 20:28:26.719138 containerd[1468]: time="2024-09-04T20:28:26.718353966Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 20:28:26.720252 containerd[1468]: time="2024-09-04T20:28:26.719627734Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.10.1\" with image id \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\", repo tag \"registry.k8s.io/coredns/coredns:v1.10.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\", size \"16190758\" in 1.216337704s" Sep 4 20:28:26.720252 containerd[1468]: time="2024-09-04T20:28:26.719687755Z" level=info 
msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\"" Sep 4 20:28:30.086212 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 20:28:30.102885 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 20:28:30.146499 systemd[1]: Reloading requested from client PID 2066 ('systemctl') (unit session-9.scope)... Sep 4 20:28:30.147929 systemd[1]: Reloading... Sep 4 20:28:30.340345 zram_generator::config[2100]: No configuration found. Sep 4 20:28:30.555384 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 20:28:30.687613 systemd[1]: Reloading finished in 538 ms. Sep 4 20:28:30.760523 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 4 20:28:30.760675 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 4 20:28:30.761325 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 20:28:30.767655 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 20:28:30.930845 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 20:28:30.938971 (kubelet)[2157]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 20:28:31.016107 kubelet[2157]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 20:28:31.017565 kubelet[2157]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Sep 4 20:28:31.017565 kubelet[2157]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 20:28:31.017565 kubelet[2157]: I0904 20:28:31.016387 2157 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 20:28:31.663691 kubelet[2157]: I0904 20:28:31.663552 2157 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Sep 4 20:28:31.663691 kubelet[2157]: I0904 20:28:31.663637 2157 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 20:28:31.664115 kubelet[2157]: I0904 20:28:31.664093 2157 server.go:895] "Client rotation is on, will bootstrap in background" Sep 4 20:28:31.689054 kubelet[2157]: E0904 20:28:31.688543 2157 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://143.198.146.52:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 143.198.146.52:6443: connect: connection refused Sep 4 20:28:31.689054 kubelet[2157]: I0904 20:28:31.688619 2157 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 20:28:31.713660 kubelet[2157]: I0904 20:28:31.713583 2157 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 4 20:28:31.715898 kubelet[2157]: I0904 20:28:31.715824 2157 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 20:28:31.716257 kubelet[2157]: I0904 20:28:31.716212 2157 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Sep 4 20:28:31.716803 kubelet[2157]: I0904 20:28:31.716757 2157 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 20:28:31.716803 kubelet[2157]: I0904 20:28:31.716799 2157 container_manager_linux.go:301] "Creating device plugin manager" Sep 4 20:28:31.717759 kubelet[2157]: I0904 
20:28:31.717702 2157 state_mem.go:36] "Initialized new in-memory state store" Sep 4 20:28:31.720777 kubelet[2157]: I0904 20:28:31.720317 2157 kubelet.go:393] "Attempting to sync node with API server" Sep 4 20:28:31.720777 kubelet[2157]: I0904 20:28:31.720389 2157 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 20:28:31.720777 kubelet[2157]: I0904 20:28:31.720462 2157 kubelet.go:309] "Adding apiserver pod source" Sep 4 20:28:31.720777 kubelet[2157]: I0904 20:28:31.720491 2157 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 20:28:31.721035 kubelet[2157]: W0904 20:28:31.720936 2157 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://143.198.146.52:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.2.1-5-b3ba9b7107&limit=500&resourceVersion=0": dial tcp 143.198.146.52:6443: connect: connection refused Sep 4 20:28:31.721035 kubelet[2157]: E0904 20:28:31.721014 2157 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://143.198.146.52:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.2.1-5-b3ba9b7107&limit=500&resourceVersion=0": dial tcp 143.198.146.52:6443: connect: connection refused Sep 4 20:28:31.723680 kubelet[2157]: W0904 20:28:31.723616 2157 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://143.198.146.52:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 143.198.146.52:6443: connect: connection refused Sep 4 20:28:31.724249 kubelet[2157]: E0904 20:28:31.724225 2157 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://143.198.146.52:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 143.198.146.52:6443: connect: connection refused Sep 4 20:28:31.724523 kubelet[2157]: I0904 20:28:31.724504 2157 
kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Sep 4 20:28:31.730496 kubelet[2157]: W0904 20:28:31.730461 2157 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 4 20:28:31.731865 kubelet[2157]: I0904 20:28:31.731625 2157 server.go:1232] "Started kubelet" Sep 4 20:28:31.734195 kubelet[2157]: I0904 20:28:31.733705 2157 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 20:28:31.735629 kubelet[2157]: I0904 20:28:31.735272 2157 server.go:462] "Adding debug handlers to kubelet server" Sep 4 20:28:31.736203 kubelet[2157]: I0904 20:28:31.736179 2157 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Sep 4 20:28:31.737091 kubelet[2157]: I0904 20:28:31.736670 2157 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 20:28:31.737279 kubelet[2157]: E0904 20:28:31.737014 2157 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3975.2.1-5-b3ba9b7107.17f224798e31c08b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3975.2.1-5-b3ba9b7107", UID:"ci-3975.2.1-5-b3ba9b7107", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3975.2.1-5-b3ba9b7107"}, FirstTimestamp:time.Date(2024, time.September, 4, 20, 28, 31, 
731589259, time.Local), LastTimestamp:time.Date(2024, time.September, 4, 20, 28, 31, 731589259, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ci-3975.2.1-5-b3ba9b7107"}': 'Post "https://143.198.146.52:6443/api/v1/namespaces/default/events": dial tcp 143.198.146.52:6443: connect: connection refused'(may retry after sleeping) Sep 4 20:28:31.738245 kubelet[2157]: E0904 20:28:31.738203 2157 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Sep 4 20:28:31.738245 kubelet[2157]: E0904 20:28:31.738247 2157 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 20:28:31.742123 kubelet[2157]: I0904 20:28:31.740409 2157 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 20:28:31.747128 kubelet[2157]: E0904 20:28:31.746333 2157 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3975.2.1-5-b3ba9b7107\" not found" Sep 4 20:28:31.747128 kubelet[2157]: I0904 20:28:31.746405 2157 volume_manager.go:291] "Starting Kubelet Volume Manager" Sep 4 20:28:31.747128 kubelet[2157]: I0904 20:28:31.746523 2157 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Sep 4 20:28:31.747128 kubelet[2157]: I0904 20:28:31.746635 2157 reconciler_new.go:29] "Reconciler: start to sync state" Sep 4 20:28:31.747605 kubelet[2157]: W0904 20:28:31.747549 2157 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://143.198.146.52:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 143.198.146.52:6443: connect: connection refused 
Sep 4 20:28:31.747707 kubelet[2157]: E0904 20:28:31.747696 2157 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://143.198.146.52:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 143.198.146.52:6443: connect: connection refused Sep 4 20:28:31.750044 kubelet[2157]: E0904 20:28:31.750009 2157 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.146.52:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.2.1-5-b3ba9b7107?timeout=10s\": dial tcp 143.198.146.52:6443: connect: connection refused" interval="200ms" Sep 4 20:28:31.798173 kubelet[2157]: I0904 20:28:31.798002 2157 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 20:28:31.800095 kubelet[2157]: I0904 20:28:31.800046 2157 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 4 20:28:31.802005 kubelet[2157]: I0904 20:28:31.801965 2157 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 4 20:28:31.802556 kubelet[2157]: I0904 20:28:31.800641 2157 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 4 20:28:31.802556 kubelet[2157]: I0904 20:28:31.802505 2157 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 4 20:28:31.802847 kubelet[2157]: I0904 20:28:31.802724 2157 kubelet.go:2303] "Starting kubelet main sync loop" Sep 4 20:28:31.802847 kubelet[2157]: E0904 20:28:31.802815 2157 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 20:28:31.803977 kubelet[2157]: I0904 20:28:31.803863 2157 state_mem.go:36] "Initialized new in-memory state store" Sep 4 20:28:31.806934 kubelet[2157]: W0904 20:28:31.806823 2157 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://143.198.146.52:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 143.198.146.52:6443: connect: connection refused Sep 4 20:28:31.806934 kubelet[2157]: E0904 20:28:31.806898 2157 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://143.198.146.52:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 143.198.146.52:6443: connect: connection refused Sep 4 20:28:31.811161 kubelet[2157]: I0904 20:28:31.810693 2157 policy_none.go:49] "None policy: Start" Sep 4 20:28:31.813473 kubelet[2157]: I0904 20:28:31.813358 2157 memory_manager.go:169] "Starting memorymanager" policy="None" Sep 4 20:28:31.813473 kubelet[2157]: I0904 20:28:31.813395 2157 state_mem.go:35] "Initializing new in-memory state store" Sep 4 20:28:31.823169 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 4 20:28:31.837287 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 4 20:28:31.841590 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Sep 4 20:28:31.848884 kubelet[2157]: I0904 20:28:31.848289 2157 kubelet_node_status.go:70] "Attempting to register node" node="ci-3975.2.1-5-b3ba9b7107" Sep 4 20:28:31.848884 kubelet[2157]: E0904 20:28:31.848761 2157 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://143.198.146.52:6443/api/v1/nodes\": dial tcp 143.198.146.52:6443: connect: connection refused" node="ci-3975.2.1-5-b3ba9b7107" Sep 4 20:28:31.851009 kubelet[2157]: I0904 20:28:31.850970 2157 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 20:28:31.851501 kubelet[2157]: I0904 20:28:31.851449 2157 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 20:28:31.852884 kubelet[2157]: E0904 20:28:31.852758 2157 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3975.2.1-5-b3ba9b7107\" not found" Sep 4 20:28:31.904049 kubelet[2157]: I0904 20:28:31.903971 2157 topology_manager.go:215] "Topology Admit Handler" podUID="ae1c5ef71d01e526822afa281813df27" podNamespace="kube-system" podName="kube-apiserver-ci-3975.2.1-5-b3ba9b7107" Sep 4 20:28:31.905838 kubelet[2157]: I0904 20:28:31.905406 2157 topology_manager.go:215] "Topology Admit Handler" podUID="8c4d1672a6c8505556ffeb9f822b2817" podNamespace="kube-system" podName="kube-controller-manager-ci-3975.2.1-5-b3ba9b7107" Sep 4 20:28:31.906378 kubelet[2157]: I0904 20:28:31.906358 2157 topology_manager.go:215] "Topology Admit Handler" podUID="75594b5dc8d075ceda2f96ee003f6e17" podNamespace="kube-system" podName="kube-scheduler-ci-3975.2.1-5-b3ba9b7107" Sep 4 20:28:31.916580 systemd[1]: Created slice kubepods-burstable-podae1c5ef71d01e526822afa281813df27.slice - libcontainer container kubepods-burstable-podae1c5ef71d01e526822afa281813df27.slice. 
Sep 4 20:28:31.937639 systemd[1]: Created slice kubepods-burstable-pod8c4d1672a6c8505556ffeb9f822b2817.slice - libcontainer container kubepods-burstable-pod8c4d1672a6c8505556ffeb9f822b2817.slice. Sep 4 20:28:31.947667 kubelet[2157]: I0904 20:28:31.947601 2157 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ae1c5ef71d01e526822afa281813df27-ca-certs\") pod \"kube-apiserver-ci-3975.2.1-5-b3ba9b7107\" (UID: \"ae1c5ef71d01e526822afa281813df27\") " pod="kube-system/kube-apiserver-ci-3975.2.1-5-b3ba9b7107" Sep 4 20:28:31.947667 kubelet[2157]: I0904 20:28:31.947684 2157 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8c4d1672a6c8505556ffeb9f822b2817-flexvolume-dir\") pod \"kube-controller-manager-ci-3975.2.1-5-b3ba9b7107\" (UID: \"8c4d1672a6c8505556ffeb9f822b2817\") " pod="kube-system/kube-controller-manager-ci-3975.2.1-5-b3ba9b7107" Sep 4 20:28:31.947858 kubelet[2157]: I0904 20:28:31.947713 2157 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8c4d1672a6c8505556ffeb9f822b2817-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3975.2.1-5-b3ba9b7107\" (UID: \"8c4d1672a6c8505556ffeb9f822b2817\") " pod="kube-system/kube-controller-manager-ci-3975.2.1-5-b3ba9b7107" Sep 4 20:28:31.947858 kubelet[2157]: I0904 20:28:31.947762 2157 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ae1c5ef71d01e526822afa281813df27-k8s-certs\") pod \"kube-apiserver-ci-3975.2.1-5-b3ba9b7107\" (UID: \"ae1c5ef71d01e526822afa281813df27\") " pod="kube-system/kube-apiserver-ci-3975.2.1-5-b3ba9b7107" Sep 4 20:28:31.947858 kubelet[2157]: I0904 20:28:31.947784 2157 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ae1c5ef71d01e526822afa281813df27-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3975.2.1-5-b3ba9b7107\" (UID: \"ae1c5ef71d01e526822afa281813df27\") " pod="kube-system/kube-apiserver-ci-3975.2.1-5-b3ba9b7107" Sep 4 20:28:31.947858 kubelet[2157]: I0904 20:28:31.947804 2157 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8c4d1672a6c8505556ffeb9f822b2817-ca-certs\") pod \"kube-controller-manager-ci-3975.2.1-5-b3ba9b7107\" (UID: \"8c4d1672a6c8505556ffeb9f822b2817\") " pod="kube-system/kube-controller-manager-ci-3975.2.1-5-b3ba9b7107" Sep 4 20:28:31.947858 kubelet[2157]: I0904 20:28:31.947822 2157 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8c4d1672a6c8505556ffeb9f822b2817-k8s-certs\") pod \"kube-controller-manager-ci-3975.2.1-5-b3ba9b7107\" (UID: \"8c4d1672a6c8505556ffeb9f822b2817\") " pod="kube-system/kube-controller-manager-ci-3975.2.1-5-b3ba9b7107" Sep 4 20:28:31.947984 kubelet[2157]: I0904 20:28:31.947842 2157 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8c4d1672a6c8505556ffeb9f822b2817-kubeconfig\") pod \"kube-controller-manager-ci-3975.2.1-5-b3ba9b7107\" (UID: \"8c4d1672a6c8505556ffeb9f822b2817\") " pod="kube-system/kube-controller-manager-ci-3975.2.1-5-b3ba9b7107" Sep 4 20:28:31.947984 kubelet[2157]: I0904 20:28:31.947862 2157 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/75594b5dc8d075ceda2f96ee003f6e17-kubeconfig\") pod \"kube-scheduler-ci-3975.2.1-5-b3ba9b7107\" (UID: 
\"75594b5dc8d075ceda2f96ee003f6e17\") " pod="kube-system/kube-scheduler-ci-3975.2.1-5-b3ba9b7107" Sep 4 20:28:31.950635 kubelet[2157]: E0904 20:28:31.950599 2157 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.146.52:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.2.1-5-b3ba9b7107?timeout=10s\": dial tcp 143.198.146.52:6443: connect: connection refused" interval="400ms" Sep 4 20:28:31.954635 systemd[1]: Created slice kubepods-burstable-pod75594b5dc8d075ceda2f96ee003f6e17.slice - libcontainer container kubepods-burstable-pod75594b5dc8d075ceda2f96ee003f6e17.slice. Sep 4 20:28:32.050363 kubelet[2157]: I0904 20:28:32.049954 2157 kubelet_node_status.go:70] "Attempting to register node" node="ci-3975.2.1-5-b3ba9b7107" Sep 4 20:28:32.050818 kubelet[2157]: E0904 20:28:32.050504 2157 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://143.198.146.52:6443/api/v1/nodes\": dial tcp 143.198.146.52:6443: connect: connection refused" node="ci-3975.2.1-5-b3ba9b7107" Sep 4 20:28:32.236900 kubelet[2157]: E0904 20:28:32.236807 2157 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 4 20:28:32.238054 containerd[1468]: time="2024-09-04T20:28:32.237976819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3975.2.1-5-b3ba9b7107,Uid:ae1c5ef71d01e526822afa281813df27,Namespace:kube-system,Attempt:0,}" Sep 4 20:28:32.251823 kubelet[2157]: E0904 20:28:32.251777 2157 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 4 20:28:32.259143 kubelet[2157]: E0904 20:28:32.258140 2157 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 4 20:28:32.265392 containerd[1468]: time="2024-09-04T20:28:32.265160873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3975.2.1-5-b3ba9b7107,Uid:75594b5dc8d075ceda2f96ee003f6e17,Namespace:kube-system,Attempt:0,}" Sep 4 20:28:32.268102 containerd[1468]: time="2024-09-04T20:28:32.265909887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3975.2.1-5-b3ba9b7107,Uid:8c4d1672a6c8505556ffeb9f822b2817,Namespace:kube-system,Attempt:0,}" Sep 4 20:28:32.351500 kubelet[2157]: E0904 20:28:32.351454 2157 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.146.52:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.2.1-5-b3ba9b7107?timeout=10s\": dial tcp 143.198.146.52:6443: connect: connection refused" interval="800ms" Sep 4 20:28:32.452625 kubelet[2157]: I0904 20:28:32.452574 2157 kubelet_node_status.go:70] "Attempting to register node" node="ci-3975.2.1-5-b3ba9b7107" Sep 4 20:28:32.453012 kubelet[2157]: E0904 20:28:32.452993 2157 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://143.198.146.52:6443/api/v1/nodes\": dial tcp 143.198.146.52:6443: connect: connection refused" node="ci-3975.2.1-5-b3ba9b7107" Sep 4 20:28:32.545308 kubelet[2157]: W0904 20:28:32.545005 2157 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://143.198.146.52:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.2.1-5-b3ba9b7107&limit=500&resourceVersion=0": dial tcp 143.198.146.52:6443: connect: connection refused Sep 4 20:28:32.545308 kubelet[2157]: E0904 20:28:32.545133 2157 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get 
"https://143.198.146.52:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.2.1-5-b3ba9b7107&limit=500&resourceVersion=0": dial tcp 143.198.146.52:6443: connect: connection refused Sep 4 20:28:32.809928 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3426230759.mount: Deactivated successfully. Sep 4 20:28:32.817126 containerd[1468]: time="2024-09-04T20:28:32.816127447Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 20:28:32.818326 containerd[1468]: time="2024-09-04T20:28:32.818272991Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Sep 4 20:28:32.819628 containerd[1468]: time="2024-09-04T20:28:32.819594990Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 20:28:32.820873 containerd[1468]: time="2024-09-04T20:28:32.820821208Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 20:28:32.821709 containerd[1468]: time="2024-09-04T20:28:32.821668768Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 20:28:32.824325 containerd[1468]: time="2024-09-04T20:28:32.824224645Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 20:28:32.824942 containerd[1468]: time="2024-09-04T20:28:32.824891674Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 20:28:32.829019 containerd[1468]: 
time="2024-09-04T20:28:32.828945405Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 20:28:32.831099 containerd[1468]: time="2024-09-04T20:28:32.829483582Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 591.343458ms" Sep 4 20:28:32.834003 containerd[1468]: time="2024-09-04T20:28:32.833951352Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 568.615718ms" Sep 4 20:28:32.836197 containerd[1468]: time="2024-09-04T20:28:32.836137641Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 570.128479ms" Sep 4 20:28:32.873148 kubelet[2157]: W0904 20:28:32.871951 2157 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://143.198.146.52:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 143.198.146.52:6443: connect: connection refused Sep 4 20:28:32.873148 kubelet[2157]: E0904 20:28:32.872038 2157 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch 
*v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://143.198.146.52:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 143.198.146.52:6443: connect: connection refused Sep 4 20:28:32.907290 kubelet[2157]: W0904 20:28:32.900665 2157 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://143.198.146.52:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 143.198.146.52:6443: connect: connection refused Sep 4 20:28:32.907290 kubelet[2157]: E0904 20:28:32.900746 2157 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://143.198.146.52:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 143.198.146.52:6443: connect: connection refused Sep 4 20:28:33.021714 kubelet[2157]: W0904 20:28:33.021541 2157 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://143.198.146.52:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 143.198.146.52:6443: connect: connection refused Sep 4 20:28:33.021714 kubelet[2157]: E0904 20:28:33.021645 2157 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://143.198.146.52:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 143.198.146.52:6443: connect: connection refused Sep 4 20:28:33.052803 containerd[1468]: time="2024-09-04T20:28:33.052270600Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 20:28:33.052803 containerd[1468]: time="2024-09-04T20:28:33.052376237Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 20:28:33.052803 containerd[1468]: time="2024-09-04T20:28:33.052412156Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 20:28:33.052803 containerd[1468]: time="2024-09-04T20:28:33.052437135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 20:28:33.058804 containerd[1468]: time="2024-09-04T20:28:33.057870360Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 20:28:33.058804 containerd[1468]: time="2024-09-04T20:28:33.057968004Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 20:28:33.058804 containerd[1468]: time="2024-09-04T20:28:33.058022755Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 20:28:33.063710 containerd[1468]: time="2024-09-04T20:28:33.062179012Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 20:28:33.069116 containerd[1468]: time="2024-09-04T20:28:33.068936861Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 20:28:33.069341 containerd[1468]: time="2024-09-04T20:28:33.069038561Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 20:28:33.069411 containerd[1468]: time="2024-09-04T20:28:33.069316652Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 20:28:33.069411 containerd[1468]: time="2024-09-04T20:28:33.069350827Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 20:28:33.137081 systemd[1]: Started cri-containerd-694bb263de76aeab9c8f84aa0e1cef42b746c4bca6b484fc9f96c4e12dd6f73a.scope - libcontainer container 694bb263de76aeab9c8f84aa0e1cef42b746c4bca6b484fc9f96c4e12dd6f73a. Sep 4 20:28:33.140280 systemd[1]: Started cri-containerd-a6561bb9ce71317e1cdf44d261cd1bdc0bd7b3270a557411418131fcdf24de2a.scope - libcontainer container a6561bb9ce71317e1cdf44d261cd1bdc0bd7b3270a557411418131fcdf24de2a. Sep 4 20:28:33.147713 systemd[1]: Started cri-containerd-974af319525c210a4617f7babae14a1c76594e83d6a3daced6c239492245fa47.scope - libcontainer container 974af319525c210a4617f7babae14a1c76594e83d6a3daced6c239492245fa47. Sep 4 20:28:33.153129 kubelet[2157]: E0904 20:28:33.152959 2157 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.146.52:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.2.1-5-b3ba9b7107?timeout=10s\": dial tcp 143.198.146.52:6443: connect: connection refused" interval="1.6s" Sep 4 20:28:33.256690 kubelet[2157]: I0904 20:28:33.256647 2157 kubelet_node_status.go:70] "Attempting to register node" node="ci-3975.2.1-5-b3ba9b7107" Sep 4 20:28:33.258330 kubelet[2157]: E0904 20:28:33.257008 2157 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://143.198.146.52:6443/api/v1/nodes\": dial tcp 143.198.146.52:6443: connect: connection refused" node="ci-3975.2.1-5-b3ba9b7107" Sep 4 20:28:33.263543 containerd[1468]: time="2024-09-04T20:28:33.263490624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3975.2.1-5-b3ba9b7107,Uid:75594b5dc8d075ceda2f96ee003f6e17,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"a6561bb9ce71317e1cdf44d261cd1bdc0bd7b3270a557411418131fcdf24de2a\"" Sep 4 20:28:33.266812 kubelet[2157]: E0904 20:28:33.266768 2157 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 4 20:28:33.270228 containerd[1468]: time="2024-09-04T20:28:33.270178253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3975.2.1-5-b3ba9b7107,Uid:ae1c5ef71d01e526822afa281813df27,Namespace:kube-system,Attempt:0,} returns sandbox id \"694bb263de76aeab9c8f84aa0e1cef42b746c4bca6b484fc9f96c4e12dd6f73a\"" Sep 4 20:28:33.272237 kubelet[2157]: E0904 20:28:33.271823 2157 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 4 20:28:33.275789 containerd[1468]: time="2024-09-04T20:28:33.275713039Z" level=info msg="CreateContainer within sandbox \"a6561bb9ce71317e1cdf44d261cd1bdc0bd7b3270a557411418131fcdf24de2a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 4 20:28:33.277370 containerd[1468]: time="2024-09-04T20:28:33.277319976Z" level=info msg="CreateContainer within sandbox \"694bb263de76aeab9c8f84aa0e1cef42b746c4bca6b484fc9f96c4e12dd6f73a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 4 20:28:33.281814 containerd[1468]: time="2024-09-04T20:28:33.281769769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3975.2.1-5-b3ba9b7107,Uid:8c4d1672a6c8505556ffeb9f822b2817,Namespace:kube-system,Attempt:0,} returns sandbox id \"974af319525c210a4617f7babae14a1c76594e83d6a3daced6c239492245fa47\"" Sep 4 20:28:33.283216 kubelet[2157]: E0904 20:28:33.283167 2157 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 
67.207.67.3 67.207.67.2" Sep 4 20:28:33.288111 containerd[1468]: time="2024-09-04T20:28:33.288012837Z" level=info msg="CreateContainer within sandbox \"974af319525c210a4617f7babae14a1c76594e83d6a3daced6c239492245fa47\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 4 20:28:33.305427 containerd[1468]: time="2024-09-04T20:28:33.305340389Z" level=info msg="CreateContainer within sandbox \"a6561bb9ce71317e1cdf44d261cd1bdc0bd7b3270a557411418131fcdf24de2a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"626ab1f7b84b57d8197d86eb9aadc3050d4a58e9a0b21d65ab82954d881ca8e2\"" Sep 4 20:28:33.306898 containerd[1468]: time="2024-09-04T20:28:33.306828552Z" level=info msg="StartContainer for \"626ab1f7b84b57d8197d86eb9aadc3050d4a58e9a0b21d65ab82954d881ca8e2\"" Sep 4 20:28:33.320002 containerd[1468]: time="2024-09-04T20:28:33.319790745Z" level=info msg="CreateContainer within sandbox \"694bb263de76aeab9c8f84aa0e1cef42b746c4bca6b484fc9f96c4e12dd6f73a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c34b6b9c6dbe3ae675a7ecf7f3e070f2eb36e1924c9df80f23b1c9aea6ab734b\"" Sep 4 20:28:33.321778 containerd[1468]: time="2024-09-04T20:28:33.321704532Z" level=info msg="StartContainer for \"c34b6b9c6dbe3ae675a7ecf7f3e070f2eb36e1924c9df80f23b1c9aea6ab734b\"" Sep 4 20:28:33.325446 containerd[1468]: time="2024-09-04T20:28:33.325312679Z" level=info msg="CreateContainer within sandbox \"974af319525c210a4617f7babae14a1c76594e83d6a3daced6c239492245fa47\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"47d696d574903f7fed86bfd8d085cd173ad0d90060358b7e881e345fefe7f2b2\"" Sep 4 20:28:33.326023 containerd[1468]: time="2024-09-04T20:28:33.325842817Z" level=info msg="StartContainer for \"47d696d574903f7fed86bfd8d085cd173ad0d90060358b7e881e345fefe7f2b2\"" Sep 4 20:28:33.367011 systemd[1]: Started cri-containerd-626ab1f7b84b57d8197d86eb9aadc3050d4a58e9a0b21d65ab82954d881ca8e2.scope 
- libcontainer container 626ab1f7b84b57d8197d86eb9aadc3050d4a58e9a0b21d65ab82954d881ca8e2. Sep 4 20:28:33.376361 systemd[1]: Started cri-containerd-c34b6b9c6dbe3ae675a7ecf7f3e070f2eb36e1924c9df80f23b1c9aea6ab734b.scope - libcontainer container c34b6b9c6dbe3ae675a7ecf7f3e070f2eb36e1924c9df80f23b1c9aea6ab734b. Sep 4 20:28:33.397057 systemd[1]: Started cri-containerd-47d696d574903f7fed86bfd8d085cd173ad0d90060358b7e881e345fefe7f2b2.scope - libcontainer container 47d696d574903f7fed86bfd8d085cd173ad0d90060358b7e881e345fefe7f2b2. Sep 4 20:28:33.471306 containerd[1468]: time="2024-09-04T20:28:33.470962118Z" level=info msg="StartContainer for \"c34b6b9c6dbe3ae675a7ecf7f3e070f2eb36e1924c9df80f23b1c9aea6ab734b\" returns successfully" Sep 4 20:28:33.517426 containerd[1468]: time="2024-09-04T20:28:33.517359291Z" level=info msg="StartContainer for \"626ab1f7b84b57d8197d86eb9aadc3050d4a58e9a0b21d65ab82954d881ca8e2\" returns successfully" Sep 4 20:28:33.521998 containerd[1468]: time="2024-09-04T20:28:33.521906940Z" level=info msg="StartContainer for \"47d696d574903f7fed86bfd8d085cd173ad0d90060358b7e881e345fefe7f2b2\" returns successfully" Sep 4 20:28:33.833140 kubelet[2157]: E0904 20:28:33.832936 2157 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 4 20:28:33.839330 kubelet[2157]: E0904 20:28:33.839023 2157 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 4 20:28:33.841710 kubelet[2157]: E0904 20:28:33.841664 2157 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 4 20:28:34.845558 kubelet[2157]: E0904 20:28:34.845256 2157 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 4 20:28:34.851011 kubelet[2157]: E0904 20:28:34.850565 2157 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 4 20:28:34.851011 kubelet[2157]: E0904 20:28:34.850735 2157 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 4 20:28:34.860135 kubelet[2157]: I0904 20:28:34.859041 2157 kubelet_node_status.go:70] "Attempting to register node" node="ci-3975.2.1-5-b3ba9b7107" Sep 4 20:28:35.644209 kubelet[2157]: E0904 20:28:35.644166 2157 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3975.2.1-5-b3ba9b7107\" not found" node="ci-3975.2.1-5-b3ba9b7107" Sep 4 20:28:35.688773 kubelet[2157]: I0904 20:28:35.688702 2157 kubelet_node_status.go:73] "Successfully registered node" node="ci-3975.2.1-5-b3ba9b7107" Sep 4 20:28:35.723780 kubelet[2157]: I0904 20:28:35.723453 2157 apiserver.go:52] "Watching apiserver" Sep 4 20:28:35.748100 kubelet[2157]: I0904 20:28:35.746848 2157 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Sep 4 20:28:35.856864 kubelet[2157]: E0904 20:28:35.856813 2157 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3975.2.1-5-b3ba9b7107\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-3975.2.1-5-b3ba9b7107" Sep 4 20:28:35.857442 kubelet[2157]: E0904 20:28:35.857394 2157 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 4 
20:28:38.561483 systemd[1]: Reloading requested from client PID 2435 ('systemctl') (unit session-9.scope)... Sep 4 20:28:38.561984 systemd[1]: Reloading... Sep 4 20:28:38.684110 zram_generator::config[2478]: No configuration found. Sep 4 20:28:38.800360 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 20:28:38.894484 systemd[1]: Reloading finished in 331 ms. Sep 4 20:28:38.939497 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 20:28:38.940623 kubelet[2157]: I0904 20:28:38.939678 2157 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 20:28:38.950880 systemd[1]: kubelet.service: Deactivated successfully. Sep 4 20:28:38.951174 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 20:28:38.951246 systemd[1]: kubelet.service: Consumed 1.303s CPU time, 111.5M memory peak, 0B memory swap peak. Sep 4 20:28:38.957481 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 20:28:39.138016 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 20:28:39.151133 (kubelet)[2523]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 20:28:39.240130 kubelet[2523]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 20:28:39.240525 kubelet[2523]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Sep 4 20:28:39.240681 kubelet[2523]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 20:28:39.240785 kubelet[2523]: I0904 20:28:39.240752 2523 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 20:28:39.247347 kubelet[2523]: I0904 20:28:39.247286 2523 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Sep 4 20:28:39.247347 kubelet[2523]: I0904 20:28:39.247327 2523 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 20:28:39.247642 kubelet[2523]: I0904 20:28:39.247586 2523 server.go:895] "Client rotation is on, will bootstrap in background" Sep 4 20:28:39.249525 kubelet[2523]: I0904 20:28:39.249495 2523 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 4 20:28:39.251851 kubelet[2523]: I0904 20:28:39.250789 2523 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 20:28:39.263827 kubelet[2523]: I0904 20:28:39.263109 2523 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 4 20:28:39.263827 kubelet[2523]: I0904 20:28:39.263435 2523 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 20:28:39.264060 kubelet[2523]: I0904 20:28:39.263892 2523 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Sep 4 20:28:39.264060 kubelet[2523]: I0904 20:28:39.263937 2523 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 20:28:39.264060 kubelet[2523]: I0904 20:28:39.263956 2523 container_manager_linux.go:301] "Creating device plugin manager" Sep 4 20:28:39.264060 kubelet[2523]: I0904 
20:28:39.264007 2523 state_mem.go:36] "Initialized new in-memory state store" Sep 4 20:28:39.265400 kubelet[2523]: I0904 20:28:39.264250 2523 kubelet.go:393] "Attempting to sync node with API server" Sep 4 20:28:39.265400 kubelet[2523]: I0904 20:28:39.264271 2523 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 20:28:39.265400 kubelet[2523]: I0904 20:28:39.264298 2523 kubelet.go:309] "Adding apiserver pod source" Sep 4 20:28:39.265400 kubelet[2523]: I0904 20:28:39.264329 2523 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 20:28:39.268907 kubelet[2523]: I0904 20:28:39.268881 2523 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Sep 4 20:28:39.270133 kubelet[2523]: I0904 20:28:39.269647 2523 server.go:1232] "Started kubelet" Sep 4 20:28:39.277987 kubelet[2523]: I0904 20:28:39.274383 2523 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Sep 4 20:28:39.277987 kubelet[2523]: I0904 20:28:39.274671 2523 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 20:28:39.277987 kubelet[2523]: I0904 20:28:39.274729 2523 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 20:28:39.281870 kubelet[2523]: E0904 20:28:39.281824 2523 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Sep 4 20:28:39.281870 kubelet[2523]: E0904 20:28:39.281866 2523 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 20:28:39.284039 kubelet[2523]: I0904 20:28:39.282462 2523 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 20:28:39.295177 kubelet[2523]: I0904 20:28:39.294437 2523 server.go:462] "Adding debug handlers to kubelet server" Sep 4 20:28:39.298197 kubelet[2523]: I0904 20:28:39.297738 2523 volume_manager.go:291] "Starting Kubelet Volume Manager" Sep 4 20:28:39.302624 kubelet[2523]: I0904 20:28:39.300037 2523 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Sep 4 20:28:39.302624 kubelet[2523]: I0904 20:28:39.300273 2523 reconciler_new.go:29] "Reconciler: start to sync state" Sep 4 20:28:39.350685 kubelet[2523]: I0904 20:28:39.350631 2523 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 20:28:39.355050 kubelet[2523]: I0904 20:28:39.355009 2523 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 4 20:28:39.356081 kubelet[2523]: I0904 20:28:39.356044 2523 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 4 20:28:39.356165 kubelet[2523]: I0904 20:28:39.356146 2523 kubelet.go:2303] "Starting kubelet main sync loop" Sep 4 20:28:39.356291 kubelet[2523]: E0904 20:28:39.356277 2523 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 20:28:39.399613 kubelet[2523]: I0904 20:28:39.399587 2523 kubelet_node_status.go:70] "Attempting to register node" node="ci-3975.2.1-5-b3ba9b7107" Sep 4 20:28:39.413116 kubelet[2523]: I0904 20:28:39.412982 2523 kubelet_node_status.go:108] "Node was previously registered" node="ci-3975.2.1-5-b3ba9b7107" Sep 4 20:28:39.414755 kubelet[2523]: I0904 20:28:39.414524 2523 kubelet_node_status.go:73] "Successfully registered node" node="ci-3975.2.1-5-b3ba9b7107" Sep 4 20:28:39.442141 kubelet[2523]: I0904 
20:28:39.441829 2523 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 4 20:28:39.442141 kubelet[2523]: I0904 20:28:39.441850 2523 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 4 20:28:39.442141 kubelet[2523]: I0904 20:28:39.441869 2523 state_mem.go:36] "Initialized new in-memory state store" Sep 4 20:28:39.442141 kubelet[2523]: I0904 20:28:39.442029 2523 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 4 20:28:39.442141 kubelet[2523]: I0904 20:28:39.442050 2523 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 4 20:28:39.442141 kubelet[2523]: I0904 20:28:39.442057 2523 policy_none.go:49] "None policy: Start" Sep 4 20:28:39.443639 kubelet[2523]: I0904 20:28:39.443323 2523 memory_manager.go:169] "Starting memorymanager" policy="None" Sep 4 20:28:39.443639 kubelet[2523]: I0904 20:28:39.443350 2523 state_mem.go:35] "Initializing new in-memory state store" Sep 4 20:28:39.443997 kubelet[2523]: I0904 20:28:39.443916 2523 state_mem.go:75] "Updated machine memory state" Sep 4 20:28:39.451760 kubelet[2523]: I0904 20:28:39.451733 2523 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 20:28:39.456095 kubelet[2523]: I0904 20:28:39.455928 2523 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 20:28:39.457521 kubelet[2523]: I0904 20:28:39.456347 2523 topology_manager.go:215] "Topology Admit Handler" podUID="ae1c5ef71d01e526822afa281813df27" podNamespace="kube-system" podName="kube-apiserver-ci-3975.2.1-5-b3ba9b7107" Sep 4 20:28:39.457521 kubelet[2523]: I0904 20:28:39.456498 2523 topology_manager.go:215] "Topology Admit Handler" podUID="8c4d1672a6c8505556ffeb9f822b2817" podNamespace="kube-system" podName="kube-controller-manager-ci-3975.2.1-5-b3ba9b7107" Sep 4 20:28:39.457521 kubelet[2523]: I0904 20:28:39.456600 2523 topology_manager.go:215] "Topology Admit Handler" podUID="75594b5dc8d075ceda2f96ee003f6e17" podNamespace="kube-system" 
podName="kube-scheduler-ci-3975.2.1-5-b3ba9b7107"
Sep 4 20:28:39.479221 kubelet[2523]: W0904 20:28:39.476594 2523 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Sep 4 20:28:39.479221 kubelet[2523]: W0904 20:28:39.476773 2523 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Sep 4 20:28:39.479221 kubelet[2523]: W0904 20:28:39.477976 2523 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Sep 4 20:28:39.503109 kubelet[2523]: I0904 20:28:39.501769 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8c4d1672a6c8505556ffeb9f822b2817-flexvolume-dir\") pod \"kube-controller-manager-ci-3975.2.1-5-b3ba9b7107\" (UID: \"8c4d1672a6c8505556ffeb9f822b2817\") " pod="kube-system/kube-controller-manager-ci-3975.2.1-5-b3ba9b7107"
Sep 4 20:28:39.503109 kubelet[2523]: I0904 20:28:39.501840 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8c4d1672a6c8505556ffeb9f822b2817-k8s-certs\") pod \"kube-controller-manager-ci-3975.2.1-5-b3ba9b7107\" (UID: \"8c4d1672a6c8505556ffeb9f822b2817\") " pod="kube-system/kube-controller-manager-ci-3975.2.1-5-b3ba9b7107"
Sep 4 20:28:39.503109 kubelet[2523]: I0904 20:28:39.501874 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8c4d1672a6c8505556ffeb9f822b2817-kubeconfig\") pod \"kube-controller-manager-ci-3975.2.1-5-b3ba9b7107\" (UID: \"8c4d1672a6c8505556ffeb9f822b2817\") " pod="kube-system/kube-controller-manager-ci-3975.2.1-5-b3ba9b7107"
Sep 4 20:28:39.503109 kubelet[2523]: I0904 20:28:39.501900 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8c4d1672a6c8505556ffeb9f822b2817-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3975.2.1-5-b3ba9b7107\" (UID: \"8c4d1672a6c8505556ffeb9f822b2817\") " pod="kube-system/kube-controller-manager-ci-3975.2.1-5-b3ba9b7107"
Sep 4 20:28:39.503109 kubelet[2523]: I0904 20:28:39.501930 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/75594b5dc8d075ceda2f96ee003f6e17-kubeconfig\") pod \"kube-scheduler-ci-3975.2.1-5-b3ba9b7107\" (UID: \"75594b5dc8d075ceda2f96ee003f6e17\") " pod="kube-system/kube-scheduler-ci-3975.2.1-5-b3ba9b7107"
Sep 4 20:28:39.503383 kubelet[2523]: I0904 20:28:39.501950 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ae1c5ef71d01e526822afa281813df27-ca-certs\") pod \"kube-apiserver-ci-3975.2.1-5-b3ba9b7107\" (UID: \"ae1c5ef71d01e526822afa281813df27\") " pod="kube-system/kube-apiserver-ci-3975.2.1-5-b3ba9b7107"
Sep 4 20:28:39.503383 kubelet[2523]: I0904 20:28:39.501968 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ae1c5ef71d01e526822afa281813df27-k8s-certs\") pod \"kube-apiserver-ci-3975.2.1-5-b3ba9b7107\" (UID: \"ae1c5ef71d01e526822afa281813df27\") " pod="kube-system/kube-apiserver-ci-3975.2.1-5-b3ba9b7107"
Sep 4 20:28:39.503383 kubelet[2523]: I0904 20:28:39.501989 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ae1c5ef71d01e526822afa281813df27-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3975.2.1-5-b3ba9b7107\" (UID: \"ae1c5ef71d01e526822afa281813df27\") " pod="kube-system/kube-apiserver-ci-3975.2.1-5-b3ba9b7107"
Sep 4 20:28:39.503383 kubelet[2523]: I0904 20:28:39.502009 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8c4d1672a6c8505556ffeb9f822b2817-ca-certs\") pod \"kube-controller-manager-ci-3975.2.1-5-b3ba9b7107\" (UID: \"8c4d1672a6c8505556ffeb9f822b2817\") " pod="kube-system/kube-controller-manager-ci-3975.2.1-5-b3ba9b7107"
Sep 4 20:28:39.779318 kubelet[2523]: E0904 20:28:39.779278 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 4 20:28:39.781686 kubelet[2523]: E0904 20:28:39.779820 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 4 20:28:39.781686 kubelet[2523]: E0904 20:28:39.779974 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 4 20:28:40.266376 kubelet[2523]: I0904 20:28:40.266106 2523 apiserver.go:52] "Watching apiserver"
Sep 4 20:28:40.301123 kubelet[2523]: I0904 20:28:40.300400 2523 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Sep 4 20:28:40.389307 kubelet[2523]: E0904 20:28:40.389135 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 4 20:28:40.390236 kubelet[2523]: E0904 20:28:40.390142 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 4 20:28:40.413338 kubelet[2523]: W0904 20:28:40.408392 2523 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Sep 4 20:28:40.413338 kubelet[2523]: E0904 20:28:40.408491 2523 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3975.2.1-5-b3ba9b7107\" already exists" pod="kube-system/kube-apiserver-ci-3975.2.1-5-b3ba9b7107"
Sep 4 20:28:40.413338 kubelet[2523]: E0904 20:28:40.409236 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 4 20:28:40.465884 kubelet[2523]: I0904 20:28:40.465817 2523 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3975.2.1-5-b3ba9b7107" podStartSLOduration=1.464686429 podCreationTimestamp="2024-09-04 20:28:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 20:28:40.449972566 +0000 UTC m=+1.292954654" watchObservedRunningTime="2024-09-04 20:28:40.464686429 +0000 UTC m=+1.307668505"
Sep 4 20:28:40.479609 kubelet[2523]: I0904 20:28:40.478571 2523 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3975.2.1-5-b3ba9b7107" podStartSLOduration=1.478516722 podCreationTimestamp="2024-09-04 20:28:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 20:28:40.466002546 +0000 UTC m=+1.308984607" watchObservedRunningTime="2024-09-04 20:28:40.478516722 +0000 UTC m=+1.321498811"
Sep 4 20:28:40.495365 kubelet[2523]: I0904 20:28:40.495331 2523 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3975.2.1-5-b3ba9b7107" podStartSLOduration=1.495277319 podCreationTimestamp="2024-09-04 20:28:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 20:28:40.479155657 +0000 UTC m=+1.322137747" watchObservedRunningTime="2024-09-04 20:28:40.495277319 +0000 UTC m=+1.338259395"
Sep 4 20:28:41.395076 kubelet[2523]: E0904 20:28:41.395001 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 4 20:28:42.163582 update_engine[1451]: I0904 20:28:42.163494 1451 update_attempter.cc:509] Updating boot flags...
Sep 4 20:28:42.233125 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2574)
Sep 4 20:28:42.359286 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2575)
Sep 4 20:28:44.015232 kubelet[2523]: E0904 20:28:44.015153 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 4 20:28:44.406581 kubelet[2523]: E0904 20:28:44.406415 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 4 20:28:44.574423 kubelet[2523]: E0904 20:28:44.574122 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 4 20:28:44.706010 sudo[1666]: pam_unix(sudo:session): session closed for user root
Sep 4 20:28:44.711258 sshd[1663]: pam_unix(sshd:session): session closed for user core
Sep 4 20:28:44.716516 systemd[1]: sshd@8-143.198.146.52:22-139.178.68.195:49722.service: Deactivated successfully.
Sep 4 20:28:44.719618 systemd[1]: session-9.scope: Deactivated successfully.
Sep 4 20:28:44.720046 systemd[1]: session-9.scope: Consumed 6.476s CPU time, 136.9M memory peak, 0B memory swap peak.
Sep 4 20:28:44.721294 systemd-logind[1447]: Session 9 logged out. Waiting for processes to exit.
Sep 4 20:28:44.722492 systemd-logind[1447]: Removed session 9.
Sep 4 20:28:45.409641 kubelet[2523]: E0904 20:28:45.409594 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 4 20:28:50.092859 kubelet[2523]: E0904 20:28:50.092456 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 4 20:28:53.235176 kubelet[2523]: I0904 20:28:53.233253 2523 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 4 20:28:53.236417 containerd[1468]: time="2024-09-04T20:28:53.235710291Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 4 20:28:53.238527 kubelet[2523]: I0904 20:28:53.236993 2523 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 4 20:28:53.495767 kubelet[2523]: I0904 20:28:53.495390 2523 topology_manager.go:215] "Topology Admit Handler" podUID="bead5e8a-51ae-4592-8796-32604b4a17d7" podNamespace="kube-system" podName="kube-proxy-vzdbx"
Sep 4 20:28:53.514356 systemd[1]: Created slice kubepods-besteffort-podbead5e8a_51ae_4592_8796_32604b4a17d7.slice - libcontainer container kubepods-besteffort-podbead5e8a_51ae_4592_8796_32604b4a17d7.slice.
Sep 4 20:28:53.585550 kubelet[2523]: I0904 20:28:53.585240 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms4zq\" (UniqueName: \"kubernetes.io/projected/bead5e8a-51ae-4592-8796-32604b4a17d7-kube-api-access-ms4zq\") pod \"kube-proxy-vzdbx\" (UID: \"bead5e8a-51ae-4592-8796-32604b4a17d7\") " pod="kube-system/kube-proxy-vzdbx"
Sep 4 20:28:53.585550 kubelet[2523]: I0904 20:28:53.585324 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/bead5e8a-51ae-4592-8796-32604b4a17d7-kube-proxy\") pod \"kube-proxy-vzdbx\" (UID: \"bead5e8a-51ae-4592-8796-32604b4a17d7\") " pod="kube-system/kube-proxy-vzdbx"
Sep 4 20:28:53.585550 kubelet[2523]: I0904 20:28:53.585358 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bead5e8a-51ae-4592-8796-32604b4a17d7-xtables-lock\") pod \"kube-proxy-vzdbx\" (UID: \"bead5e8a-51ae-4592-8796-32604b4a17d7\") " pod="kube-system/kube-proxy-vzdbx"
Sep 4 20:28:53.585550 kubelet[2523]: I0904 20:28:53.585390 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bead5e8a-51ae-4592-8796-32604b4a17d7-lib-modules\") pod \"kube-proxy-vzdbx\" (UID: \"bead5e8a-51ae-4592-8796-32604b4a17d7\") " pod="kube-system/kube-proxy-vzdbx"
Sep 4 20:28:53.703395 kubelet[2523]: E0904 20:28:53.703327 2523 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Sep 4 20:28:53.703395 kubelet[2523]: E0904 20:28:53.703403 2523 projected.go:198] Error preparing data for projected volume kube-api-access-ms4zq for pod kube-system/kube-proxy-vzdbx: configmap "kube-root-ca.crt" not found
Sep 4 20:28:53.706812 kubelet[2523]: E0904 20:28:53.706753 2523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bead5e8a-51ae-4592-8796-32604b4a17d7-kube-api-access-ms4zq podName:bead5e8a-51ae-4592-8796-32604b4a17d7 nodeName:}" failed. No retries permitted until 2024-09-04 20:28:54.203476178 +0000 UTC m=+15.046458258 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-ms4zq" (UniqueName: "kubernetes.io/projected/bead5e8a-51ae-4592-8796-32604b4a17d7-kube-api-access-ms4zq") pod "kube-proxy-vzdbx" (UID: "bead5e8a-51ae-4592-8796-32604b4a17d7") : configmap "kube-root-ca.crt" not found
Sep 4 20:28:53.872986 kubelet[2523]: I0904 20:28:53.870241 2523 topology_manager.go:215] "Topology Admit Handler" podUID="0fb0e85e-9c0a-42e4-8ba1-510c542fd7f1" podNamespace="tigera-operator" podName="tigera-operator-5d56685c77-rpmkp"
Sep 4 20:28:53.884645 systemd[1]: Created slice kubepods-besteffort-pod0fb0e85e_9c0a_42e4_8ba1_510c542fd7f1.slice - libcontainer container kubepods-besteffort-pod0fb0e85e_9c0a_42e4_8ba1_510c542fd7f1.slice.
Sep 4 20:28:53.990362 kubelet[2523]: I0904 20:28:53.990230 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0fb0e85e-9c0a-42e4-8ba1-510c542fd7f1-var-lib-calico\") pod \"tigera-operator-5d56685c77-rpmkp\" (UID: \"0fb0e85e-9c0a-42e4-8ba1-510c542fd7f1\") " pod="tigera-operator/tigera-operator-5d56685c77-rpmkp"
Sep 4 20:28:53.990569 kubelet[2523]: I0904 20:28:53.990333 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcj8l\" (UniqueName: \"kubernetes.io/projected/0fb0e85e-9c0a-42e4-8ba1-510c542fd7f1-kube-api-access-jcj8l\") pod \"tigera-operator-5d56685c77-rpmkp\" (UID: \"0fb0e85e-9c0a-42e4-8ba1-510c542fd7f1\") " pod="tigera-operator/tigera-operator-5d56685c77-rpmkp"
Sep 4 20:28:54.194491 containerd[1468]: time="2024-09-04T20:28:54.194116395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-rpmkp,Uid:0fb0e85e-9c0a-42e4-8ba1-510c542fd7f1,Namespace:tigera-operator,Attempt:0,}"
Sep 4 20:28:54.250842 containerd[1468]: time="2024-09-04T20:28:54.250568073Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 20:28:54.250842 containerd[1468]: time="2024-09-04T20:28:54.250679738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 20:28:54.250842 containerd[1468]: time="2024-09-04T20:28:54.250721002Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 20:28:54.250842 containerd[1468]: time="2024-09-04T20:28:54.250752175Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 20:28:54.283892 systemd[1]: run-containerd-runc-k8s.io-8a4339cabdb2c8a9653361371af40aaa1dc83138c44de7e92bd34c11c92c17e5-runc.E0ypFr.mount: Deactivated successfully.
Sep 4 20:28:54.298404 systemd[1]: Started cri-containerd-8a4339cabdb2c8a9653361371af40aaa1dc83138c44de7e92bd34c11c92c17e5.scope - libcontainer container 8a4339cabdb2c8a9653361371af40aaa1dc83138c44de7e92bd34c11c92c17e5.
Sep 4 20:28:54.370521 containerd[1468]: time="2024-09-04T20:28:54.370433752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-rpmkp,Uid:0fb0e85e-9c0a-42e4-8ba1-510c542fd7f1,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"8a4339cabdb2c8a9653361371af40aaa1dc83138c44de7e92bd34c11c92c17e5\""
Sep 4 20:28:54.376106 containerd[1468]: time="2024-09-04T20:28:54.375005234Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\""
Sep 4 20:28:54.428298 kubelet[2523]: E0904 20:28:54.428245 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 4 20:28:54.431732 containerd[1468]: time="2024-09-04T20:28:54.431032366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vzdbx,Uid:bead5e8a-51ae-4592-8796-32604b4a17d7,Namespace:kube-system,Attempt:0,}"
Sep 4 20:28:54.464910 containerd[1468]: time="2024-09-04T20:28:54.464704579Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 20:28:54.464910 containerd[1468]: time="2024-09-04T20:28:54.464804788Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 20:28:54.464910 containerd[1468]: time="2024-09-04T20:28:54.464828002Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 20:28:54.465321 containerd[1468]: time="2024-09-04T20:28:54.464997295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 20:28:54.489967 systemd[1]: Started cri-containerd-9ce673703893c3aaa3ec9eae71944d1b586daa3a02f9ccacc6725a84f989a0fa.scope - libcontainer container 9ce673703893c3aaa3ec9eae71944d1b586daa3a02f9ccacc6725a84f989a0fa.
Sep 4 20:28:54.528921 containerd[1468]: time="2024-09-04T20:28:54.528823973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vzdbx,Uid:bead5e8a-51ae-4592-8796-32604b4a17d7,Namespace:kube-system,Attempt:0,} returns sandbox id \"9ce673703893c3aaa3ec9eae71944d1b586daa3a02f9ccacc6725a84f989a0fa\""
Sep 4 20:28:54.530168 kubelet[2523]: E0904 20:28:54.530137 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 4 20:28:54.536556 containerd[1468]: time="2024-09-04T20:28:54.536481452Z" level=info msg="CreateContainer within sandbox \"9ce673703893c3aaa3ec9eae71944d1b586daa3a02f9ccacc6725a84f989a0fa\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 4 20:28:54.559621 containerd[1468]: time="2024-09-04T20:28:54.559560222Z" level=info msg="CreateContainer within sandbox \"9ce673703893c3aaa3ec9eae71944d1b586daa3a02f9ccacc6725a84f989a0fa\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"bea8b0607c09def11739cde8be1a59ebcf2f31795a99bbd5d1179f13036709f9\""
Sep 4 20:28:54.562396 containerd[1468]: time="2024-09-04T20:28:54.561483437Z" level=info msg="StartContainer for \"bea8b0607c09def11739cde8be1a59ebcf2f31795a99bbd5d1179f13036709f9\""
Sep 4 20:28:54.610408 systemd[1]: Started cri-containerd-bea8b0607c09def11739cde8be1a59ebcf2f31795a99bbd5d1179f13036709f9.scope - libcontainer container bea8b0607c09def11739cde8be1a59ebcf2f31795a99bbd5d1179f13036709f9.
Sep 4 20:28:54.661019 containerd[1468]: time="2024-09-04T20:28:54.660784675Z" level=info msg="StartContainer for \"bea8b0607c09def11739cde8be1a59ebcf2f31795a99bbd5d1179f13036709f9\" returns successfully"
Sep 4 20:28:55.432200 kubelet[2523]: E0904 20:28:55.432118 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 4 20:28:55.450554 kubelet[2523]: I0904 20:28:55.450468 2523 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-vzdbx" podStartSLOduration=2.450304236 podCreationTimestamp="2024-09-04 20:28:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 20:28:55.449626597 +0000 UTC m=+16.292608686" watchObservedRunningTime="2024-09-04 20:28:55.450304236 +0000 UTC m=+16.293286318"
Sep 4 20:28:55.717935 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4091963559.mount: Deactivated successfully.
Sep 4 20:28:56.627814 containerd[1468]: time="2024-09-04T20:28:56.627733518Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 20:28:56.629419 containerd[1468]: time="2024-09-04T20:28:56.629335344Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.3: active requests=0, bytes read=22136565"
Sep 4 20:28:56.632085 containerd[1468]: time="2024-09-04T20:28:56.630243074Z" level=info msg="ImageCreate event name:\"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 20:28:56.633143 containerd[1468]: time="2024-09-04T20:28:56.633103220Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 20:28:56.634622 containerd[1468]: time="2024-09-04T20:28:56.634580729Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.3\" with image id \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\", repo tag \"quay.io/tigera/operator:v1.34.3\", repo digest \"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\", size \"22130728\" in 2.259508899s"
Sep 4 20:28:56.634865 containerd[1468]: time="2024-09-04T20:28:56.634837776Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\" returns image reference \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\""
Sep 4 20:28:56.645111 containerd[1468]: time="2024-09-04T20:28:56.645051017Z" level=info msg="CreateContainer within sandbox \"8a4339cabdb2c8a9653361371af40aaa1dc83138c44de7e92bd34c11c92c17e5\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Sep 4 20:28:56.668919 containerd[1468]: time="2024-09-04T20:28:56.668806481Z" level=info msg="CreateContainer within sandbox \"8a4339cabdb2c8a9653361371af40aaa1dc83138c44de7e92bd34c11c92c17e5\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"3cadfdc34e6958948cb01356f83eb7b3b0501bf5843d831d908faca93e0a9b7b\""
Sep 4 20:28:56.669795 containerd[1468]: time="2024-09-04T20:28:56.669739221Z" level=info msg="StartContainer for \"3cadfdc34e6958948cb01356f83eb7b3b0501bf5843d831d908faca93e0a9b7b\""
Sep 4 20:28:56.734462 systemd[1]: Started cri-containerd-3cadfdc34e6958948cb01356f83eb7b3b0501bf5843d831d908faca93e0a9b7b.scope - libcontainer container 3cadfdc34e6958948cb01356f83eb7b3b0501bf5843d831d908faca93e0a9b7b.
Sep 4 20:28:56.780946 containerd[1468]: time="2024-09-04T20:28:56.780872518Z" level=info msg="StartContainer for \"3cadfdc34e6958948cb01356f83eb7b3b0501bf5843d831d908faca93e0a9b7b\" returns successfully"
Sep 4 20:28:57.458319 kubelet[2523]: I0904 20:28:57.458252 2523 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5d56685c77-rpmkp" podStartSLOduration=2.1918486 podCreationTimestamp="2024-09-04 20:28:53 +0000 UTC" firstStartedPulling="2024-09-04 20:28:54.373031168 +0000 UTC m=+15.216013244" lastFinishedPulling="2024-09-04 20:28:56.639368837 +0000 UTC m=+17.482350901" observedRunningTime="2024-09-04 20:28:57.457766153 +0000 UTC m=+18.300748239" watchObservedRunningTime="2024-09-04 20:28:57.458186257 +0000 UTC m=+18.301168354"
Sep 4 20:29:00.076374 kubelet[2523]: I0904 20:29:00.075677 2523 topology_manager.go:215] "Topology Admit Handler" podUID="8f03d54b-2000-4e9c-9ab1-f1c74ce0e35a" podNamespace="calico-system" podName="calico-typha-b948c989c-c2ffw"
Sep 4 20:29:00.092676 systemd[1]: Created slice kubepods-besteffort-pod8f03d54b_2000_4e9c_9ab1_f1c74ce0e35a.slice - libcontainer container kubepods-besteffort-pod8f03d54b_2000_4e9c_9ab1_f1c74ce0e35a.slice.
Sep 4 20:29:00.200015 kubelet[2523]: I0904 20:29:00.199954 2523 topology_manager.go:215] "Topology Admit Handler" podUID="e9f3c371-c186-4cce-ade4-37549aa5ab46" podNamespace="calico-system" podName="calico-node-n9trv"
Sep 4 20:29:00.214253 systemd[1]: Created slice kubepods-besteffort-pode9f3c371_c186_4cce_ade4_37549aa5ab46.slice - libcontainer container kubepods-besteffort-pode9f3c371_c186_4cce_ade4_37549aa5ab46.slice.
Sep 4 20:29:00.237463 kubelet[2523]: I0904 20:29:00.237000 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/8f03d54b-2000-4e9c-9ab1-f1c74ce0e35a-typha-certs\") pod \"calico-typha-b948c989c-c2ffw\" (UID: \"8f03d54b-2000-4e9c-9ab1-f1c74ce0e35a\") " pod="calico-system/calico-typha-b948c989c-c2ffw"
Sep 4 20:29:00.237463 kubelet[2523]: I0904 20:29:00.237097 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f03d54b-2000-4e9c-9ab1-f1c74ce0e35a-tigera-ca-bundle\") pod \"calico-typha-b948c989c-c2ffw\" (UID: \"8f03d54b-2000-4e9c-9ab1-f1c74ce0e35a\") " pod="calico-system/calico-typha-b948c989c-c2ffw"
Sep 4 20:29:00.237463 kubelet[2523]: I0904 20:29:00.237131 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csdxz\" (UniqueName: \"kubernetes.io/projected/8f03d54b-2000-4e9c-9ab1-f1c74ce0e35a-kube-api-access-csdxz\") pod \"calico-typha-b948c989c-c2ffw\" (UID: \"8f03d54b-2000-4e9c-9ab1-f1c74ce0e35a\") " pod="calico-system/calico-typha-b948c989c-c2ffw"
Sep 4 20:29:00.318758 kubelet[2523]: I0904 20:29:00.318705 2523 topology_manager.go:215] "Topology Admit Handler" podUID="19278f8b-d3ea-467e-a88b-64888b0edecc" podNamespace="calico-system" podName="csi-node-driver-wkgcv"
Sep 4 20:29:00.319175 kubelet[2523]: E0904 20:29:00.319040 2523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wkgcv" podUID="19278f8b-d3ea-467e-a88b-64888b0edecc"
Sep 4 20:29:00.339647 kubelet[2523]: I0904 20:29:00.338873 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e9f3c371-c186-4cce-ade4-37549aa5ab46-cni-log-dir\") pod \"calico-node-n9trv\" (UID: \"e9f3c371-c186-4cce-ade4-37549aa5ab46\") " pod="calico-system/calico-node-n9trv"
Sep 4 20:29:00.339647 kubelet[2523]: I0904 20:29:00.339140 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kh8vl\" (UniqueName: \"kubernetes.io/projected/e9f3c371-c186-4cce-ade4-37549aa5ab46-kube-api-access-kh8vl\") pod \"calico-node-n9trv\" (UID: \"e9f3c371-c186-4cce-ade4-37549aa5ab46\") " pod="calico-system/calico-node-n9trv"
Sep 4 20:29:00.339647 kubelet[2523]: I0904 20:29:00.339194 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/19278f8b-d3ea-467e-a88b-64888b0edecc-registration-dir\") pod \"csi-node-driver-wkgcv\" (UID: \"19278f8b-d3ea-467e-a88b-64888b0edecc\") " pod="calico-system/csi-node-driver-wkgcv"
Sep 4 20:29:00.339647 kubelet[2523]: I0904 20:29:00.339259 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/19278f8b-d3ea-467e-a88b-64888b0edecc-varrun\") pod \"csi-node-driver-wkgcv\" (UID: \"19278f8b-d3ea-467e-a88b-64888b0edecc\") " pod="calico-system/csi-node-driver-wkgcv"
Sep 4 20:29:00.339647 kubelet[2523]: I0904 20:29:00.339308 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/19278f8b-d3ea-467e-a88b-64888b0edecc-kubelet-dir\") pod \"csi-node-driver-wkgcv\" (UID: \"19278f8b-d3ea-467e-a88b-64888b0edecc\") " pod="calico-system/csi-node-driver-wkgcv"
Sep 4 20:29:00.339909 kubelet[2523]: I0904 20:29:00.339350 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e9f3c371-c186-4cce-ade4-37549aa5ab46-lib-modules\") pod \"calico-node-n9trv\" (UID: \"e9f3c371-c186-4cce-ade4-37549aa5ab46\") " pod="calico-system/calico-node-n9trv"
Sep 4 20:29:00.339909 kubelet[2523]: I0904 20:29:00.339389 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e9f3c371-c186-4cce-ade4-37549aa5ab46-var-run-calico\") pod \"calico-node-n9trv\" (UID: \"e9f3c371-c186-4cce-ade4-37549aa5ab46\") " pod="calico-system/calico-node-n9trv"
Sep 4 20:29:00.339909 kubelet[2523]: I0904 20:29:00.339412 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e9f3c371-c186-4cce-ade4-37549aa5ab46-policysync\") pod \"calico-node-n9trv\" (UID: \"e9f3c371-c186-4cce-ade4-37549aa5ab46\") " pod="calico-system/calico-node-n9trv"
Sep 4 20:29:00.339909 kubelet[2523]: I0904 20:29:00.339435 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9f3c371-c186-4cce-ade4-37549aa5ab46-tigera-ca-bundle\") pod \"calico-node-n9trv\" (UID: \"e9f3c371-c186-4cce-ade4-37549aa5ab46\") " pod="calico-system/calico-node-n9trv"
Sep 4 20:29:00.339909 kubelet[2523]: I0904 20:29:00.339454 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e9f3c371-c186-4cce-ade4-37549aa5ab46-node-certs\") pod \"calico-node-n9trv\" (UID: \"e9f3c371-c186-4cce-ade4-37549aa5ab46\") " pod="calico-system/calico-node-n9trv"
Sep 4 20:29:00.340119 kubelet[2523]: I0904 20:29:00.339474 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/19278f8b-d3ea-467e-a88b-64888b0edecc-socket-dir\") pod \"csi-node-driver-wkgcv\" (UID: \"19278f8b-d3ea-467e-a88b-64888b0edecc\") " pod="calico-system/csi-node-driver-wkgcv"
Sep 4 20:29:00.340119 kubelet[2523]: I0904 20:29:00.339510 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e9f3c371-c186-4cce-ade4-37549aa5ab46-xtables-lock\") pod \"calico-node-n9trv\" (UID: \"e9f3c371-c186-4cce-ade4-37549aa5ab46\") " pod="calico-system/calico-node-n9trv"
Sep 4 20:29:00.340119 kubelet[2523]: I0904 20:29:00.339531 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e9f3c371-c186-4cce-ade4-37549aa5ab46-cni-net-dir\") pod \"calico-node-n9trv\" (UID: \"e9f3c371-c186-4cce-ade4-37549aa5ab46\") " pod="calico-system/calico-node-n9trv"
Sep 4 20:29:00.340119 kubelet[2523]: I0904 20:29:00.339555 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e9f3c371-c186-4cce-ade4-37549aa5ab46-var-lib-calico\") pod \"calico-node-n9trv\" (UID: \"e9f3c371-c186-4cce-ade4-37549aa5ab46\") " pod="calico-system/calico-node-n9trv"
Sep 4 20:29:00.340119 kubelet[2523]: I0904 20:29:00.339606 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76v4k\" (UniqueName: \"kubernetes.io/projected/19278f8b-d3ea-467e-a88b-64888b0edecc-kube-api-access-76v4k\") pod \"csi-node-driver-wkgcv\" (UID: \"19278f8b-d3ea-467e-a88b-64888b0edecc\") " pod="calico-system/csi-node-driver-wkgcv"
Sep 4 20:29:00.340313 kubelet[2523]: I0904 20:29:00.339660 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e9f3c371-c186-4cce-ade4-37549aa5ab46-cni-bin-dir\") pod \"calico-node-n9trv\" (UID: \"e9f3c371-c186-4cce-ade4-37549aa5ab46\") " pod="calico-system/calico-node-n9trv"
Sep 4 20:29:00.340313 kubelet[2523]: I0904 20:29:00.339682 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e9f3c371-c186-4cce-ade4-37549aa5ab46-flexvol-driver-host\") pod \"calico-node-n9trv\" (UID: \"e9f3c371-c186-4cce-ade4-37549aa5ab46\") " pod="calico-system/calico-node-n9trv"
Sep 4 20:29:00.399473 kubelet[2523]: E0904 20:29:00.399421 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 4 20:29:00.401111 containerd[1468]: time="2024-09-04T20:29:00.400334287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-b948c989c-c2ffw,Uid:8f03d54b-2000-4e9c-9ab1-f1c74ce0e35a,Namespace:calico-system,Attempt:0,}"
Sep 4 20:29:00.448324 kubelet[2523]: E0904 20:29:00.447627 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 20:29:00.448324 kubelet[2523]: W0904 20:29:00.447667 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 20:29:00.448324 kubelet[2523]: E0904 20:29:00.447736 2523 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 20:29:00.449566 kubelet[2523]: E0904 20:29:00.449526 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 20:29:00.449759 kubelet[2523]: W0904 20:29:00.449545 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 20:29:00.450230 kubelet[2523]: E0904 20:29:00.449821 2523 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 20:29:00.454379 kubelet[2523]: E0904 20:29:00.454345 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 20:29:00.454560 kubelet[2523]: W0904 20:29:00.454527 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 20:29:00.457085 kubelet[2523]: E0904 20:29:00.455189 2523 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 20:29:00.457320 kubelet[2523]: E0904 20:29:00.457301 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 20:29:00.457469 kubelet[2523]: W0904 20:29:00.457379 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 20:29:00.457469 kubelet[2523]: E0904 20:29:00.457417 2523 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 20:29:00.457739 containerd[1468]: time="2024-09-04T20:29:00.457474622Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 20:29:00.457739 containerd[1468]: time="2024-09-04T20:29:00.457547721Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 20:29:00.459098 containerd[1468]: time="2024-09-04T20:29:00.457571576Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 20:29:00.459098 containerd[1468]: time="2024-09-04T20:29:00.457587107Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 20:29:00.459401 kubelet[2523]: E0904 20:29:00.459382 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 20:29:00.459509 kubelet[2523]: W0904 20:29:00.459491 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 20:29:00.459588 kubelet[2523]: E0904 20:29:00.459579 2523 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 20:29:00.460247 kubelet[2523]: E0904 20:29:00.460226 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 20:29:00.460404 kubelet[2523]: W0904 20:29:00.460385 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 20:29:00.461052 kubelet[2523]: E0904 20:29:00.461031 2523 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Sep 4 20:29:00.462811 kubelet[2523]: E0904 20:29:00.462795 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:29:00.462991 kubelet[2523]: W0904 20:29:00.462892 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:29:00.462991 kubelet[2523]: E0904 20:29:00.462919 2523 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 20:29:00.464366 kubelet[2523]: E0904 20:29:00.464160 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:29:00.464366 kubelet[2523]: W0904 20:29:00.464175 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:29:00.464366 kubelet[2523]: E0904 20:29:00.464193 2523 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 20:29:00.516305 kubelet[2523]: E0904 20:29:00.516252 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:29:00.517271 kubelet[2523]: W0904 20:29:00.516286 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:29:00.517271 kubelet[2523]: E0904 20:29:00.516601 2523 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 20:29:00.518424 kubelet[2523]: E0904 20:29:00.518297 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:29:00.518424 kubelet[2523]: W0904 20:29:00.518416 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:29:00.518595 kubelet[2523]: E0904 20:29:00.518441 2523 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 20:29:00.519416 systemd[1]: Started cri-containerd-a50fa2171104691e0092058c4c0d2edf6be5fdc2a6d730655585eb4de38ab620.scope - libcontainer container a50fa2171104691e0092058c4c0d2edf6be5fdc2a6d730655585eb4de38ab620. 
Sep 4 20:29:00.523960 kubelet[2523]: E0904 20:29:00.523258 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 4 20:29:00.527105 containerd[1468]: time="2024-09-04T20:29:00.525718353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-n9trv,Uid:e9f3c371-c186-4cce-ade4-37549aa5ab46,Namespace:calico-system,Attempt:0,}" Sep 4 20:29:00.590874 containerd[1468]: time="2024-09-04T20:29:00.589664783Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 20:29:00.590874 containerd[1468]: time="2024-09-04T20:29:00.590023164Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 20:29:00.590874 containerd[1468]: time="2024-09-04T20:29:00.590119633Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 20:29:00.590874 containerd[1468]: time="2024-09-04T20:29:00.590152468Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 20:29:00.629398 systemd[1]: Started cri-containerd-7432b0e4682befc7e1ecd8b39cde399f994b33f91585620433433622071fc475.scope - libcontainer container 7432b0e4682befc7e1ecd8b39cde399f994b33f91585620433433622071fc475. 
Sep 4 20:29:00.710149 containerd[1468]: time="2024-09-04T20:29:00.708997493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-b948c989c-c2ffw,Uid:8f03d54b-2000-4e9c-9ab1-f1c74ce0e35a,Namespace:calico-system,Attempt:0,} returns sandbox id \"a50fa2171104691e0092058c4c0d2edf6be5fdc2a6d730655585eb4de38ab620\"" Sep 4 20:29:00.717128 kubelet[2523]: E0904 20:29:00.716257 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 4 20:29:00.720464 containerd[1468]: time="2024-09-04T20:29:00.720410894Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\"" Sep 4 20:29:00.725204 containerd[1468]: time="2024-09-04T20:29:00.724887276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-n9trv,Uid:e9f3c371-c186-4cce-ade4-37549aa5ab46,Namespace:calico-system,Attempt:0,} returns sandbox id \"7432b0e4682befc7e1ecd8b39cde399f994b33f91585620433433622071fc475\"" Sep 4 20:29:00.728823 kubelet[2523]: E0904 20:29:00.728716 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 4 20:29:01.373163 systemd[1]: run-containerd-runc-k8s.io-a50fa2171104691e0092058c4c0d2edf6be5fdc2a6d730655585eb4de38ab620-runc.jrV6vR.mount: Deactivated successfully. 
Sep 4 20:29:02.400007 kubelet[2523]: E0904 20:29:02.399945 2523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wkgcv" podUID="19278f8b-d3ea-467e-a88b-64888b0edecc" Sep 4 20:29:03.398920 containerd[1468]: time="2024-09-04T20:29:03.398693791Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 20:29:03.402772 containerd[1468]: time="2024-09-04T20:29:03.402423381Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.1: active requests=0, bytes read=29471335" Sep 4 20:29:03.403839 containerd[1468]: time="2024-09-04T20:29:03.403710740Z" level=info msg="ImageCreate event name:\"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 20:29:03.409541 containerd[1468]: time="2024-09-04T20:29:03.409486320Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 20:29:03.415842 containerd[1468]: time="2024-09-04T20:29:03.415771969Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.1\" with image id \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\", size \"30963728\" in 2.695312986s" Sep 4 20:29:03.416173 containerd[1468]: time="2024-09-04T20:29:03.416019448Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\" returns image reference 
\"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\"" Sep 4 20:29:03.424106 containerd[1468]: time="2024-09-04T20:29:03.423928392Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\"" Sep 4 20:29:03.441924 containerd[1468]: time="2024-09-04T20:29:03.441599430Z" level=info msg="CreateContainer within sandbox \"a50fa2171104691e0092058c4c0d2edf6be5fdc2a6d730655585eb4de38ab620\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 4 20:29:03.493151 containerd[1468]: time="2024-09-04T20:29:03.493006649Z" level=info msg="CreateContainer within sandbox \"a50fa2171104691e0092058c4c0d2edf6be5fdc2a6d730655585eb4de38ab620\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"26e2dc068e778749f8fb62ec47fcd8b93803b75894d5ddfe03734fc485f9642f\"" Sep 4 20:29:03.495643 containerd[1468]: time="2024-09-04T20:29:03.495578024Z" level=info msg="StartContainer for \"26e2dc068e778749f8fb62ec47fcd8b93803b75894d5ddfe03734fc485f9642f\"" Sep 4 20:29:03.567444 systemd[1]: Started cri-containerd-26e2dc068e778749f8fb62ec47fcd8b93803b75894d5ddfe03734fc485f9642f.scope - libcontainer container 26e2dc068e778749f8fb62ec47fcd8b93803b75894d5ddfe03734fc485f9642f. 
Sep 4 20:29:03.729898 containerd[1468]: time="2024-09-04T20:29:03.729815422Z" level=info msg="StartContainer for \"26e2dc068e778749f8fb62ec47fcd8b93803b75894d5ddfe03734fc485f9642f\" returns successfully" Sep 4 20:29:04.357679 kubelet[2523]: E0904 20:29:04.357334 2523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wkgcv" podUID="19278f8b-d3ea-467e-a88b-64888b0edecc" Sep 4 20:29:04.492633 kubelet[2523]: E0904 20:29:04.490650 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 4 20:29:04.512859 kubelet[2523]: I0904 20:29:04.512473 2523 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-b948c989c-c2ffw" podStartSLOduration=1.812628531 podCreationTimestamp="2024-09-04 20:29:00 +0000 UTC" firstStartedPulling="2024-09-04 20:29:00.718794328 +0000 UTC m=+21.561776407" lastFinishedPulling="2024-09-04 20:29:03.418589386 +0000 UTC m=+24.261571446" observedRunningTime="2024-09-04 20:29:04.510927825 +0000 UTC m=+25.353909910" watchObservedRunningTime="2024-09-04 20:29:04.51242357 +0000 UTC m=+25.355405670" Sep 4 20:29:04.585112 kubelet[2523]: E0904 20:29:04.585005 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:29:04.585112 kubelet[2523]: W0904 20:29:04.585115 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:29:04.585367 kubelet[2523]: E0904 20:29:04.585154 2523 plugins.go:723] "Error dynamically probing plugins" err="error creating 
Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 20:29:04.586237 kubelet[2523]: E0904 20:29:04.586205 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:29:04.586237 kubelet[2523]: W0904 20:29:04.586236 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:29:04.586505 kubelet[2523]: E0904 20:29:04.586285 2523 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 20:29:04.586683 kubelet[2523]: E0904 20:29:04.586663 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:29:04.586683 kubelet[2523]: W0904 20:29:04.586680 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:29:04.586858 kubelet[2523]: E0904 20:29:04.586700 2523 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 20:29:04.586956 kubelet[2523]: E0904 20:29:04.586938 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:29:04.587019 kubelet[2523]: W0904 20:29:04.586950 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:29:04.587742 kubelet[2523]: E0904 20:29:04.587713 2523 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 20:29:04.588035 kubelet[2523]: E0904 20:29:04.588015 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:29:04.588035 kubelet[2523]: W0904 20:29:04.588034 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:29:04.588236 kubelet[2523]: E0904 20:29:04.588080 2523 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 20:29:04.588333 kubelet[2523]: E0904 20:29:04.588311 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:29:04.588333 kubelet[2523]: W0904 20:29:04.588322 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:29:04.588498 kubelet[2523]: E0904 20:29:04.588339 2523 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 20:29:04.588974 kubelet[2523]: E0904 20:29:04.588952 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:29:04.588974 kubelet[2523]: W0904 20:29:04.588971 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:29:04.589393 kubelet[2523]: E0904 20:29:04.588992 2523 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 20:29:04.589393 kubelet[2523]: E0904 20:29:04.589363 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:29:04.589393 kubelet[2523]: W0904 20:29:04.589376 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:29:04.589393 kubelet[2523]: E0904 20:29:04.589394 2523 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 20:29:04.590582 kubelet[2523]: E0904 20:29:04.590269 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:29:04.590582 kubelet[2523]: W0904 20:29:04.590291 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:29:04.590582 kubelet[2523]: E0904 20:29:04.590314 2523 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 20:29:04.591041 kubelet[2523]: E0904 20:29:04.590883 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:29:04.591041 kubelet[2523]: W0904 20:29:04.590900 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:29:04.591041 kubelet[2523]: E0904 20:29:04.590921 2523 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 20:29:04.591309 kubelet[2523]: E0904 20:29:04.591294 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:29:04.591461 kubelet[2523]: W0904 20:29:04.591380 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:29:04.591461 kubelet[2523]: E0904 20:29:04.591404 2523 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 20:29:04.591936 kubelet[2523]: E0904 20:29:04.591812 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:29:04.591936 kubelet[2523]: W0904 20:29:04.591826 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:29:04.591936 kubelet[2523]: E0904 20:29:04.591845 2523 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 20:29:04.592624 kubelet[2523]: E0904 20:29:04.592467 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:29:04.592624 kubelet[2523]: W0904 20:29:04.592483 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:29:04.592624 kubelet[2523]: E0904 20:29:04.592503 2523 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 20:29:04.593676 kubelet[2523]: E0904 20:29:04.593563 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:29:04.593676 kubelet[2523]: W0904 20:29:04.593582 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:29:04.593676 kubelet[2523]: E0904 20:29:04.593608 2523 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 20:29:04.594241 kubelet[2523]: E0904 20:29:04.594140 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:29:04.594241 kubelet[2523]: W0904 20:29:04.594156 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:29:04.594241 kubelet[2523]: E0904 20:29:04.594174 2523 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 20:29:04.680832 kubelet[2523]: E0904 20:29:04.680413 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:29:04.680832 kubelet[2523]: W0904 20:29:04.680468 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:29:04.680832 kubelet[2523]: E0904 20:29:04.680495 2523 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 20:29:04.684152 kubelet[2523]: E0904 20:29:04.683434 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:29:04.684152 kubelet[2523]: W0904 20:29:04.683494 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:29:04.684152 kubelet[2523]: E0904 20:29:04.684109 2523 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 20:29:04.684963 kubelet[2523]: E0904 20:29:04.684548 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:29:04.684963 kubelet[2523]: W0904 20:29:04.684630 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:29:04.684963 kubelet[2523]: E0904 20:29:04.684651 2523 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 20:29:04.685670 kubelet[2523]: E0904 20:29:04.685541 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:29:04.685670 kubelet[2523]: W0904 20:29:04.685554 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:29:04.685670 kubelet[2523]: E0904 20:29:04.685574 2523 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 20:29:04.687821 kubelet[2523]: E0904 20:29:04.687479 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:29:04.687821 kubelet[2523]: W0904 20:29:04.687496 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:29:04.687821 kubelet[2523]: E0904 20:29:04.687652 2523 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 20:29:04.688262 kubelet[2523]: E0904 20:29:04.688188 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:29:04.688262 kubelet[2523]: W0904 20:29:04.688200 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:29:04.688522 kubelet[2523]: E0904 20:29:04.688353 2523 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 20:29:04.689001 kubelet[2523]: E0904 20:29:04.688985 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:29:04.689190 kubelet[2523]: W0904 20:29:04.689096 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:29:04.689466 kubelet[2523]: E0904 20:29:04.689357 2523 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 20:29:04.689809 kubelet[2523]: E0904 20:29:04.689779 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:29:04.689809 kubelet[2523]: W0904 20:29:04.689794 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:29:04.691736 kubelet[2523]: E0904 20:29:04.691137 2523 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 20:29:04.692443 kubelet[2523]: E0904 20:29:04.692425 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:29:04.692555 kubelet[2523]: W0904 20:29:04.692542 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:29:04.693083 kubelet[2523]: E0904 20:29:04.692835 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:29:04.693083 kubelet[2523]: W0904 20:29:04.692847 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:29:04.694084 kubelet[2523]: E0904 20:29:04.693528 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:29:04.694084 kubelet[2523]: W0904 20:29:04.693580 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:29:04.694084 kubelet[2523]: E0904 20:29:04.693603 2523 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 20:29:04.694084 kubelet[2523]: E0904 20:29:04.693642 2523 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 20:29:04.694541 kubelet[2523]: E0904 20:29:04.694506 2523 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 20:29:04.695535 kubelet[2523]: E0904 20:29:04.695439 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:29:04.695795 kubelet[2523]: W0904 20:29:04.695695 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:29:04.696472 kubelet[2523]: E0904 20:29:04.696375 2523 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 20:29:04.697574 kubelet[2523]: E0904 20:29:04.697393 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:29:04.698199 kubelet[2523]: W0904 20:29:04.697831 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:29:04.698481 kubelet[2523]: E0904 20:29:04.698392 2523 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 20:29:04.699866 kubelet[2523]: E0904 20:29:04.698943 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:29:04.699866 kubelet[2523]: W0904 20:29:04.698957 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:29:04.699866 kubelet[2523]: E0904 20:29:04.698976 2523 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 20:29:04.700922 kubelet[2523]: E0904 20:29:04.700900 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:29:04.701103 kubelet[2523]: W0904 20:29:04.701088 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:29:04.701667 kubelet[2523]: E0904 20:29:04.701547 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:29:04.701869 kubelet[2523]: W0904 20:29:04.701849 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:29:04.701964 kubelet[2523]: E0904 20:29:04.701951 2523 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 20:29:04.702814 kubelet[2523]: E0904 20:29:04.702734 2523 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 20:29:04.703998 kubelet[2523]: E0904 20:29:04.703330 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:29:04.703998 kubelet[2523]: W0904 20:29:04.703347 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:29:04.703998 kubelet[2523]: E0904 20:29:04.703364 2523 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 20:29:04.712445 kubelet[2523]: E0904 20:29:04.711351 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:29:04.712445 kubelet[2523]: W0904 20:29:04.711381 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:29:04.712445 kubelet[2523]: E0904 20:29:04.711428 2523 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 20:29:04.904841 containerd[1468]: time="2024-09-04T20:29:04.904761632Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 20:29:04.907224 containerd[1468]: time="2024-09-04T20:29:04.907133696Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1: active requests=0, bytes read=5141007" Sep 4 20:29:04.908104 containerd[1468]: time="2024-09-04T20:29:04.907987712Z" level=info msg="ImageCreate event name:\"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 20:29:04.911458 containerd[1468]: time="2024-09-04T20:29:04.911386625Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 20:29:04.912859 containerd[1468]: time="2024-09-04T20:29:04.912687860Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" with image id \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\", size \"6633368\" in 1.488704311s" Sep 4 20:29:04.912859 containerd[1468]: time="2024-09-04T20:29:04.912749781Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" returns image reference \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\"" Sep 4 20:29:04.918276 containerd[1468]: time="2024-09-04T20:29:04.918178145Z" level=info msg="CreateContainer within sandbox \"7432b0e4682befc7e1ecd8b39cde399f994b33f91585620433433622071fc475\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 4 20:29:04.948379 containerd[1468]: time="2024-09-04T20:29:04.948105639Z" level=info msg="CreateContainer within sandbox \"7432b0e4682befc7e1ecd8b39cde399f994b33f91585620433433622071fc475\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"adc93be8bed5c7ac2b27f63d283493c7d69b5b73f1facace64694c0622623627\"" Sep 4 20:29:04.948679 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2241764352.mount: Deactivated successfully. Sep 4 20:29:04.956110 containerd[1468]: time="2024-09-04T20:29:04.953120535Z" level=info msg="StartContainer for \"adc93be8bed5c7ac2b27f63d283493c7d69b5b73f1facace64694c0622623627\"" Sep 4 20:29:05.024356 systemd[1]: Started cri-containerd-adc93be8bed5c7ac2b27f63d283493c7d69b5b73f1facace64694c0622623627.scope - libcontainer container adc93be8bed5c7ac2b27f63d283493c7d69b5b73f1facace64694c0622623627. Sep 4 20:29:05.068404 containerd[1468]: time="2024-09-04T20:29:05.068351066Z" level=info msg="StartContainer for \"adc93be8bed5c7ac2b27f63d283493c7d69b5b73f1facace64694c0622623627\" returns successfully" Sep 4 20:29:05.097228 systemd[1]: cri-containerd-adc93be8bed5c7ac2b27f63d283493c7d69b5b73f1facace64694c0622623627.scope: Deactivated successfully. Sep 4 20:29:05.137579 containerd[1468]: time="2024-09-04T20:29:05.137472593Z" level=info msg="shim disconnected" id=adc93be8bed5c7ac2b27f63d283493c7d69b5b73f1facace64694c0622623627 namespace=k8s.io Sep 4 20:29:05.138343 containerd[1468]: time="2024-09-04T20:29:05.138013222Z" level=warning msg="cleaning up after shim disconnected" id=adc93be8bed5c7ac2b27f63d283493c7d69b5b73f1facace64694c0622623627 namespace=k8s.io Sep 4 20:29:05.138343 containerd[1468]: time="2024-09-04T20:29:05.138046663Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 20:29:05.433279 systemd[1]: run-containerd-runc-k8s.io-adc93be8bed5c7ac2b27f63d283493c7d69b5b73f1facace64694c0622623627-runc.quOqyT.mount: Deactivated successfully. 
Sep 4 20:29:05.433442 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-adc93be8bed5c7ac2b27f63d283493c7d69b5b73f1facace64694c0622623627-rootfs.mount: Deactivated successfully. Sep 4 20:29:05.494816 kubelet[2523]: E0904 20:29:05.494770 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 4 20:29:05.498323 containerd[1468]: time="2024-09-04T20:29:05.497565256Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\"" Sep 4 20:29:05.498878 kubelet[2523]: I0904 20:29:05.498840 2523 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 20:29:05.500353 kubelet[2523]: E0904 20:29:05.500147 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 4 20:29:06.356839 kubelet[2523]: E0904 20:29:06.356724 2523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wkgcv" podUID="19278f8b-d3ea-467e-a88b-64888b0edecc" Sep 4 20:29:08.356553 kubelet[2523]: E0904 20:29:08.356436 2523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wkgcv" podUID="19278f8b-d3ea-467e-a88b-64888b0edecc" Sep 4 20:29:09.785901 containerd[1468]: time="2024-09-04T20:29:09.785813008Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 20:29:09.787992 containerd[1468]: 
time="2024-09-04T20:29:09.787889031Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.1: active requests=0, bytes read=93083736" Sep 4 20:29:09.790048 containerd[1468]: time="2024-09-04T20:29:09.789782860Z" level=info msg="ImageCreate event name:\"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 20:29:09.793856 containerd[1468]: time="2024-09-04T20:29:09.793797941Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 20:29:09.796137 containerd[1468]: time="2024-09-04T20:29:09.795626663Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.1\" with image id \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\", size \"94576137\" in 4.29801258s" Sep 4 20:29:09.796137 containerd[1468]: time="2024-09-04T20:29:09.795726027Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\" returns image reference \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\"" Sep 4 20:29:09.798985 containerd[1468]: time="2024-09-04T20:29:09.798830084Z" level=info msg="CreateContainer within sandbox \"7432b0e4682befc7e1ecd8b39cde399f994b33f91585620433433622071fc475\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 4 20:29:09.850848 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount480816827.mount: Deactivated successfully. 
Sep 4 20:29:09.888871 containerd[1468]: time="2024-09-04T20:29:09.874201663Z" level=info msg="CreateContainer within sandbox \"7432b0e4682befc7e1ecd8b39cde399f994b33f91585620433433622071fc475\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"3a7ca688a52696d3432fc41db8e2f1cf1b7ec768403aa330561b43cf4000d339\"" Sep 4 20:29:09.891364 containerd[1468]: time="2024-09-04T20:29:09.889379477Z" level=info msg="StartContainer for \"3a7ca688a52696d3432fc41db8e2f1cf1b7ec768403aa330561b43cf4000d339\"" Sep 4 20:29:10.056305 systemd[1]: Started cri-containerd-3a7ca688a52696d3432fc41db8e2f1cf1b7ec768403aa330561b43cf4000d339.scope - libcontainer container 3a7ca688a52696d3432fc41db8e2f1cf1b7ec768403aa330561b43cf4000d339. Sep 4 20:29:10.133338 containerd[1468]: time="2024-09-04T20:29:10.133250103Z" level=info msg="StartContainer for \"3a7ca688a52696d3432fc41db8e2f1cf1b7ec768403aa330561b43cf4000d339\" returns successfully" Sep 4 20:29:10.228350 kubelet[2523]: I0904 20:29:10.228282 2523 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 20:29:10.233370 kubelet[2523]: E0904 20:29:10.231814 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 4 20:29:10.358668 kubelet[2523]: E0904 20:29:10.357216 2523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wkgcv" podUID="19278f8b-d3ea-467e-a88b-64888b0edecc" Sep 4 20:29:10.521097 kubelet[2523]: E0904 20:29:10.520565 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 4 20:29:10.521097 kubelet[2523]: 
E0904 20:29:10.520699 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 4 20:29:10.788748 systemd[1]: cri-containerd-3a7ca688a52696d3432fc41db8e2f1cf1b7ec768403aa330561b43cf4000d339.scope: Deactivated successfully. Sep 4 20:29:10.831920 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3a7ca688a52696d3432fc41db8e2f1cf1b7ec768403aa330561b43cf4000d339-rootfs.mount: Deactivated successfully. Sep 4 20:29:10.835950 containerd[1468]: time="2024-09-04T20:29:10.835870474Z" level=info msg="shim disconnected" id=3a7ca688a52696d3432fc41db8e2f1cf1b7ec768403aa330561b43cf4000d339 namespace=k8s.io Sep 4 20:29:10.835950 containerd[1468]: time="2024-09-04T20:29:10.835943089Z" level=warning msg="cleaning up after shim disconnected" id=3a7ca688a52696d3432fc41db8e2f1cf1b7ec768403aa330561b43cf4000d339 namespace=k8s.io Sep 4 20:29:10.835950 containerd[1468]: time="2024-09-04T20:29:10.835957575Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 20:29:10.864955 kubelet[2523]: I0904 20:29:10.864701 2523 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Sep 4 20:29:10.899144 kubelet[2523]: I0904 20:29:10.897021 2523 topology_manager.go:215] "Topology Admit Handler" podUID="f15b71f8-033b-4890-a1b4-2a97c47ca461" podNamespace="kube-system" podName="coredns-5dd5756b68-966rt" Sep 4 20:29:10.906930 kubelet[2523]: I0904 20:29:10.905003 2523 topology_manager.go:215] "Topology Admit Handler" podUID="e4b0c9ae-e6bb-4436-8d94-867c1964daae" podNamespace="calico-system" podName="calico-kube-controllers-d85dc74c8-bfxxg" Sep 4 20:29:10.910132 kubelet[2523]: I0904 20:29:10.910094 2523 topology_manager.go:215] "Topology Admit Handler" podUID="f347db77-68c1-4005-a444-424ceab37966" podNamespace="kube-system" podName="coredns-5dd5756b68-fk4tb" Sep 4 20:29:10.917279 systemd[1]: Created slice 
kubepods-burstable-podf15b71f8_033b_4890_a1b4_2a97c47ca461.slice - libcontainer container kubepods-burstable-podf15b71f8_033b_4890_a1b4_2a97c47ca461.slice. Sep 4 20:29:10.929247 systemd[1]: Created slice kubepods-burstable-podf347db77_68c1_4005_a444_424ceab37966.slice - libcontainer container kubepods-burstable-podf347db77_68c1_4005_a444_424ceab37966.slice. Sep 4 20:29:10.949671 systemd[1]: Created slice kubepods-besteffort-pode4b0c9ae_e6bb_4436_8d94_867c1964daae.slice - libcontainer container kubepods-besteffort-pode4b0c9ae_e6bb_4436_8d94_867c1964daae.slice. Sep 4 20:29:11.042535 kubelet[2523]: I0904 20:29:11.042184 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f347db77-68c1-4005-a444-424ceab37966-config-volume\") pod \"coredns-5dd5756b68-fk4tb\" (UID: \"f347db77-68c1-4005-a444-424ceab37966\") " pod="kube-system/coredns-5dd5756b68-fk4tb" Sep 4 20:29:11.042535 kubelet[2523]: I0904 20:29:11.042278 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dv5s4\" (UniqueName: \"kubernetes.io/projected/f15b71f8-033b-4890-a1b4-2a97c47ca461-kube-api-access-dv5s4\") pod \"coredns-5dd5756b68-966rt\" (UID: \"f15b71f8-033b-4890-a1b4-2a97c47ca461\") " pod="kube-system/coredns-5dd5756b68-966rt" Sep 4 20:29:11.042535 kubelet[2523]: I0904 20:29:11.042304 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfcv8\" (UniqueName: \"kubernetes.io/projected/f347db77-68c1-4005-a444-424ceab37966-kube-api-access-kfcv8\") pod \"coredns-5dd5756b68-fk4tb\" (UID: \"f347db77-68c1-4005-a444-424ceab37966\") " pod="kube-system/coredns-5dd5756b68-fk4tb" Sep 4 20:29:11.042535 kubelet[2523]: I0904 20:29:11.042333 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66qqr\" (UniqueName: 
\"kubernetes.io/projected/e4b0c9ae-e6bb-4436-8d94-867c1964daae-kube-api-access-66qqr\") pod \"calico-kube-controllers-d85dc74c8-bfxxg\" (UID: \"e4b0c9ae-e6bb-4436-8d94-867c1964daae\") " pod="calico-system/calico-kube-controllers-d85dc74c8-bfxxg" Sep 4 20:29:11.042535 kubelet[2523]: I0904 20:29:11.042441 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f15b71f8-033b-4890-a1b4-2a97c47ca461-config-volume\") pod \"coredns-5dd5756b68-966rt\" (UID: \"f15b71f8-033b-4890-a1b4-2a97c47ca461\") " pod="kube-system/coredns-5dd5756b68-966rt" Sep 4 20:29:11.043221 kubelet[2523]: I0904 20:29:11.042507 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e4b0c9ae-e6bb-4436-8d94-867c1964daae-tigera-ca-bundle\") pod \"calico-kube-controllers-d85dc74c8-bfxxg\" (UID: \"e4b0c9ae-e6bb-4436-8d94-867c1964daae\") " pod="calico-system/calico-kube-controllers-d85dc74c8-bfxxg" Sep 4 20:29:11.225811 kubelet[2523]: E0904 20:29:11.225767 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 4 20:29:11.227597 containerd[1468]: time="2024-09-04T20:29:11.227257214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-966rt,Uid:f15b71f8-033b-4890-a1b4-2a97c47ca461,Namespace:kube-system,Attempt:0,}" Sep 4 20:29:11.242640 kubelet[2523]: E0904 20:29:11.242588 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 4 20:29:11.245832 containerd[1468]: time="2024-09-04T20:29:11.245766488Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-5dd5756b68-fk4tb,Uid:f347db77-68c1-4005-a444-424ceab37966,Namespace:kube-system,Attempt:0,}" Sep 4 20:29:11.258115 containerd[1468]: time="2024-09-04T20:29:11.257301236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d85dc74c8-bfxxg,Uid:e4b0c9ae-e6bb-4436-8d94-867c1964daae,Namespace:calico-system,Attempt:0,}" Sep 4 20:29:11.477641 containerd[1468]: time="2024-09-04T20:29:11.477527707Z" level=error msg="Failed to destroy network for sandbox \"325efa9ce30fd3869d142cc34879fb6ef9d38b4f2e106555f839fca57f5b4a31\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 20:29:11.479970 containerd[1468]: time="2024-09-04T20:29:11.479900057Z" level=error msg="Failed to destroy network for sandbox \"28b8d38c9da2f890fe1c6b71d6e31746a3708691e1e16dd95d179008d2b85049\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 20:29:11.484163 containerd[1468]: time="2024-09-04T20:29:11.484038367Z" level=error msg="encountered an error cleaning up failed sandbox \"325efa9ce30fd3869d142cc34879fb6ef9d38b4f2e106555f839fca57f5b4a31\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 20:29:11.484325 containerd[1468]: time="2024-09-04T20:29:11.484206897Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-966rt,Uid:f15b71f8-033b-4890-a1b4-2a97c47ca461,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"325efa9ce30fd3869d142cc34879fb6ef9d38b4f2e106555f839fca57f5b4a31\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 20:29:11.484325 containerd[1468]: time="2024-09-04T20:29:11.484038617Z" level=error msg="encountered an error cleaning up failed sandbox \"28b8d38c9da2f890fe1c6b71d6e31746a3708691e1e16dd95d179008d2b85049\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 20:29:11.484390 containerd[1468]: time="2024-09-04T20:29:11.484366888Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d85dc74c8-bfxxg,Uid:e4b0c9ae-e6bb-4436-8d94-867c1964daae,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"28b8d38c9da2f890fe1c6b71d6e31746a3708691e1e16dd95d179008d2b85049\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 20:29:11.485037 kubelet[2523]: E0904 20:29:11.484745 2523 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28b8d38c9da2f890fe1c6b71d6e31746a3708691e1e16dd95d179008d2b85049\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 20:29:11.485037 kubelet[2523]: E0904 20:29:11.484829 2523 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28b8d38c9da2f890fe1c6b71d6e31746a3708691e1e16dd95d179008d2b85049\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-d85dc74c8-bfxxg" Sep 4 20:29:11.485037 kubelet[2523]: E0904 20:29:11.484828 2523 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"325efa9ce30fd3869d142cc34879fb6ef9d38b4f2e106555f839fca57f5b4a31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 20:29:11.485037 kubelet[2523]: E0904 20:29:11.484852 2523 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28b8d38c9da2f890fe1c6b71d6e31746a3708691e1e16dd95d179008d2b85049\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-d85dc74c8-bfxxg" Sep 4 20:29:11.487463 kubelet[2523]: E0904 20:29:11.484881 2523 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"325efa9ce30fd3869d142cc34879fb6ef9d38b4f2e106555f839fca57f5b4a31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-966rt" Sep 4 20:29:11.487463 kubelet[2523]: E0904 20:29:11.484910 2523 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"325efa9ce30fd3869d142cc34879fb6ef9d38b4f2e106555f839fca57f5b4a31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="kube-system/coredns-5dd5756b68-966rt" Sep 4 20:29:11.487463 kubelet[2523]: E0904 20:29:11.484948 2523 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-d85dc74c8-bfxxg_calico-system(e4b0c9ae-e6bb-4436-8d94-867c1964daae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-d85dc74c8-bfxxg_calico-system(e4b0c9ae-e6bb-4436-8d94-867c1964daae)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"28b8d38c9da2f890fe1c6b71d6e31746a3708691e1e16dd95d179008d2b85049\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-d85dc74c8-bfxxg" podUID="e4b0c9ae-e6bb-4436-8d94-867c1964daae" Sep 4 20:29:11.487603 kubelet[2523]: E0904 20:29:11.484984 2523 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-966rt_kube-system(f15b71f8-033b-4890-a1b4-2a97c47ca461)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-966rt_kube-system(f15b71f8-033b-4890-a1b4-2a97c47ca461)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"325efa9ce30fd3869d142cc34879fb6ef9d38b4f2e106555f839fca57f5b4a31\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-966rt" podUID="f15b71f8-033b-4890-a1b4-2a97c47ca461" Sep 4 20:29:11.492328 containerd[1468]: time="2024-09-04T20:29:11.492262355Z" level=error msg="Failed to destroy network for sandbox \"c20c5b63e11f42c5f851e1860eff3f0d0e4e32aa5c6cd2761dd97b96c0be2b58\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 20:29:11.492716 containerd[1468]: time="2024-09-04T20:29:11.492685524Z" level=error msg="encountered an error cleaning up failed sandbox \"c20c5b63e11f42c5f851e1860eff3f0d0e4e32aa5c6cd2761dd97b96c0be2b58\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 20:29:11.492768 containerd[1468]: time="2024-09-04T20:29:11.492742229Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-fk4tb,Uid:f347db77-68c1-4005-a444-424ceab37966,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c20c5b63e11f42c5f851e1860eff3f0d0e4e32aa5c6cd2761dd97b96c0be2b58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 20:29:11.493019 kubelet[2523]: E0904 20:29:11.492995 2523 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c20c5b63e11f42c5f851e1860eff3f0d0e4e32aa5c6cd2761dd97b96c0be2b58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 20:29:11.493319 kubelet[2523]: E0904 20:29:11.493198 2523 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c20c5b63e11f42c5f851e1860eff3f0d0e4e32aa5c6cd2761dd97b96c0be2b58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-5dd5756b68-fk4tb" Sep 4 20:29:11.493319 kubelet[2523]: E0904 20:29:11.493231 2523 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c20c5b63e11f42c5f851e1860eff3f0d0e4e32aa5c6cd2761dd97b96c0be2b58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-fk4tb" Sep 4 20:29:11.493319 kubelet[2523]: E0904 20:29:11.493293 2523 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-fk4tb_kube-system(f347db77-68c1-4005-a444-424ceab37966)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-fk4tb_kube-system(f347db77-68c1-4005-a444-424ceab37966)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c20c5b63e11f42c5f851e1860eff3f0d0e4e32aa5c6cd2761dd97b96c0be2b58\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-fk4tb" podUID="f347db77-68c1-4005-a444-424ceab37966" Sep 4 20:29:11.522977 kubelet[2523]: I0904 20:29:11.522929 2523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="325efa9ce30fd3869d142cc34879fb6ef9d38b4f2e106555f839fca57f5b4a31" Sep 4 20:29:11.524496 kubelet[2523]: I0904 20:29:11.524142 2523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28b8d38c9da2f890fe1c6b71d6e31746a3708691e1e16dd95d179008d2b85049" Sep 4 20:29:11.528693 containerd[1468]: time="2024-09-04T20:29:11.528393118Z" level=info msg="StopPodSandbox for \"325efa9ce30fd3869d142cc34879fb6ef9d38b4f2e106555f839fca57f5b4a31\"" Sep 4 20:29:11.530632 containerd[1468]: 
time="2024-09-04T20:29:11.530373936Z" level=info msg="StopPodSandbox for \"28b8d38c9da2f890fe1c6b71d6e31746a3708691e1e16dd95d179008d2b85049\"" Sep 4 20:29:11.534972 containerd[1468]: time="2024-09-04T20:29:11.534762262Z" level=info msg="Ensure that sandbox 325efa9ce30fd3869d142cc34879fb6ef9d38b4f2e106555f839fca57f5b4a31 in task-service has been cleanup successfully" Sep 4 20:29:11.535694 containerd[1468]: time="2024-09-04T20:29:11.535297742Z" level=info msg="Ensure that sandbox 28b8d38c9da2f890fe1c6b71d6e31746a3708691e1e16dd95d179008d2b85049 in task-service has been cleanup successfully" Sep 4 20:29:11.539222 kubelet[2523]: I0904 20:29:11.539188 2523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c20c5b63e11f42c5f851e1860eff3f0d0e4e32aa5c6cd2761dd97b96c0be2b58" Sep 4 20:29:11.541273 containerd[1468]: time="2024-09-04T20:29:11.541214450Z" level=info msg="StopPodSandbox for \"c20c5b63e11f42c5f851e1860eff3f0d0e4e32aa5c6cd2761dd97b96c0be2b58\"" Sep 4 20:29:11.552373 containerd[1468]: time="2024-09-04T20:29:11.551990356Z" level=info msg="Ensure that sandbox c20c5b63e11f42c5f851e1860eff3f0d0e4e32aa5c6cd2761dd97b96c0be2b58 in task-service has been cleanup successfully" Sep 4 20:29:11.555803 kubelet[2523]: E0904 20:29:11.555758 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 4 20:29:11.557536 containerd[1468]: time="2024-09-04T20:29:11.557488113Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\"" Sep 4 20:29:11.632477 containerd[1468]: time="2024-09-04T20:29:11.631333519Z" level=error msg="StopPodSandbox for \"28b8d38c9da2f890fe1c6b71d6e31746a3708691e1e16dd95d179008d2b85049\" failed" error="failed to destroy network for sandbox \"28b8d38c9da2f890fe1c6b71d6e31746a3708691e1e16dd95d179008d2b85049\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 20:29:11.632712 kubelet[2523]: E0904 20:29:11.631759 2523 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"28b8d38c9da2f890fe1c6b71d6e31746a3708691e1e16dd95d179008d2b85049\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="28b8d38c9da2f890fe1c6b71d6e31746a3708691e1e16dd95d179008d2b85049" Sep 4 20:29:11.632712 kubelet[2523]: E0904 20:29:11.631887 2523 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"28b8d38c9da2f890fe1c6b71d6e31746a3708691e1e16dd95d179008d2b85049"} Sep 4 20:29:11.632712 kubelet[2523]: E0904 20:29:11.631954 2523 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e4b0c9ae-e6bb-4436-8d94-867c1964daae\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"28b8d38c9da2f890fe1c6b71d6e31746a3708691e1e16dd95d179008d2b85049\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 20:29:11.632712 kubelet[2523]: E0904 20:29:11.632009 2523 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e4b0c9ae-e6bb-4436-8d94-867c1964daae\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"28b8d38c9da2f890fe1c6b71d6e31746a3708691e1e16dd95d179008d2b85049\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/calico-kube-controllers-d85dc74c8-bfxxg" podUID="e4b0c9ae-e6bb-4436-8d94-867c1964daae" Sep 4 20:29:11.635499 containerd[1468]: time="2024-09-04T20:29:11.634613577Z" level=error msg="StopPodSandbox for \"325efa9ce30fd3869d142cc34879fb6ef9d38b4f2e106555f839fca57f5b4a31\" failed" error="failed to destroy network for sandbox \"325efa9ce30fd3869d142cc34879fb6ef9d38b4f2e106555f839fca57f5b4a31\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 20:29:11.635690 kubelet[2523]: E0904 20:29:11.635172 2523 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"325efa9ce30fd3869d142cc34879fb6ef9d38b4f2e106555f839fca57f5b4a31\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="325efa9ce30fd3869d142cc34879fb6ef9d38b4f2e106555f839fca57f5b4a31" Sep 4 20:29:11.635690 kubelet[2523]: E0904 20:29:11.635222 2523 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"325efa9ce30fd3869d142cc34879fb6ef9d38b4f2e106555f839fca57f5b4a31"} Sep 4 20:29:11.636487 kubelet[2523]: E0904 20:29:11.636164 2523 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f15b71f8-033b-4890-a1b4-2a97c47ca461\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"325efa9ce30fd3869d142cc34879fb6ef9d38b4f2e106555f839fca57f5b4a31\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 20:29:11.636487 kubelet[2523]: E0904 20:29:11.636312 2523 
pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f15b71f8-033b-4890-a1b4-2a97c47ca461\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"325efa9ce30fd3869d142cc34879fb6ef9d38b4f2e106555f839fca57f5b4a31\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-966rt" podUID="f15b71f8-033b-4890-a1b4-2a97c47ca461" Sep 4 20:29:11.651250 containerd[1468]: time="2024-09-04T20:29:11.651023719Z" level=error msg="StopPodSandbox for \"c20c5b63e11f42c5f851e1860eff3f0d0e4e32aa5c6cd2761dd97b96c0be2b58\" failed" error="failed to destroy network for sandbox \"c20c5b63e11f42c5f851e1860eff3f0d0e4e32aa5c6cd2761dd97b96c0be2b58\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 20:29:11.651576 kubelet[2523]: E0904 20:29:11.651479 2523 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c20c5b63e11f42c5f851e1860eff3f0d0e4e32aa5c6cd2761dd97b96c0be2b58\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c20c5b63e11f42c5f851e1860eff3f0d0e4e32aa5c6cd2761dd97b96c0be2b58" Sep 4 20:29:11.651576 kubelet[2523]: E0904 20:29:11.651525 2523 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c20c5b63e11f42c5f851e1860eff3f0d0e4e32aa5c6cd2761dd97b96c0be2b58"} Sep 4 20:29:11.651576 kubelet[2523]: E0904 20:29:11.651559 2523 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"f347db77-68c1-4005-a444-424ceab37966\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c20c5b63e11f42c5f851e1860eff3f0d0e4e32aa5c6cd2761dd97b96c0be2b58\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 20:29:11.651786 kubelet[2523]: E0904 20:29:11.651589 2523 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f347db77-68c1-4005-a444-424ceab37966\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c20c5b63e11f42c5f851e1860eff3f0d0e4e32aa5c6cd2761dd97b96c0be2b58\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-fk4tb" podUID="f347db77-68c1-4005-a444-424ceab37966" Sep 4 20:29:12.159588 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c20c5b63e11f42c5f851e1860eff3f0d0e4e32aa5c6cd2761dd97b96c0be2b58-shm.mount: Deactivated successfully. Sep 4 20:29:12.159714 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-325efa9ce30fd3869d142cc34879fb6ef9d38b4f2e106555f839fca57f5b4a31-shm.mount: Deactivated successfully. Sep 4 20:29:12.364752 systemd[1]: Created slice kubepods-besteffort-pod19278f8b_d3ea_467e_a88b_64888b0edecc.slice - libcontainer container kubepods-besteffort-pod19278f8b_d3ea_467e_a88b_64888b0edecc.slice. 
Sep 4 20:29:12.370556 containerd[1468]: time="2024-09-04T20:29:12.370048049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wkgcv,Uid:19278f8b-d3ea-467e-a88b-64888b0edecc,Namespace:calico-system,Attempt:0,}" Sep 4 20:29:12.466972 containerd[1468]: time="2024-09-04T20:29:12.465231790Z" level=error msg="Failed to destroy network for sandbox \"a94e4351a8bd5f534df25a54132f3fef85187c884ed48a216fb6a0c4009e2a6f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 20:29:12.468259 containerd[1468]: time="2024-09-04T20:29:12.467633833Z" level=error msg="encountered an error cleaning up failed sandbox \"a94e4351a8bd5f534df25a54132f3fef85187c884ed48a216fb6a0c4009e2a6f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 20:29:12.468259 containerd[1468]: time="2024-09-04T20:29:12.467730055Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wkgcv,Uid:19278f8b-d3ea-467e-a88b-64888b0edecc,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a94e4351a8bd5f534df25a54132f3fef85187c884ed48a216fb6a0c4009e2a6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 20:29:12.468430 kubelet[2523]: E0904 20:29:12.468198 2523 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a94e4351a8bd5f534df25a54132f3fef85187c884ed48a216fb6a0c4009e2a6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Sep 4 20:29:12.468776 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a94e4351a8bd5f534df25a54132f3fef85187c884ed48a216fb6a0c4009e2a6f-shm.mount: Deactivated successfully. Sep 4 20:29:12.470782 kubelet[2523]: E0904 20:29:12.470300 2523 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a94e4351a8bd5f534df25a54132f3fef85187c884ed48a216fb6a0c4009e2a6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wkgcv" Sep 4 20:29:12.470782 kubelet[2523]: E0904 20:29:12.470381 2523 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a94e4351a8bd5f534df25a54132f3fef85187c884ed48a216fb6a0c4009e2a6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wkgcv" Sep 4 20:29:12.470782 kubelet[2523]: E0904 20:29:12.470477 2523 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-wkgcv_calico-system(19278f8b-d3ea-467e-a88b-64888b0edecc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-wkgcv_calico-system(19278f8b-d3ea-467e-a88b-64888b0edecc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a94e4351a8bd5f534df25a54132f3fef85187c884ed48a216fb6a0c4009e2a6f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wkgcv" 
podUID="19278f8b-d3ea-467e-a88b-64888b0edecc" Sep 4 20:29:12.557520 kubelet[2523]: I0904 20:29:12.557478 2523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a94e4351a8bd5f534df25a54132f3fef85187c884ed48a216fb6a0c4009e2a6f" Sep 4 20:29:12.558877 containerd[1468]: time="2024-09-04T20:29:12.558256592Z" level=info msg="StopPodSandbox for \"a94e4351a8bd5f534df25a54132f3fef85187c884ed48a216fb6a0c4009e2a6f\"" Sep 4 20:29:12.558877 containerd[1468]: time="2024-09-04T20:29:12.558565661Z" level=info msg="Ensure that sandbox a94e4351a8bd5f534df25a54132f3fef85187c884ed48a216fb6a0c4009e2a6f in task-service has been cleanup successfully" Sep 4 20:29:12.593826 containerd[1468]: time="2024-09-04T20:29:12.593728566Z" level=error msg="StopPodSandbox for \"a94e4351a8bd5f534df25a54132f3fef85187c884ed48a216fb6a0c4009e2a6f\" failed" error="failed to destroy network for sandbox \"a94e4351a8bd5f534df25a54132f3fef85187c884ed48a216fb6a0c4009e2a6f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 20:29:12.594091 kubelet[2523]: E0904 20:29:12.594004 2523 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a94e4351a8bd5f534df25a54132f3fef85187c884ed48a216fb6a0c4009e2a6f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a94e4351a8bd5f534df25a54132f3fef85187c884ed48a216fb6a0c4009e2a6f" Sep 4 20:29:12.594091 kubelet[2523]: E0904 20:29:12.594050 2523 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a94e4351a8bd5f534df25a54132f3fef85187c884ed48a216fb6a0c4009e2a6f"} Sep 4 20:29:12.594237 kubelet[2523]: E0904 20:29:12.594127 2523 
kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"19278f8b-d3ea-467e-a88b-64888b0edecc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a94e4351a8bd5f534df25a54132f3fef85187c884ed48a216fb6a0c4009e2a6f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 20:29:12.594353 kubelet[2523]: E0904 20:29:12.594288 2523 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"19278f8b-d3ea-467e-a88b-64888b0edecc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a94e4351a8bd5f534df25a54132f3fef85187c884ed48a216fb6a0c4009e2a6f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wkgcv" podUID="19278f8b-d3ea-467e-a88b-64888b0edecc" Sep 4 20:29:17.435842 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount182406230.mount: Deactivated successfully. 
Sep 4 20:29:17.680406 containerd[1468]: time="2024-09-04T20:29:17.606667661Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.1: active requests=0, bytes read=117873564" Sep 4 20:29:17.697695 containerd[1468]: time="2024-09-04T20:29:17.696952151Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.1\" with image id \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\", size \"117873426\" in 6.129466096s" Sep 4 20:29:17.697695 containerd[1468]: time="2024-09-04T20:29:17.697035554Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\" returns image reference \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\"" Sep 4 20:29:17.708972 containerd[1468]: time="2024-09-04T20:29:17.708865841Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 20:29:17.738263 containerd[1468]: time="2024-09-04T20:29:17.738184301Z" level=info msg="ImageCreate event name:\"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 20:29:17.739435 containerd[1468]: time="2024-09-04T20:29:17.738974780Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 20:29:17.813521 containerd[1468]: time="2024-09-04T20:29:17.813458840Z" level=info msg="CreateContainer within sandbox \"7432b0e4682befc7e1ecd8b39cde399f994b33f91585620433433622071fc475\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 4 20:29:17.952133 containerd[1468]: time="2024-09-04T20:29:17.951866678Z" level=info msg="CreateContainer 
within sandbox \"7432b0e4682befc7e1ecd8b39cde399f994b33f91585620433433622071fc475\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"4e070dc6f35004cbbd74365202576a2e31739bc8dc8d7894b55803723cbd0a3d\"" Sep 4 20:29:17.965241 containerd[1468]: time="2024-09-04T20:29:17.965141126Z" level=info msg="StartContainer for \"4e070dc6f35004cbbd74365202576a2e31739bc8dc8d7894b55803723cbd0a3d\"" Sep 4 20:29:18.070377 systemd[1]: Started cri-containerd-4e070dc6f35004cbbd74365202576a2e31739bc8dc8d7894b55803723cbd0a3d.scope - libcontainer container 4e070dc6f35004cbbd74365202576a2e31739bc8dc8d7894b55803723cbd0a3d. Sep 4 20:29:18.147724 containerd[1468]: time="2024-09-04T20:29:18.146628053Z" level=info msg="StartContainer for \"4e070dc6f35004cbbd74365202576a2e31739bc8dc8d7894b55803723cbd0a3d\" returns successfully" Sep 4 20:29:18.284475 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 4 20:29:18.285570 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Sep 4 20:29:18.698300 kubelet[2523]: E0904 20:29:18.698161 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 4 20:29:18.885114 kubelet[2523]: I0904 20:29:18.884987 2523 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-n9trv" podStartSLOduration=1.88329426 podCreationTimestamp="2024-09-04 20:29:00 +0000 UTC" firstStartedPulling="2024-09-04 20:29:00.731645731 +0000 UTC m=+21.574627795" lastFinishedPulling="2024-09-04 20:29:17.703950618 +0000 UTC m=+38.546932730" observedRunningTime="2024-09-04 20:29:18.757819057 +0000 UTC m=+39.600801150" watchObservedRunningTime="2024-09-04 20:29:18.855599195 +0000 UTC m=+39.698581285" Sep 4 20:29:19.713117 kubelet[2523]: E0904 20:29:19.712546 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 4 20:29:19.734374 systemd[1]: run-containerd-runc-k8s.io-4e070dc6f35004cbbd74365202576a2e31739bc8dc8d7894b55803723cbd0a3d-runc.5pS4z1.mount: Deactivated successfully. Sep 4 20:29:20.693116 kernel: bpftool[3661]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 4 20:29:20.704362 kubelet[2523]: E0904 20:29:20.704049 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 4 20:29:21.070584 systemd-networkd[1370]: vxlan.calico: Link UP Sep 4 20:29:21.070598 systemd-networkd[1370]: vxlan.calico: Gained carrier Sep 4 20:29:22.229485 systemd[1]: Started sshd@9-143.198.146.52:22-139.178.68.195:59474.service - OpenSSH per-connection server daemon (139.178.68.195:59474). 
Sep 4 20:29:22.307441 sshd[3758]: Accepted publickey for core from 139.178.68.195 port 59474 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY Sep 4 20:29:22.310354 sshd[3758]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 20:29:22.319237 systemd-logind[1447]: New session 10 of user core. Sep 4 20:29:22.325423 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 4 20:29:22.543033 sshd[3758]: pam_unix(sshd:session): session closed for user core Sep 4 20:29:22.547764 systemd[1]: sshd@9-143.198.146.52:22-139.178.68.195:59474.service: Deactivated successfully. Sep 4 20:29:22.550875 systemd[1]: session-10.scope: Deactivated successfully. Sep 4 20:29:22.552482 systemd-logind[1447]: Session 10 logged out. Waiting for processes to exit. Sep 4 20:29:22.554526 systemd-logind[1447]: Removed session 10. Sep 4 20:29:22.828479 systemd-networkd[1370]: vxlan.calico: Gained IPv6LL Sep 4 20:29:24.368801 containerd[1468]: time="2024-09-04T20:29:24.368680210Z" level=info msg="StopPodSandbox for \"28b8d38c9da2f890fe1c6b71d6e31746a3708691e1e16dd95d179008d2b85049\"" Sep 4 20:29:24.590824 containerd[1468]: 2024-09-04 20:29:24.457 [INFO][3785] k8s.go 608: Cleaning up netns ContainerID="28b8d38c9da2f890fe1c6b71d6e31746a3708691e1e16dd95d179008d2b85049" Sep 4 20:29:24.590824 containerd[1468]: 2024-09-04 20:29:24.460 [INFO][3785] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="28b8d38c9da2f890fe1c6b71d6e31746a3708691e1e16dd95d179008d2b85049" iface="eth0" netns="/var/run/netns/cni-f7fd216d-3273-367e-4cc8-efef2eca7dcf" Sep 4 20:29:24.590824 containerd[1468]: 2024-09-04 20:29:24.461 [INFO][3785] dataplane_linux.go 541: Entered netns, deleting veth. 
ContainerID="28b8d38c9da2f890fe1c6b71d6e31746a3708691e1e16dd95d179008d2b85049" iface="eth0" netns="/var/run/netns/cni-f7fd216d-3273-367e-4cc8-efef2eca7dcf" Sep 4 20:29:24.590824 containerd[1468]: 2024-09-04 20:29:24.462 [INFO][3785] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="28b8d38c9da2f890fe1c6b71d6e31746a3708691e1e16dd95d179008d2b85049" iface="eth0" netns="/var/run/netns/cni-f7fd216d-3273-367e-4cc8-efef2eca7dcf" Sep 4 20:29:24.590824 containerd[1468]: 2024-09-04 20:29:24.462 [INFO][3785] k8s.go 615: Releasing IP address(es) ContainerID="28b8d38c9da2f890fe1c6b71d6e31746a3708691e1e16dd95d179008d2b85049" Sep 4 20:29:24.590824 containerd[1468]: 2024-09-04 20:29:24.462 [INFO][3785] utils.go 188: Calico CNI releasing IP address ContainerID="28b8d38c9da2f890fe1c6b71d6e31746a3708691e1e16dd95d179008d2b85049" Sep 4 20:29:24.590824 containerd[1468]: 2024-09-04 20:29:24.566 [INFO][3791] ipam_plugin.go 417: Releasing address using handleID ContainerID="28b8d38c9da2f890fe1c6b71d6e31746a3708691e1e16dd95d179008d2b85049" HandleID="k8s-pod-network.28b8d38c9da2f890fe1c6b71d6e31746a3708691e1e16dd95d179008d2b85049" Workload="ci--3975.2.1--5--b3ba9b7107-k8s-calico--kube--controllers--d85dc74c8--bfxxg-eth0" Sep 4 20:29:24.590824 containerd[1468]: 2024-09-04 20:29:24.567 [INFO][3791] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 20:29:24.590824 containerd[1468]: 2024-09-04 20:29:24.567 [INFO][3791] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 20:29:24.590824 containerd[1468]: 2024-09-04 20:29:24.579 [WARNING][3791] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="28b8d38c9da2f890fe1c6b71d6e31746a3708691e1e16dd95d179008d2b85049" HandleID="k8s-pod-network.28b8d38c9da2f890fe1c6b71d6e31746a3708691e1e16dd95d179008d2b85049" Workload="ci--3975.2.1--5--b3ba9b7107-k8s-calico--kube--controllers--d85dc74c8--bfxxg-eth0" Sep 4 20:29:24.590824 containerd[1468]: 2024-09-04 20:29:24.580 [INFO][3791] ipam_plugin.go 445: Releasing address using workloadID ContainerID="28b8d38c9da2f890fe1c6b71d6e31746a3708691e1e16dd95d179008d2b85049" HandleID="k8s-pod-network.28b8d38c9da2f890fe1c6b71d6e31746a3708691e1e16dd95d179008d2b85049" Workload="ci--3975.2.1--5--b3ba9b7107-k8s-calico--kube--controllers--d85dc74c8--bfxxg-eth0" Sep 4 20:29:24.590824 containerd[1468]: 2024-09-04 20:29:24.583 [INFO][3791] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 20:29:24.590824 containerd[1468]: 2024-09-04 20:29:24.585 [INFO][3785] k8s.go 621: Teardown processing complete. ContainerID="28b8d38c9da2f890fe1c6b71d6e31746a3708691e1e16dd95d179008d2b85049" Sep 4 20:29:24.595731 containerd[1468]: time="2024-09-04T20:29:24.591551226Z" level=info msg="TearDown network for sandbox \"28b8d38c9da2f890fe1c6b71d6e31746a3708691e1e16dd95d179008d2b85049\" successfully" Sep 4 20:29:24.595731 containerd[1468]: time="2024-09-04T20:29:24.591603183Z" level=info msg="StopPodSandbox for \"28b8d38c9da2f890fe1c6b71d6e31746a3708691e1e16dd95d179008d2b85049\" returns successfully" Sep 4 20:29:24.595731 containerd[1468]: time="2024-09-04T20:29:24.593905049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d85dc74c8-bfxxg,Uid:e4b0c9ae-e6bb-4436-8d94-867c1964daae,Namespace:calico-system,Attempt:1,}" Sep 4 20:29:24.598151 systemd[1]: run-netns-cni\x2df7fd216d\x2d3273\x2d367e\x2d4cc8\x2defef2eca7dcf.mount: Deactivated successfully. 
Sep 4 20:29:24.834578 systemd-networkd[1370]: califec74167e17: Link UP Sep 4 20:29:24.836194 systemd-networkd[1370]: califec74167e17: Gained carrier Sep 4 20:29:24.867134 containerd[1468]: 2024-09-04 20:29:24.676 [INFO][3798] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.2.1--5--b3ba9b7107-k8s-calico--kube--controllers--d85dc74c8--bfxxg-eth0 calico-kube-controllers-d85dc74c8- calico-system e4b0c9ae-e6bb-4436-8d94-867c1964daae 794 0 2024-09-04 20:29:00 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:d85dc74c8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-3975.2.1-5-b3ba9b7107 calico-kube-controllers-d85dc74c8-bfxxg eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] califec74167e17 [] []}} ContainerID="23d93c9190d4e7d3be1864d02b137d148cc40d2471c8e2080a8e4a46770f424b" Namespace="calico-system" Pod="calico-kube-controllers-d85dc74c8-bfxxg" WorkloadEndpoint="ci--3975.2.1--5--b3ba9b7107-k8s-calico--kube--controllers--d85dc74c8--bfxxg-" Sep 4 20:29:24.867134 containerd[1468]: 2024-09-04 20:29:24.676 [INFO][3798] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="23d93c9190d4e7d3be1864d02b137d148cc40d2471c8e2080a8e4a46770f424b" Namespace="calico-system" Pod="calico-kube-controllers-d85dc74c8-bfxxg" WorkloadEndpoint="ci--3975.2.1--5--b3ba9b7107-k8s-calico--kube--controllers--d85dc74c8--bfxxg-eth0" Sep 4 20:29:24.867134 containerd[1468]: 2024-09-04 20:29:24.771 [INFO][3809] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="23d93c9190d4e7d3be1864d02b137d148cc40d2471c8e2080a8e4a46770f424b" HandleID="k8s-pod-network.23d93c9190d4e7d3be1864d02b137d148cc40d2471c8e2080a8e4a46770f424b" 
Workload="ci--3975.2.1--5--b3ba9b7107-k8s-calico--kube--controllers--d85dc74c8--bfxxg-eth0" Sep 4 20:29:24.867134 containerd[1468]: 2024-09-04 20:29:24.785 [INFO][3809] ipam_plugin.go 270: Auto assigning IP ContainerID="23d93c9190d4e7d3be1864d02b137d148cc40d2471c8e2080a8e4a46770f424b" HandleID="k8s-pod-network.23d93c9190d4e7d3be1864d02b137d148cc40d2471c8e2080a8e4a46770f424b" Workload="ci--3975.2.1--5--b3ba9b7107-k8s-calico--kube--controllers--d85dc74c8--bfxxg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000265e40), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3975.2.1-5-b3ba9b7107", "pod":"calico-kube-controllers-d85dc74c8-bfxxg", "timestamp":"2024-09-04 20:29:24.771899273 +0000 UTC"}, Hostname:"ci-3975.2.1-5-b3ba9b7107", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 20:29:24.867134 containerd[1468]: 2024-09-04 20:29:24.785 [INFO][3809] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 20:29:24.867134 containerd[1468]: 2024-09-04 20:29:24.785 [INFO][3809] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 20:29:24.867134 containerd[1468]: 2024-09-04 20:29:24.785 [INFO][3809] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.2.1-5-b3ba9b7107' Sep 4 20:29:24.867134 containerd[1468]: 2024-09-04 20:29:24.788 [INFO][3809] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.23d93c9190d4e7d3be1864d02b137d148cc40d2471c8e2080a8e4a46770f424b" host="ci-3975.2.1-5-b3ba9b7107" Sep 4 20:29:24.867134 containerd[1468]: 2024-09-04 20:29:24.797 [INFO][3809] ipam.go 372: Looking up existing affinities for host host="ci-3975.2.1-5-b3ba9b7107" Sep 4 20:29:24.867134 containerd[1468]: 2024-09-04 20:29:24.804 [INFO][3809] ipam.go 489: Trying affinity for 192.168.6.0/26 host="ci-3975.2.1-5-b3ba9b7107" Sep 4 20:29:24.867134 containerd[1468]: 2024-09-04 20:29:24.807 [INFO][3809] ipam.go 155: Attempting to load block cidr=192.168.6.0/26 host="ci-3975.2.1-5-b3ba9b7107" Sep 4 20:29:24.867134 containerd[1468]: 2024-09-04 20:29:24.810 [INFO][3809] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.6.0/26 host="ci-3975.2.1-5-b3ba9b7107" Sep 4 20:29:24.867134 containerd[1468]: 2024-09-04 20:29:24.810 [INFO][3809] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.6.0/26 handle="k8s-pod-network.23d93c9190d4e7d3be1864d02b137d148cc40d2471c8e2080a8e4a46770f424b" host="ci-3975.2.1-5-b3ba9b7107" Sep 4 20:29:24.867134 containerd[1468]: 2024-09-04 20:29:24.812 [INFO][3809] ipam.go 1685: Creating new handle: k8s-pod-network.23d93c9190d4e7d3be1864d02b137d148cc40d2471c8e2080a8e4a46770f424b Sep 4 20:29:24.867134 containerd[1468]: 2024-09-04 20:29:24.818 [INFO][3809] ipam.go 1203: Writing block in order to claim IPs block=192.168.6.0/26 handle="k8s-pod-network.23d93c9190d4e7d3be1864d02b137d148cc40d2471c8e2080a8e4a46770f424b" host="ci-3975.2.1-5-b3ba9b7107" Sep 4 20:29:24.867134 containerd[1468]: 2024-09-04 20:29:24.824 [INFO][3809] ipam.go 1216: Successfully claimed IPs: [192.168.6.1/26] block=192.168.6.0/26 
handle="k8s-pod-network.23d93c9190d4e7d3be1864d02b137d148cc40d2471c8e2080a8e4a46770f424b" host="ci-3975.2.1-5-b3ba9b7107" Sep 4 20:29:24.867134 containerd[1468]: 2024-09-04 20:29:24.824 [INFO][3809] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.6.1/26] handle="k8s-pod-network.23d93c9190d4e7d3be1864d02b137d148cc40d2471c8e2080a8e4a46770f424b" host="ci-3975.2.1-5-b3ba9b7107" Sep 4 20:29:24.867134 containerd[1468]: 2024-09-04 20:29:24.825 [INFO][3809] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 20:29:24.867134 containerd[1468]: 2024-09-04 20:29:24.825 [INFO][3809] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.6.1/26] IPv6=[] ContainerID="23d93c9190d4e7d3be1864d02b137d148cc40d2471c8e2080a8e4a46770f424b" HandleID="k8s-pod-network.23d93c9190d4e7d3be1864d02b137d148cc40d2471c8e2080a8e4a46770f424b" Workload="ci--3975.2.1--5--b3ba9b7107-k8s-calico--kube--controllers--d85dc74c8--bfxxg-eth0" Sep 4 20:29:24.869038 containerd[1468]: 2024-09-04 20:29:24.829 [INFO][3798] k8s.go 386: Populated endpoint ContainerID="23d93c9190d4e7d3be1864d02b137d148cc40d2471c8e2080a8e4a46770f424b" Namespace="calico-system" Pod="calico-kube-controllers-d85dc74c8-bfxxg" WorkloadEndpoint="ci--3975.2.1--5--b3ba9b7107-k8s-calico--kube--controllers--d85dc74c8--bfxxg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--5--b3ba9b7107-k8s-calico--kube--controllers--d85dc74c8--bfxxg-eth0", GenerateName:"calico-kube-controllers-d85dc74c8-", Namespace:"calico-system", SelfLink:"", UID:"e4b0c9ae-e6bb-4436-8d94-867c1964daae", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 20, 29, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"d85dc74c8", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-5-b3ba9b7107", ContainerID:"", Pod:"calico-kube-controllers-d85dc74c8-bfxxg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.6.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califec74167e17", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 20:29:24.869038 containerd[1468]: 2024-09-04 20:29:24.829 [INFO][3798] k8s.go 387: Calico CNI using IPs: [192.168.6.1/32] ContainerID="23d93c9190d4e7d3be1864d02b137d148cc40d2471c8e2080a8e4a46770f424b" Namespace="calico-system" Pod="calico-kube-controllers-d85dc74c8-bfxxg" WorkloadEndpoint="ci--3975.2.1--5--b3ba9b7107-k8s-calico--kube--controllers--d85dc74c8--bfxxg-eth0" Sep 4 20:29:24.869038 containerd[1468]: 2024-09-04 20:29:24.829 [INFO][3798] dataplane_linux.go 68: Setting the host side veth name to califec74167e17 ContainerID="23d93c9190d4e7d3be1864d02b137d148cc40d2471c8e2080a8e4a46770f424b" Namespace="calico-system" Pod="calico-kube-controllers-d85dc74c8-bfxxg" WorkloadEndpoint="ci--3975.2.1--5--b3ba9b7107-k8s-calico--kube--controllers--d85dc74c8--bfxxg-eth0" Sep 4 20:29:24.869038 containerd[1468]: 2024-09-04 20:29:24.835 [INFO][3798] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="23d93c9190d4e7d3be1864d02b137d148cc40d2471c8e2080a8e4a46770f424b" Namespace="calico-system" Pod="calico-kube-controllers-d85dc74c8-bfxxg" WorkloadEndpoint="ci--3975.2.1--5--b3ba9b7107-k8s-calico--kube--controllers--d85dc74c8--bfxxg-eth0" Sep 4 
20:29:24.869038 containerd[1468]: 2024-09-04 20:29:24.837 [INFO][3798] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="23d93c9190d4e7d3be1864d02b137d148cc40d2471c8e2080a8e4a46770f424b" Namespace="calico-system" Pod="calico-kube-controllers-d85dc74c8-bfxxg" WorkloadEndpoint="ci--3975.2.1--5--b3ba9b7107-k8s-calico--kube--controllers--d85dc74c8--bfxxg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--5--b3ba9b7107-k8s-calico--kube--controllers--d85dc74c8--bfxxg-eth0", GenerateName:"calico-kube-controllers-d85dc74c8-", Namespace:"calico-system", SelfLink:"", UID:"e4b0c9ae-e6bb-4436-8d94-867c1964daae", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 20, 29, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"d85dc74c8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-5-b3ba9b7107", ContainerID:"23d93c9190d4e7d3be1864d02b137d148cc40d2471c8e2080a8e4a46770f424b", Pod:"calico-kube-controllers-d85dc74c8-bfxxg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.6.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califec74167e17", MAC:"22:d7:15:c6:a4:c9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 
20:29:24.869038 containerd[1468]: 2024-09-04 20:29:24.852 [INFO][3798] k8s.go 500: Wrote updated endpoint to datastore ContainerID="23d93c9190d4e7d3be1864d02b137d148cc40d2471c8e2080a8e4a46770f424b" Namespace="calico-system" Pod="calico-kube-controllers-d85dc74c8-bfxxg" WorkloadEndpoint="ci--3975.2.1--5--b3ba9b7107-k8s-calico--kube--controllers--d85dc74c8--bfxxg-eth0" Sep 4 20:29:24.927593 containerd[1468]: time="2024-09-04T20:29:24.922229347Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 20:29:24.927593 containerd[1468]: time="2024-09-04T20:29:24.927350513Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 20:29:24.927593 containerd[1468]: time="2024-09-04T20:29:24.927416718Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 20:29:24.927593 containerd[1468]: time="2024-09-04T20:29:24.927440025Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 20:29:24.962448 systemd[1]: Started cri-containerd-23d93c9190d4e7d3be1864d02b137d148cc40d2471c8e2080a8e4a46770f424b.scope - libcontainer container 23d93c9190d4e7d3be1864d02b137d148cc40d2471c8e2080a8e4a46770f424b. 
Sep 4 20:29:25.020621 containerd[1468]: time="2024-09-04T20:29:25.020554613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d85dc74c8-bfxxg,Uid:e4b0c9ae-e6bb-4436-8d94-867c1964daae,Namespace:calico-system,Attempt:1,} returns sandbox id \"23d93c9190d4e7d3be1864d02b137d148cc40d2471c8e2080a8e4a46770f424b\"" Sep 4 20:29:25.024903 containerd[1468]: time="2024-09-04T20:29:25.024773187Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\"" Sep 4 20:29:25.360294 containerd[1468]: time="2024-09-04T20:29:25.359753442Z" level=info msg="StopPodSandbox for \"a94e4351a8bd5f534df25a54132f3fef85187c884ed48a216fb6a0c4009e2a6f\"" Sep 4 20:29:25.480781 containerd[1468]: 2024-09-04 20:29:25.429 [INFO][3886] k8s.go 608: Cleaning up netns ContainerID="a94e4351a8bd5f534df25a54132f3fef85187c884ed48a216fb6a0c4009e2a6f" Sep 4 20:29:25.480781 containerd[1468]: 2024-09-04 20:29:25.432 [INFO][3886] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="a94e4351a8bd5f534df25a54132f3fef85187c884ed48a216fb6a0c4009e2a6f" iface="eth0" netns="/var/run/netns/cni-0f19be64-da9b-b582-43d8-65ba00fee82b" Sep 4 20:29:25.480781 containerd[1468]: 2024-09-04 20:29:25.432 [INFO][3886] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="a94e4351a8bd5f534df25a54132f3fef85187c884ed48a216fb6a0c4009e2a6f" iface="eth0" netns="/var/run/netns/cni-0f19be64-da9b-b582-43d8-65ba00fee82b" Sep 4 20:29:25.480781 containerd[1468]: 2024-09-04 20:29:25.433 [INFO][3886] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="a94e4351a8bd5f534df25a54132f3fef85187c884ed48a216fb6a0c4009e2a6f" iface="eth0" netns="/var/run/netns/cni-0f19be64-da9b-b582-43d8-65ba00fee82b" Sep 4 20:29:25.480781 containerd[1468]: 2024-09-04 20:29:25.433 [INFO][3886] k8s.go 615: Releasing IP address(es) ContainerID="a94e4351a8bd5f534df25a54132f3fef85187c884ed48a216fb6a0c4009e2a6f" Sep 4 20:29:25.480781 containerd[1468]: 2024-09-04 20:29:25.433 [INFO][3886] utils.go 188: Calico CNI releasing IP address ContainerID="a94e4351a8bd5f534df25a54132f3fef85187c884ed48a216fb6a0c4009e2a6f" Sep 4 20:29:25.480781 containerd[1468]: 2024-09-04 20:29:25.459 [INFO][3892] ipam_plugin.go 417: Releasing address using handleID ContainerID="a94e4351a8bd5f534df25a54132f3fef85187c884ed48a216fb6a0c4009e2a6f" HandleID="k8s-pod-network.a94e4351a8bd5f534df25a54132f3fef85187c884ed48a216fb6a0c4009e2a6f" Workload="ci--3975.2.1--5--b3ba9b7107-k8s-csi--node--driver--wkgcv-eth0" Sep 4 20:29:25.480781 containerd[1468]: 2024-09-04 20:29:25.460 [INFO][3892] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 20:29:25.480781 containerd[1468]: 2024-09-04 20:29:25.460 [INFO][3892] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 20:29:25.480781 containerd[1468]: 2024-09-04 20:29:25.472 [WARNING][3892] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a94e4351a8bd5f534df25a54132f3fef85187c884ed48a216fb6a0c4009e2a6f" HandleID="k8s-pod-network.a94e4351a8bd5f534df25a54132f3fef85187c884ed48a216fb6a0c4009e2a6f" Workload="ci--3975.2.1--5--b3ba9b7107-k8s-csi--node--driver--wkgcv-eth0" Sep 4 20:29:25.480781 containerd[1468]: 2024-09-04 20:29:25.472 [INFO][3892] ipam_plugin.go 445: Releasing address using workloadID ContainerID="a94e4351a8bd5f534df25a54132f3fef85187c884ed48a216fb6a0c4009e2a6f" HandleID="k8s-pod-network.a94e4351a8bd5f534df25a54132f3fef85187c884ed48a216fb6a0c4009e2a6f" Workload="ci--3975.2.1--5--b3ba9b7107-k8s-csi--node--driver--wkgcv-eth0" Sep 4 20:29:25.480781 containerd[1468]: 2024-09-04 20:29:25.475 [INFO][3892] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 20:29:25.480781 containerd[1468]: 2024-09-04 20:29:25.478 [INFO][3886] k8s.go 621: Teardown processing complete. ContainerID="a94e4351a8bd5f534df25a54132f3fef85187c884ed48a216fb6a0c4009e2a6f" Sep 4 20:29:25.481787 containerd[1468]: time="2024-09-04T20:29:25.481609969Z" level=info msg="TearDown network for sandbox \"a94e4351a8bd5f534df25a54132f3fef85187c884ed48a216fb6a0c4009e2a6f\" successfully" Sep 4 20:29:25.481787 containerd[1468]: time="2024-09-04T20:29:25.481656486Z" level=info msg="StopPodSandbox for \"a94e4351a8bd5f534df25a54132f3fef85187c884ed48a216fb6a0c4009e2a6f\" returns successfully" Sep 4 20:29:25.483235 containerd[1468]: time="2024-09-04T20:29:25.482774376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wkgcv,Uid:19278f8b-d3ea-467e-a88b-64888b0edecc,Namespace:calico-system,Attempt:1,}" Sep 4 20:29:25.594845 systemd[1]: run-netns-cni\x2d0f19be64\x2dda9b\x2db582\x2d43d8\x2d65ba00fee82b.mount: Deactivated successfully. 
Sep 4 20:29:25.666018 systemd-networkd[1370]: calif8f6a08191f: Link UP Sep 4 20:29:25.667875 systemd-networkd[1370]: calif8f6a08191f: Gained carrier Sep 4 20:29:25.694005 containerd[1468]: 2024-09-04 20:29:25.540 [INFO][3899] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.2.1--5--b3ba9b7107-k8s-csi--node--driver--wkgcv-eth0 csi-node-driver- calico-system 19278f8b-d3ea-467e-a88b-64888b0edecc 806 0 2024-09-04 20:29:00 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78cd84fb8c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ci-3975.2.1-5-b3ba9b7107 csi-node-driver-wkgcv eth0 default [] [] [kns.calico-system ksa.calico-system.default] calif8f6a08191f [] []}} ContainerID="9dc32a33ce8f9bfe77e542d6efef5d1bb3712e061549c74093edcd3560aaf267" Namespace="calico-system" Pod="csi-node-driver-wkgcv" WorkloadEndpoint="ci--3975.2.1--5--b3ba9b7107-k8s-csi--node--driver--wkgcv-" Sep 4 20:29:25.694005 containerd[1468]: 2024-09-04 20:29:25.541 [INFO][3899] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9dc32a33ce8f9bfe77e542d6efef5d1bb3712e061549c74093edcd3560aaf267" Namespace="calico-system" Pod="csi-node-driver-wkgcv" WorkloadEndpoint="ci--3975.2.1--5--b3ba9b7107-k8s-csi--node--driver--wkgcv-eth0" Sep 4 20:29:25.694005 containerd[1468]: 2024-09-04 20:29:25.605 [INFO][3910] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9dc32a33ce8f9bfe77e542d6efef5d1bb3712e061549c74093edcd3560aaf267" HandleID="k8s-pod-network.9dc32a33ce8f9bfe77e542d6efef5d1bb3712e061549c74093edcd3560aaf267" Workload="ci--3975.2.1--5--b3ba9b7107-k8s-csi--node--driver--wkgcv-eth0" Sep 4 20:29:25.694005 containerd[1468]: 2024-09-04 20:29:25.617 [INFO][3910] ipam_plugin.go 270: Auto assigning IP 
ContainerID="9dc32a33ce8f9bfe77e542d6efef5d1bb3712e061549c74093edcd3560aaf267" HandleID="k8s-pod-network.9dc32a33ce8f9bfe77e542d6efef5d1bb3712e061549c74093edcd3560aaf267" Workload="ci--3975.2.1--5--b3ba9b7107-k8s-csi--node--driver--wkgcv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002deab0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3975.2.1-5-b3ba9b7107", "pod":"csi-node-driver-wkgcv", "timestamp":"2024-09-04 20:29:25.605006936 +0000 UTC"}, Hostname:"ci-3975.2.1-5-b3ba9b7107", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 20:29:25.694005 containerd[1468]: 2024-09-04 20:29:25.617 [INFO][3910] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 20:29:25.694005 containerd[1468]: 2024-09-04 20:29:25.617 [INFO][3910] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 20:29:25.694005 containerd[1468]: 2024-09-04 20:29:25.617 [INFO][3910] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.2.1-5-b3ba9b7107' Sep 4 20:29:25.694005 containerd[1468]: 2024-09-04 20:29:25.621 [INFO][3910] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9dc32a33ce8f9bfe77e542d6efef5d1bb3712e061549c74093edcd3560aaf267" host="ci-3975.2.1-5-b3ba9b7107" Sep 4 20:29:25.694005 containerd[1468]: 2024-09-04 20:29:25.631 [INFO][3910] ipam.go 372: Looking up existing affinities for host host="ci-3975.2.1-5-b3ba9b7107" Sep 4 20:29:25.694005 containerd[1468]: 2024-09-04 20:29:25.639 [INFO][3910] ipam.go 489: Trying affinity for 192.168.6.0/26 host="ci-3975.2.1-5-b3ba9b7107" Sep 4 20:29:25.694005 containerd[1468]: 2024-09-04 20:29:25.643 [INFO][3910] ipam.go 155: Attempting to load block cidr=192.168.6.0/26 host="ci-3975.2.1-5-b3ba9b7107" Sep 4 20:29:25.694005 containerd[1468]: 2024-09-04 20:29:25.646 [INFO][3910] ipam.go 232: Affinity is 
confirmed and block has been loaded cidr=192.168.6.0/26 host="ci-3975.2.1-5-b3ba9b7107" Sep 4 20:29:25.694005 containerd[1468]: 2024-09-04 20:29:25.646 [INFO][3910] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.6.0/26 handle="k8s-pod-network.9dc32a33ce8f9bfe77e542d6efef5d1bb3712e061549c74093edcd3560aaf267" host="ci-3975.2.1-5-b3ba9b7107" Sep 4 20:29:25.694005 containerd[1468]: 2024-09-04 20:29:25.648 [INFO][3910] ipam.go 1685: Creating new handle: k8s-pod-network.9dc32a33ce8f9bfe77e542d6efef5d1bb3712e061549c74093edcd3560aaf267 Sep 4 20:29:25.694005 containerd[1468]: 2024-09-04 20:29:25.653 [INFO][3910] ipam.go 1203: Writing block in order to claim IPs block=192.168.6.0/26 handle="k8s-pod-network.9dc32a33ce8f9bfe77e542d6efef5d1bb3712e061549c74093edcd3560aaf267" host="ci-3975.2.1-5-b3ba9b7107" Sep 4 20:29:25.694005 containerd[1468]: 2024-09-04 20:29:25.660 [INFO][3910] ipam.go 1216: Successfully claimed IPs: [192.168.6.2/26] block=192.168.6.0/26 handle="k8s-pod-network.9dc32a33ce8f9bfe77e542d6efef5d1bb3712e061549c74093edcd3560aaf267" host="ci-3975.2.1-5-b3ba9b7107" Sep 4 20:29:25.694005 containerd[1468]: 2024-09-04 20:29:25.660 [INFO][3910] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.6.2/26] handle="k8s-pod-network.9dc32a33ce8f9bfe77e542d6efef5d1bb3712e061549c74093edcd3560aaf267" host="ci-3975.2.1-5-b3ba9b7107" Sep 4 20:29:25.694005 containerd[1468]: 2024-09-04 20:29:25.660 [INFO][3910] ipam_plugin.go 379: Released host-wide IPAM lock. 
Sep 4 20:29:25.694005 containerd[1468]: 2024-09-04 20:29:25.660 [INFO][3910] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.6.2/26] IPv6=[] ContainerID="9dc32a33ce8f9bfe77e542d6efef5d1bb3712e061549c74093edcd3560aaf267" HandleID="k8s-pod-network.9dc32a33ce8f9bfe77e542d6efef5d1bb3712e061549c74093edcd3560aaf267" Workload="ci--3975.2.1--5--b3ba9b7107-k8s-csi--node--driver--wkgcv-eth0" Sep 4 20:29:25.694752 containerd[1468]: 2024-09-04 20:29:25.663 [INFO][3899] k8s.go 386: Populated endpoint ContainerID="9dc32a33ce8f9bfe77e542d6efef5d1bb3712e061549c74093edcd3560aaf267" Namespace="calico-system" Pod="csi-node-driver-wkgcv" WorkloadEndpoint="ci--3975.2.1--5--b3ba9b7107-k8s-csi--node--driver--wkgcv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--5--b3ba9b7107-k8s-csi--node--driver--wkgcv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"19278f8b-d3ea-467e-a88b-64888b0edecc", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 20, 29, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-5-b3ba9b7107", ContainerID:"", Pod:"csi-node-driver-wkgcv", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.6.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calif8f6a08191f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 20:29:25.694752 containerd[1468]: 2024-09-04 20:29:25.663 [INFO][3899] k8s.go 387: Calico CNI using IPs: [192.168.6.2/32] ContainerID="9dc32a33ce8f9bfe77e542d6efef5d1bb3712e061549c74093edcd3560aaf267" Namespace="calico-system" Pod="csi-node-driver-wkgcv" WorkloadEndpoint="ci--3975.2.1--5--b3ba9b7107-k8s-csi--node--driver--wkgcv-eth0" Sep 4 20:29:25.694752 containerd[1468]: 2024-09-04 20:29:25.663 [INFO][3899] dataplane_linux.go 68: Setting the host side veth name to calif8f6a08191f ContainerID="9dc32a33ce8f9bfe77e542d6efef5d1bb3712e061549c74093edcd3560aaf267" Namespace="calico-system" Pod="csi-node-driver-wkgcv" WorkloadEndpoint="ci--3975.2.1--5--b3ba9b7107-k8s-csi--node--driver--wkgcv-eth0" Sep 4 20:29:25.694752 containerd[1468]: 2024-09-04 20:29:25.667 [INFO][3899] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="9dc32a33ce8f9bfe77e542d6efef5d1bb3712e061549c74093edcd3560aaf267" Namespace="calico-system" Pod="csi-node-driver-wkgcv" WorkloadEndpoint="ci--3975.2.1--5--b3ba9b7107-k8s-csi--node--driver--wkgcv-eth0" Sep 4 20:29:25.694752 containerd[1468]: 2024-09-04 20:29:25.669 [INFO][3899] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9dc32a33ce8f9bfe77e542d6efef5d1bb3712e061549c74093edcd3560aaf267" Namespace="calico-system" Pod="csi-node-driver-wkgcv" WorkloadEndpoint="ci--3975.2.1--5--b3ba9b7107-k8s-csi--node--driver--wkgcv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--5--b3ba9b7107-k8s-csi--node--driver--wkgcv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"19278f8b-d3ea-467e-a88b-64888b0edecc", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2024, 
time.September, 4, 20, 29, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-5-b3ba9b7107", ContainerID:"9dc32a33ce8f9bfe77e542d6efef5d1bb3712e061549c74093edcd3560aaf267", Pod:"csi-node-driver-wkgcv", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.6.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calif8f6a08191f", MAC:"5a:31:d3:8b:73:0c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 20:29:25.694752 containerd[1468]: 2024-09-04 20:29:25.687 [INFO][3899] k8s.go 500: Wrote updated endpoint to datastore ContainerID="9dc32a33ce8f9bfe77e542d6efef5d1bb3712e061549c74093edcd3560aaf267" Namespace="calico-system" Pod="csi-node-driver-wkgcv" WorkloadEndpoint="ci--3975.2.1--5--b3ba9b7107-k8s-csi--node--driver--wkgcv-eth0" Sep 4 20:29:25.738412 containerd[1468]: time="2024-09-04T20:29:25.737458973Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 20:29:25.738412 containerd[1468]: time="2024-09-04T20:29:25.737521193Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 20:29:25.738412 containerd[1468]: time="2024-09-04T20:29:25.737536834Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 20:29:25.738412 containerd[1468]: time="2024-09-04T20:29:25.737546666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 20:29:25.773335 systemd[1]: Started cri-containerd-9dc32a33ce8f9bfe77e542d6efef5d1bb3712e061549c74093edcd3560aaf267.scope - libcontainer container 9dc32a33ce8f9bfe77e542d6efef5d1bb3712e061549c74093edcd3560aaf267. Sep 4 20:29:25.806562 containerd[1468]: time="2024-09-04T20:29:25.806500305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wkgcv,Uid:19278f8b-d3ea-467e-a88b-64888b0edecc,Namespace:calico-system,Attempt:1,} returns sandbox id \"9dc32a33ce8f9bfe77e542d6efef5d1bb3712e061549c74093edcd3560aaf267\"" Sep 4 20:29:26.220383 systemd-networkd[1370]: califec74167e17: Gained IPv6LL Sep 4 20:29:27.364259 containerd[1468]: time="2024-09-04T20:29:27.362695056Z" level=info msg="StopPodSandbox for \"c20c5b63e11f42c5f851e1860eff3f0d0e4e32aa5c6cd2761dd97b96c0be2b58\"" Sep 4 20:29:27.374898 containerd[1468]: time="2024-09-04T20:29:27.374285071Z" level=info msg="StopPodSandbox for \"325efa9ce30fd3869d142cc34879fb6ef9d38b4f2e106555f839fca57f5b4a31\"" Sep 4 20:29:27.568031 systemd[1]: Started sshd@10-143.198.146.52:22-139.178.68.195:35996.service - OpenSSH per-connection server daemon (139.178.68.195:35996). 
Sep 4 20:29:27.629307 systemd-networkd[1370]: calif8f6a08191f: Gained IPv6LL Sep 4 20:29:27.651147 containerd[1468]: 2024-09-04 20:29:27.532 [INFO][4009] k8s.go 608: Cleaning up netns ContainerID="325efa9ce30fd3869d142cc34879fb6ef9d38b4f2e106555f839fca57f5b4a31" Sep 4 20:29:27.651147 containerd[1468]: 2024-09-04 20:29:27.533 [INFO][4009] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="325efa9ce30fd3869d142cc34879fb6ef9d38b4f2e106555f839fca57f5b4a31" iface="eth0" netns="/var/run/netns/cni-2b9200c1-7652-585e-9245-3e72617f0c89" Sep 4 20:29:27.651147 containerd[1468]: 2024-09-04 20:29:27.534 [INFO][4009] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="325efa9ce30fd3869d142cc34879fb6ef9d38b4f2e106555f839fca57f5b4a31" iface="eth0" netns="/var/run/netns/cni-2b9200c1-7652-585e-9245-3e72617f0c89" Sep 4 20:29:27.651147 containerd[1468]: 2024-09-04 20:29:27.536 [INFO][4009] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="325efa9ce30fd3869d142cc34879fb6ef9d38b4f2e106555f839fca57f5b4a31" iface="eth0" netns="/var/run/netns/cni-2b9200c1-7652-585e-9245-3e72617f0c89" Sep 4 20:29:27.651147 containerd[1468]: 2024-09-04 20:29:27.536 [INFO][4009] k8s.go 615: Releasing IP address(es) ContainerID="325efa9ce30fd3869d142cc34879fb6ef9d38b4f2e106555f839fca57f5b4a31" Sep 4 20:29:27.651147 containerd[1468]: 2024-09-04 20:29:27.536 [INFO][4009] utils.go 188: Calico CNI releasing IP address ContainerID="325efa9ce30fd3869d142cc34879fb6ef9d38b4f2e106555f839fca57f5b4a31" Sep 4 20:29:27.651147 containerd[1468]: 2024-09-04 20:29:27.612 [INFO][4023] ipam_plugin.go 417: Releasing address using handleID ContainerID="325efa9ce30fd3869d142cc34879fb6ef9d38b4f2e106555f839fca57f5b4a31" HandleID="k8s-pod-network.325efa9ce30fd3869d142cc34879fb6ef9d38b4f2e106555f839fca57f5b4a31" Workload="ci--3975.2.1--5--b3ba9b7107-k8s-coredns--5dd5756b68--966rt-eth0" Sep 4 20:29:27.651147 containerd[1468]: 2024-09-04 20:29:27.612 [INFO][4023] 
ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 20:29:27.651147 containerd[1468]: 2024-09-04 20:29:27.612 [INFO][4023] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 20:29:27.651147 containerd[1468]: 2024-09-04 20:29:27.622 [WARNING][4023] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="325efa9ce30fd3869d142cc34879fb6ef9d38b4f2e106555f839fca57f5b4a31" HandleID="k8s-pod-network.325efa9ce30fd3869d142cc34879fb6ef9d38b4f2e106555f839fca57f5b4a31" Workload="ci--3975.2.1--5--b3ba9b7107-k8s-coredns--5dd5756b68--966rt-eth0" Sep 4 20:29:27.651147 containerd[1468]: 2024-09-04 20:29:27.623 [INFO][4023] ipam_plugin.go 445: Releasing address using workloadID ContainerID="325efa9ce30fd3869d142cc34879fb6ef9d38b4f2e106555f839fca57f5b4a31" HandleID="k8s-pod-network.325efa9ce30fd3869d142cc34879fb6ef9d38b4f2e106555f839fca57f5b4a31" Workload="ci--3975.2.1--5--b3ba9b7107-k8s-coredns--5dd5756b68--966rt-eth0" Sep 4 20:29:27.651147 containerd[1468]: 2024-09-04 20:29:27.634 [INFO][4023] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 20:29:27.651147 containerd[1468]: 2024-09-04 20:29:27.645 [INFO][4009] k8s.go 621: Teardown processing complete. 
ContainerID="325efa9ce30fd3869d142cc34879fb6ef9d38b4f2e106555f839fca57f5b4a31" Sep 4 20:29:27.651147 containerd[1468]: time="2024-09-04T20:29:27.650958181Z" level=info msg="TearDown network for sandbox \"325efa9ce30fd3869d142cc34879fb6ef9d38b4f2e106555f839fca57f5b4a31\" successfully" Sep 4 20:29:27.651147 containerd[1468]: time="2024-09-04T20:29:27.650988841Z" level=info msg="StopPodSandbox for \"325efa9ce30fd3869d142cc34879fb6ef9d38b4f2e106555f839fca57f5b4a31\" returns successfully" Sep 4 20:29:27.656092 kubelet[2523]: E0904 20:29:27.654682 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 4 20:29:27.661738 containerd[1468]: time="2024-09-04T20:29:27.661679558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-966rt,Uid:f15b71f8-033b-4890-a1b4-2a97c47ca461,Namespace:kube-system,Attempt:1,}" Sep 4 20:29:27.665363 systemd[1]: run-netns-cni\x2d2b9200c1\x2d7652\x2d585e\x2d9245\x2d3e72617f0c89.mount: Deactivated successfully. Sep 4 20:29:27.695543 containerd[1468]: 2024-09-04 20:29:27.508 [INFO][4001] k8s.go 608: Cleaning up netns ContainerID="c20c5b63e11f42c5f851e1860eff3f0d0e4e32aa5c6cd2761dd97b96c0be2b58" Sep 4 20:29:27.695543 containerd[1468]: 2024-09-04 20:29:27.509 [INFO][4001] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="c20c5b63e11f42c5f851e1860eff3f0d0e4e32aa5c6cd2761dd97b96c0be2b58" iface="eth0" netns="/var/run/netns/cni-aadd9bca-31ce-cf56-99a4-a93d6c8fc3fc" Sep 4 20:29:27.695543 containerd[1468]: 2024-09-04 20:29:27.515 [INFO][4001] dataplane_linux.go 541: Entered netns, deleting veth. 
ContainerID="c20c5b63e11f42c5f851e1860eff3f0d0e4e32aa5c6cd2761dd97b96c0be2b58" iface="eth0" netns="/var/run/netns/cni-aadd9bca-31ce-cf56-99a4-a93d6c8fc3fc" Sep 4 20:29:27.695543 containerd[1468]: 2024-09-04 20:29:27.516 [INFO][4001] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="c20c5b63e11f42c5f851e1860eff3f0d0e4e32aa5c6cd2761dd97b96c0be2b58" iface="eth0" netns="/var/run/netns/cni-aadd9bca-31ce-cf56-99a4-a93d6c8fc3fc" Sep 4 20:29:27.695543 containerd[1468]: 2024-09-04 20:29:27.516 [INFO][4001] k8s.go 615: Releasing IP address(es) ContainerID="c20c5b63e11f42c5f851e1860eff3f0d0e4e32aa5c6cd2761dd97b96c0be2b58" Sep 4 20:29:27.695543 containerd[1468]: 2024-09-04 20:29:27.516 [INFO][4001] utils.go 188: Calico CNI releasing IP address ContainerID="c20c5b63e11f42c5f851e1860eff3f0d0e4e32aa5c6cd2761dd97b96c0be2b58" Sep 4 20:29:27.695543 containerd[1468]: 2024-09-04 20:29:27.630 [INFO][4018] ipam_plugin.go 417: Releasing address using handleID ContainerID="c20c5b63e11f42c5f851e1860eff3f0d0e4e32aa5c6cd2761dd97b96c0be2b58" HandleID="k8s-pod-network.c20c5b63e11f42c5f851e1860eff3f0d0e4e32aa5c6cd2761dd97b96c0be2b58" Workload="ci--3975.2.1--5--b3ba9b7107-k8s-coredns--5dd5756b68--fk4tb-eth0" Sep 4 20:29:27.695543 containerd[1468]: 2024-09-04 20:29:27.631 [INFO][4018] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 20:29:27.695543 containerd[1468]: 2024-09-04 20:29:27.640 [INFO][4018] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 20:29:27.695543 containerd[1468]: 2024-09-04 20:29:27.671 [WARNING][4018] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c20c5b63e11f42c5f851e1860eff3f0d0e4e32aa5c6cd2761dd97b96c0be2b58" HandleID="k8s-pod-network.c20c5b63e11f42c5f851e1860eff3f0d0e4e32aa5c6cd2761dd97b96c0be2b58" Workload="ci--3975.2.1--5--b3ba9b7107-k8s-coredns--5dd5756b68--fk4tb-eth0" Sep 4 20:29:27.695543 containerd[1468]: 2024-09-04 20:29:27.672 [INFO][4018] ipam_plugin.go 445: Releasing address using workloadID ContainerID="c20c5b63e11f42c5f851e1860eff3f0d0e4e32aa5c6cd2761dd97b96c0be2b58" HandleID="k8s-pod-network.c20c5b63e11f42c5f851e1860eff3f0d0e4e32aa5c6cd2761dd97b96c0be2b58" Workload="ci--3975.2.1--5--b3ba9b7107-k8s-coredns--5dd5756b68--fk4tb-eth0" Sep 4 20:29:27.695543 containerd[1468]: 2024-09-04 20:29:27.678 [INFO][4018] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 20:29:27.695543 containerd[1468]: 2024-09-04 20:29:27.682 [INFO][4001] k8s.go 621: Teardown processing complete. ContainerID="c20c5b63e11f42c5f851e1860eff3f0d0e4e32aa5c6cd2761dd97b96c0be2b58" Sep 4 20:29:27.697773 containerd[1468]: time="2024-09-04T20:29:27.697429838Z" level=info msg="TearDown network for sandbox \"c20c5b63e11f42c5f851e1860eff3f0d0e4e32aa5c6cd2761dd97b96c0be2b58\" successfully" Sep 4 20:29:27.697773 containerd[1468]: time="2024-09-04T20:29:27.697466911Z" level=info msg="StopPodSandbox for \"c20c5b63e11f42c5f851e1860eff3f0d0e4e32aa5c6cd2761dd97b96c0be2b58\" returns successfully" Sep 4 20:29:27.705344 kubelet[2523]: E0904 20:29:27.704747 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 4 20:29:27.720923 systemd[1]: run-netns-cni\x2daadd9bca\x2d31ce\x2dcf56\x2d99a4\x2da93d6c8fc3fc.mount: Deactivated successfully. 
Sep 4 20:29:27.721639 containerd[1468]: time="2024-09-04T20:29:27.721588016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-fk4tb,Uid:f347db77-68c1-4005-a444-424ceab37966,Namespace:kube-system,Attempt:1,}" Sep 4 20:29:27.763737 sshd[4029]: Accepted publickey for core from 139.178.68.195 port 35996 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY Sep 4 20:29:27.769133 sshd[4029]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 20:29:27.784678 systemd-logind[1447]: New session 11 of user core. Sep 4 20:29:27.791401 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 4 20:29:28.184235 systemd-networkd[1370]: cali496d6bea1b3: Link UP Sep 4 20:29:28.184604 systemd-networkd[1370]: cali496d6bea1b3: Gained carrier Sep 4 20:29:28.248874 containerd[1468]: 2024-09-04 20:29:27.942 [INFO][4047] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.2.1--5--b3ba9b7107-k8s-coredns--5dd5756b68--fk4tb-eth0 coredns-5dd5756b68- kube-system f347db77-68c1-4005-a444-424ceab37966 826 0 2024-09-04 20:28:53 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3975.2.1-5-b3ba9b7107 coredns-5dd5756b68-fk4tb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali496d6bea1b3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="c7b606b1cb58234bd709f60b39a0743f169667da1e0bde1b7da4ac77debeee45" Namespace="kube-system" Pod="coredns-5dd5756b68-fk4tb" WorkloadEndpoint="ci--3975.2.1--5--b3ba9b7107-k8s-coredns--5dd5756b68--fk4tb-" Sep 4 20:29:28.248874 containerd[1468]: 2024-09-04 20:29:27.942 [INFO][4047] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c7b606b1cb58234bd709f60b39a0743f169667da1e0bde1b7da4ac77debeee45" Namespace="kube-system" Pod="coredns-5dd5756b68-fk4tb" 
WorkloadEndpoint="ci--3975.2.1--5--b3ba9b7107-k8s-coredns--5dd5756b68--fk4tb-eth0" Sep 4 20:29:28.248874 containerd[1468]: 2024-09-04 20:29:28.032 [INFO][4069] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c7b606b1cb58234bd709f60b39a0743f169667da1e0bde1b7da4ac77debeee45" HandleID="k8s-pod-network.c7b606b1cb58234bd709f60b39a0743f169667da1e0bde1b7da4ac77debeee45" Workload="ci--3975.2.1--5--b3ba9b7107-k8s-coredns--5dd5756b68--fk4tb-eth0" Sep 4 20:29:28.248874 containerd[1468]: 2024-09-04 20:29:28.056 [INFO][4069] ipam_plugin.go 270: Auto assigning IP ContainerID="c7b606b1cb58234bd709f60b39a0743f169667da1e0bde1b7da4ac77debeee45" HandleID="k8s-pod-network.c7b606b1cb58234bd709f60b39a0743f169667da1e0bde1b7da4ac77debeee45" Workload="ci--3975.2.1--5--b3ba9b7107-k8s-coredns--5dd5756b68--fk4tb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318210), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3975.2.1-5-b3ba9b7107", "pod":"coredns-5dd5756b68-fk4tb", "timestamp":"2024-09-04 20:29:28.032972577 +0000 UTC"}, Hostname:"ci-3975.2.1-5-b3ba9b7107", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 20:29:28.248874 containerd[1468]: 2024-09-04 20:29:28.058 [INFO][4069] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 20:29:28.248874 containerd[1468]: 2024-09-04 20:29:28.058 [INFO][4069] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 20:29:28.248874 containerd[1468]: 2024-09-04 20:29:28.060 [INFO][4069] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.2.1-5-b3ba9b7107' Sep 4 20:29:28.248874 containerd[1468]: 2024-09-04 20:29:28.068 [INFO][4069] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c7b606b1cb58234bd709f60b39a0743f169667da1e0bde1b7da4ac77debeee45" host="ci-3975.2.1-5-b3ba9b7107" Sep 4 20:29:28.248874 containerd[1468]: 2024-09-04 20:29:28.081 [INFO][4069] ipam.go 372: Looking up existing affinities for host host="ci-3975.2.1-5-b3ba9b7107" Sep 4 20:29:28.248874 containerd[1468]: 2024-09-04 20:29:28.102 [INFO][4069] ipam.go 489: Trying affinity for 192.168.6.0/26 host="ci-3975.2.1-5-b3ba9b7107" Sep 4 20:29:28.248874 containerd[1468]: 2024-09-04 20:29:28.110 [INFO][4069] ipam.go 155: Attempting to load block cidr=192.168.6.0/26 host="ci-3975.2.1-5-b3ba9b7107" Sep 4 20:29:28.248874 containerd[1468]: 2024-09-04 20:29:28.120 [INFO][4069] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.6.0/26 host="ci-3975.2.1-5-b3ba9b7107" Sep 4 20:29:28.248874 containerd[1468]: 2024-09-04 20:29:28.120 [INFO][4069] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.6.0/26 handle="k8s-pod-network.c7b606b1cb58234bd709f60b39a0743f169667da1e0bde1b7da4ac77debeee45" host="ci-3975.2.1-5-b3ba9b7107" Sep 4 20:29:28.248874 containerd[1468]: 2024-09-04 20:29:28.123 [INFO][4069] ipam.go 1685: Creating new handle: k8s-pod-network.c7b606b1cb58234bd709f60b39a0743f169667da1e0bde1b7da4ac77debeee45 Sep 4 20:29:28.248874 containerd[1468]: 2024-09-04 20:29:28.143 [INFO][4069] ipam.go 1203: Writing block in order to claim IPs block=192.168.6.0/26 handle="k8s-pod-network.c7b606b1cb58234bd709f60b39a0743f169667da1e0bde1b7da4ac77debeee45" host="ci-3975.2.1-5-b3ba9b7107" Sep 4 20:29:28.248874 containerd[1468]: 2024-09-04 20:29:28.159 [INFO][4069] ipam.go 1216: Successfully claimed IPs: [192.168.6.3/26] block=192.168.6.0/26 
handle="k8s-pod-network.c7b606b1cb58234bd709f60b39a0743f169667da1e0bde1b7da4ac77debeee45" host="ci-3975.2.1-5-b3ba9b7107" Sep 4 20:29:28.248874 containerd[1468]: 2024-09-04 20:29:28.159 [INFO][4069] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.6.3/26] handle="k8s-pod-network.c7b606b1cb58234bd709f60b39a0743f169667da1e0bde1b7da4ac77debeee45" host="ci-3975.2.1-5-b3ba9b7107" Sep 4 20:29:28.248874 containerd[1468]: 2024-09-04 20:29:28.161 [INFO][4069] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 20:29:28.248874 containerd[1468]: 2024-09-04 20:29:28.161 [INFO][4069] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.6.3/26] IPv6=[] ContainerID="c7b606b1cb58234bd709f60b39a0743f169667da1e0bde1b7da4ac77debeee45" HandleID="k8s-pod-network.c7b606b1cb58234bd709f60b39a0743f169667da1e0bde1b7da4ac77debeee45" Workload="ci--3975.2.1--5--b3ba9b7107-k8s-coredns--5dd5756b68--fk4tb-eth0" Sep 4 20:29:28.252491 containerd[1468]: 2024-09-04 20:29:28.177 [INFO][4047] k8s.go 386: Populated endpoint ContainerID="c7b606b1cb58234bd709f60b39a0743f169667da1e0bde1b7da4ac77debeee45" Namespace="kube-system" Pod="coredns-5dd5756b68-fk4tb" WorkloadEndpoint="ci--3975.2.1--5--b3ba9b7107-k8s-coredns--5dd5756b68--fk4tb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--5--b3ba9b7107-k8s-coredns--5dd5756b68--fk4tb-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"f347db77-68c1-4005-a444-424ceab37966", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 20, 28, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-5-b3ba9b7107", ContainerID:"", Pod:"coredns-5dd5756b68-fk4tb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.6.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali496d6bea1b3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 20:29:28.252491 containerd[1468]: 2024-09-04 20:29:28.177 [INFO][4047] k8s.go 387: Calico CNI using IPs: [192.168.6.3/32] ContainerID="c7b606b1cb58234bd709f60b39a0743f169667da1e0bde1b7da4ac77debeee45" Namespace="kube-system" Pod="coredns-5dd5756b68-fk4tb" WorkloadEndpoint="ci--3975.2.1--5--b3ba9b7107-k8s-coredns--5dd5756b68--fk4tb-eth0" Sep 4 20:29:28.252491 containerd[1468]: 2024-09-04 20:29:28.177 [INFO][4047] dataplane_linux.go 68: Setting the host side veth name to cali496d6bea1b3 ContainerID="c7b606b1cb58234bd709f60b39a0743f169667da1e0bde1b7da4ac77debeee45" Namespace="kube-system" Pod="coredns-5dd5756b68-fk4tb" WorkloadEndpoint="ci--3975.2.1--5--b3ba9b7107-k8s-coredns--5dd5756b68--fk4tb-eth0" Sep 4 20:29:28.252491 containerd[1468]: 2024-09-04 20:29:28.182 [INFO][4047] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="c7b606b1cb58234bd709f60b39a0743f169667da1e0bde1b7da4ac77debeee45" Namespace="kube-system" Pod="coredns-5dd5756b68-fk4tb" 
WorkloadEndpoint="ci--3975.2.1--5--b3ba9b7107-k8s-coredns--5dd5756b68--fk4tb-eth0" Sep 4 20:29:28.252491 containerd[1468]: 2024-09-04 20:29:28.182 [INFO][4047] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c7b606b1cb58234bd709f60b39a0743f169667da1e0bde1b7da4ac77debeee45" Namespace="kube-system" Pod="coredns-5dd5756b68-fk4tb" WorkloadEndpoint="ci--3975.2.1--5--b3ba9b7107-k8s-coredns--5dd5756b68--fk4tb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--5--b3ba9b7107-k8s-coredns--5dd5756b68--fk4tb-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"f347db77-68c1-4005-a444-424ceab37966", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 20, 28, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-5-b3ba9b7107", ContainerID:"c7b606b1cb58234bd709f60b39a0743f169667da1e0bde1b7da4ac77debeee45", Pod:"coredns-5dd5756b68-fk4tb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.6.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali496d6bea1b3", MAC:"8e:93:f6:31:cf:7a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 20:29:28.252491 containerd[1468]: 2024-09-04 20:29:28.228 [INFO][4047] k8s.go 500: Wrote updated endpoint to datastore ContainerID="c7b606b1cb58234bd709f60b39a0743f169667da1e0bde1b7da4ac77debeee45" Namespace="kube-system" Pod="coredns-5dd5756b68-fk4tb" WorkloadEndpoint="ci--3975.2.1--5--b3ba9b7107-k8s-coredns--5dd5756b68--fk4tb-eth0" Sep 4 20:29:28.328691 systemd-networkd[1370]: calif5e08b3f82b: Link UP Sep 4 20:29:28.328855 systemd-networkd[1370]: calif5e08b3f82b: Gained carrier Sep 4 20:29:28.370712 containerd[1468]: 2024-09-04 20:29:27.856 [INFO][4035] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.2.1--5--b3ba9b7107-k8s-coredns--5dd5756b68--966rt-eth0 coredns-5dd5756b68- kube-system f15b71f8-033b-4890-a1b4-2a97c47ca461 827 0 2024-09-04 20:28:53 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3975.2.1-5-b3ba9b7107 coredns-5dd5756b68-966rt eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif5e08b3f82b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="44c2399eba62b27b0b449a89574db37f1f2545c3f9e3f28a435c3df59630ff7c" Namespace="kube-system" Pod="coredns-5dd5756b68-966rt" WorkloadEndpoint="ci--3975.2.1--5--b3ba9b7107-k8s-coredns--5dd5756b68--966rt-" Sep 4 20:29:28.370712 containerd[1468]: 2024-09-04 20:29:27.856 [INFO][4035] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="44c2399eba62b27b0b449a89574db37f1f2545c3f9e3f28a435c3df59630ff7c" Namespace="kube-system" 
Pod="coredns-5dd5756b68-966rt" WorkloadEndpoint="ci--3975.2.1--5--b3ba9b7107-k8s-coredns--5dd5756b68--966rt-eth0" Sep 4 20:29:28.370712 containerd[1468]: 2024-09-04 20:29:28.028 [INFO][4060] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="44c2399eba62b27b0b449a89574db37f1f2545c3f9e3f28a435c3df59630ff7c" HandleID="k8s-pod-network.44c2399eba62b27b0b449a89574db37f1f2545c3f9e3f28a435c3df59630ff7c" Workload="ci--3975.2.1--5--b3ba9b7107-k8s-coredns--5dd5756b68--966rt-eth0" Sep 4 20:29:28.370712 containerd[1468]: 2024-09-04 20:29:28.058 [INFO][4060] ipam_plugin.go 270: Auto assigning IP ContainerID="44c2399eba62b27b0b449a89574db37f1f2545c3f9e3f28a435c3df59630ff7c" HandleID="k8s-pod-network.44c2399eba62b27b0b449a89574db37f1f2545c3f9e3f28a435c3df59630ff7c" Workload="ci--3975.2.1--5--b3ba9b7107-k8s-coredns--5dd5756b68--966rt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000364300), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3975.2.1-5-b3ba9b7107", "pod":"coredns-5dd5756b68-966rt", "timestamp":"2024-09-04 20:29:28.028582689 +0000 UTC"}, Hostname:"ci-3975.2.1-5-b3ba9b7107", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 20:29:28.370712 containerd[1468]: 2024-09-04 20:29:28.060 [INFO][4060] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 20:29:28.370712 containerd[1468]: 2024-09-04 20:29:28.159 [INFO][4060] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 20:29:28.370712 containerd[1468]: 2024-09-04 20:29:28.162 [INFO][4060] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.2.1-5-b3ba9b7107' Sep 4 20:29:28.370712 containerd[1468]: 2024-09-04 20:29:28.169 [INFO][4060] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.44c2399eba62b27b0b449a89574db37f1f2545c3f9e3f28a435c3df59630ff7c" host="ci-3975.2.1-5-b3ba9b7107" Sep 4 20:29:28.370712 containerd[1468]: 2024-09-04 20:29:28.187 [INFO][4060] ipam.go 372: Looking up existing affinities for host host="ci-3975.2.1-5-b3ba9b7107" Sep 4 20:29:28.370712 containerd[1468]: 2024-09-04 20:29:28.199 [INFO][4060] ipam.go 489: Trying affinity for 192.168.6.0/26 host="ci-3975.2.1-5-b3ba9b7107" Sep 4 20:29:28.370712 containerd[1468]: 2024-09-04 20:29:28.207 [INFO][4060] ipam.go 155: Attempting to load block cidr=192.168.6.0/26 host="ci-3975.2.1-5-b3ba9b7107" Sep 4 20:29:28.370712 containerd[1468]: 2024-09-04 20:29:28.246 [INFO][4060] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.6.0/26 host="ci-3975.2.1-5-b3ba9b7107" Sep 4 20:29:28.370712 containerd[1468]: 2024-09-04 20:29:28.246 [INFO][4060] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.6.0/26 handle="k8s-pod-network.44c2399eba62b27b0b449a89574db37f1f2545c3f9e3f28a435c3df59630ff7c" host="ci-3975.2.1-5-b3ba9b7107" Sep 4 20:29:28.370712 containerd[1468]: 2024-09-04 20:29:28.267 [INFO][4060] ipam.go 1685: Creating new handle: k8s-pod-network.44c2399eba62b27b0b449a89574db37f1f2545c3f9e3f28a435c3df59630ff7c Sep 4 20:29:28.370712 containerd[1468]: 2024-09-04 20:29:28.279 [INFO][4060] ipam.go 1203: Writing block in order to claim IPs block=192.168.6.0/26 handle="k8s-pod-network.44c2399eba62b27b0b449a89574db37f1f2545c3f9e3f28a435c3df59630ff7c" host="ci-3975.2.1-5-b3ba9b7107" Sep 4 20:29:28.370712 containerd[1468]: 2024-09-04 20:29:28.300 [INFO][4060] ipam.go 1216: Successfully claimed IPs: [192.168.6.4/26] block=192.168.6.0/26 
handle="k8s-pod-network.44c2399eba62b27b0b449a89574db37f1f2545c3f9e3f28a435c3df59630ff7c" host="ci-3975.2.1-5-b3ba9b7107" Sep 4 20:29:28.370712 containerd[1468]: 2024-09-04 20:29:28.300 [INFO][4060] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.6.4/26] handle="k8s-pod-network.44c2399eba62b27b0b449a89574db37f1f2545c3f9e3f28a435c3df59630ff7c" host="ci-3975.2.1-5-b3ba9b7107" Sep 4 20:29:28.370712 containerd[1468]: 2024-09-04 20:29:28.300 [INFO][4060] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 20:29:28.370712 containerd[1468]: 2024-09-04 20:29:28.300 [INFO][4060] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.6.4/26] IPv6=[] ContainerID="44c2399eba62b27b0b449a89574db37f1f2545c3f9e3f28a435c3df59630ff7c" HandleID="k8s-pod-network.44c2399eba62b27b0b449a89574db37f1f2545c3f9e3f28a435c3df59630ff7c" Workload="ci--3975.2.1--5--b3ba9b7107-k8s-coredns--5dd5756b68--966rt-eth0" Sep 4 20:29:28.373774 containerd[1468]: 2024-09-04 20:29:28.318 [INFO][4035] k8s.go 386: Populated endpoint ContainerID="44c2399eba62b27b0b449a89574db37f1f2545c3f9e3f28a435c3df59630ff7c" Namespace="kube-system" Pod="coredns-5dd5756b68-966rt" WorkloadEndpoint="ci--3975.2.1--5--b3ba9b7107-k8s-coredns--5dd5756b68--966rt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--5--b3ba9b7107-k8s-coredns--5dd5756b68--966rt-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"f15b71f8-033b-4890-a1b4-2a97c47ca461", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 20, 28, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-5-b3ba9b7107", ContainerID:"", Pod:"coredns-5dd5756b68-966rt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.6.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif5e08b3f82b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 20:29:28.373774 containerd[1468]: 2024-09-04 20:29:28.318 [INFO][4035] k8s.go 387: Calico CNI using IPs: [192.168.6.4/32] ContainerID="44c2399eba62b27b0b449a89574db37f1f2545c3f9e3f28a435c3df59630ff7c" Namespace="kube-system" Pod="coredns-5dd5756b68-966rt" WorkloadEndpoint="ci--3975.2.1--5--b3ba9b7107-k8s-coredns--5dd5756b68--966rt-eth0" Sep 4 20:29:28.373774 containerd[1468]: 2024-09-04 20:29:28.318 [INFO][4035] dataplane_linux.go 68: Setting the host side veth name to calif5e08b3f82b ContainerID="44c2399eba62b27b0b449a89574db37f1f2545c3f9e3f28a435c3df59630ff7c" Namespace="kube-system" Pod="coredns-5dd5756b68-966rt" WorkloadEndpoint="ci--3975.2.1--5--b3ba9b7107-k8s-coredns--5dd5756b68--966rt-eth0" Sep 4 20:29:28.373774 containerd[1468]: 2024-09-04 20:29:28.326 [INFO][4035] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="44c2399eba62b27b0b449a89574db37f1f2545c3f9e3f28a435c3df59630ff7c" Namespace="kube-system" Pod="coredns-5dd5756b68-966rt" 
WorkloadEndpoint="ci--3975.2.1--5--b3ba9b7107-k8s-coredns--5dd5756b68--966rt-eth0" Sep 4 20:29:28.373774 containerd[1468]: 2024-09-04 20:29:28.327 [INFO][4035] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="44c2399eba62b27b0b449a89574db37f1f2545c3f9e3f28a435c3df59630ff7c" Namespace="kube-system" Pod="coredns-5dd5756b68-966rt" WorkloadEndpoint="ci--3975.2.1--5--b3ba9b7107-k8s-coredns--5dd5756b68--966rt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--5--b3ba9b7107-k8s-coredns--5dd5756b68--966rt-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"f15b71f8-033b-4890-a1b4-2a97c47ca461", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 20, 28, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-5-b3ba9b7107", ContainerID:"44c2399eba62b27b0b449a89574db37f1f2545c3f9e3f28a435c3df59630ff7c", Pod:"coredns-5dd5756b68-966rt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.6.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif5e08b3f82b", MAC:"36:2d:6f:34:a4:a2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 20:29:28.373774 containerd[1468]: 2024-09-04 20:29:28.348 [INFO][4035] k8s.go 500: Wrote updated endpoint to datastore ContainerID="44c2399eba62b27b0b449a89574db37f1f2545c3f9e3f28a435c3df59630ff7c" Namespace="kube-system" Pod="coredns-5dd5756b68-966rt" WorkloadEndpoint="ci--3975.2.1--5--b3ba9b7107-k8s-coredns--5dd5756b68--966rt-eth0" Sep 4 20:29:28.435993 containerd[1468]: time="2024-09-04T20:29:28.433892164Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 20:29:28.435993 containerd[1468]: time="2024-09-04T20:29:28.433999123Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 20:29:28.435993 containerd[1468]: time="2024-09-04T20:29:28.434021567Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 20:29:28.435993 containerd[1468]: time="2024-09-04T20:29:28.434035646Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 20:29:28.454603 sshd[4029]: pam_unix(sshd:session): session closed for user core Sep 4 20:29:28.465850 systemd[1]: sshd@10-143.198.146.52:22-139.178.68.195:35996.service: Deactivated successfully. Sep 4 20:29:28.471762 systemd[1]: session-11.scope: Deactivated successfully. Sep 4 20:29:28.474350 systemd-logind[1447]: Session 11 logged out. Waiting for processes to exit. 
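The [INFO][4069] and [INFO][4060] sequences above trace Calico's IPAM path: acquire the host-wide lock, load the block with affinity to this host (192.168.6.0/26), claim the next free address, and write the block back. A toy model of that flow (a simplification; the real ipam.go adds CAS retries, handle tracking, and multi-block fallback):

```python
# Toy model of the Calico IPAM steps visible in the log records above.
import ipaddress
import threading

class BlockIPAM:
    def __init__(self, cidr):
        self.block = ipaddress.ip_network(cidr)  # e.g. the affine 192.168.6.0/26
        self.allocated = set()
        self._lock = threading.Lock()            # stands in for the host-wide IPAM lock

    def auto_assign(self):
        with self._lock:                         # "Acquired host-wide IPAM lock."
            for ip in self.block.hosts():        # "Attempting to assign 1 addresses from block"
                if ip not in self.allocated:
                    self.allocated.add(ip)       # "Writing block in order to claim IPs"
                    return str(ip)               # "Successfully claimed IPs"
        raise RuntimeError("block exhausted")    # real IPAM would fall back to another block

ipam = BlockIPAM("192.168.6.0/26")
# Pre-seed .1 and .2 as taken, consistent with the .3/.4 claims logged above.
ipam.allocated.update(ipaddress.ip_address(a) for a in ("192.168.6.1", "192.168.6.2"))
ip = ipam.auto_assign()  # claims 192.168.6.3, as logged for coredns-5dd5756b68-fk4tb
```

The second request ([INFO][4060]) blocks on the same lock until the first releases it, which is why its "Acquired host-wide IPAM lock" line lands a full 100 ms after "About to acquire".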
Sep 4 20:29:28.476103 containerd[1468]: time="2024-09-04T20:29:28.473814904Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 20:29:28.476103 containerd[1468]: time="2024-09-04T20:29:28.473890404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 20:29:28.476103 containerd[1468]: time="2024-09-04T20:29:28.473920669Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 20:29:28.476103 containerd[1468]: time="2024-09-04T20:29:28.473935297Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 20:29:28.476658 systemd-logind[1447]: Removed session 11. Sep 4 20:29:28.499787 systemd[1]: Started cri-containerd-c7b606b1cb58234bd709f60b39a0743f169667da1e0bde1b7da4ac77debeee45.scope - libcontainer container c7b606b1cb58234bd709f60b39a0743f169667da1e0bde1b7da4ac77debeee45. Sep 4 20:29:28.551546 systemd[1]: Started cri-containerd-44c2399eba62b27b0b449a89574db37f1f2545c3f9e3f28a435c3df59630ff7c.scope - libcontainer container 44c2399eba62b27b0b449a89574db37f1f2545c3f9e3f28a435c3df59630ff7c. 
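The WorkloadEndpoint dumps above print ports in hex (Port:0x35, Port:0x23c1). Decoding them confirms these are the standard CoreDNS ports; this is just arithmetic on the logged values:

```python
# Decode the hex port values from the v3.WorkloadEndpointPort dumps above.
ports = {"dns": 0x35, "dns-tcp": 0x35, "metrics": 0x23c1}
decoded = {name: int(value) for name, value in ports.items()}
# dns and dns-tcp are 53; metrics is 9153, the usual CoreDNS Prometheus port
```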
Sep 4 20:29:28.671167 containerd[1468]: time="2024-09-04T20:29:28.668878046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-966rt,Uid:f15b71f8-033b-4890-a1b4-2a97c47ca461,Namespace:kube-system,Attempt:1,} returns sandbox id \"44c2399eba62b27b0b449a89574db37f1f2545c3f9e3f28a435c3df59630ff7c\"" Sep 4 20:29:28.671354 kubelet[2523]: E0904 20:29:28.670905 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 4 20:29:28.672256 containerd[1468]: time="2024-09-04T20:29:28.672211348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-fk4tb,Uid:f347db77-68c1-4005-a444-424ceab37966,Namespace:kube-system,Attempt:1,} returns sandbox id \"c7b606b1cb58234bd709f60b39a0743f169667da1e0bde1b7da4ac77debeee45\"" Sep 4 20:29:28.679392 kubelet[2523]: E0904 20:29:28.679326 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 4 20:29:28.685008 containerd[1468]: time="2024-09-04T20:29:28.684943109Z" level=info msg="CreateContainer within sandbox \"44c2399eba62b27b0b449a89574db37f1f2545c3f9e3f28a435c3df59630ff7c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 20:29:28.691122 containerd[1468]: time="2024-09-04T20:29:28.690973827Z" level=info msg="CreateContainer within sandbox \"c7b606b1cb58234bd709f60b39a0743f169667da1e0bde1b7da4ac77debeee45\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 20:29:28.720215 containerd[1468]: time="2024-09-04T20:29:28.715543157Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 20:29:28.720215 containerd[1468]: time="2024-09-04T20:29:28.717686282Z" level=info msg="CreateContainer within 
sandbox \"44c2399eba62b27b0b449a89574db37f1f2545c3f9e3f28a435c3df59630ff7c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c044459ffdfb3c9d8613fc6d2562c701da53fbacccb7a0745c8ed5a02d7612c0\"" Sep 4 20:29:28.722454 containerd[1468]: time="2024-09-04T20:29:28.722407681Z" level=info msg="StartContainer for \"c044459ffdfb3c9d8613fc6d2562c701da53fbacccb7a0745c8ed5a02d7612c0\"" Sep 4 20:29:28.728381 containerd[1468]: time="2024-09-04T20:29:28.727556612Z" level=info msg="CreateContainer within sandbox \"c7b606b1cb58234bd709f60b39a0743f169667da1e0bde1b7da4ac77debeee45\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b79db75ebd7e11de9f9918f8a3e2cd0507064f8e981a07f0832227c46858c7db\"" Sep 4 20:29:28.731097 containerd[1468]: time="2024-09-04T20:29:28.729158543Z" level=info msg="ImageCreate event name:\"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 20:29:28.731097 containerd[1468]: time="2024-09-04T20:29:28.729612121Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.1: active requests=0, bytes read=33507125" Sep 4 20:29:28.737123 containerd[1468]: time="2024-09-04T20:29:28.737037953Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 20:29:28.769545 containerd[1468]: time="2024-09-04T20:29:28.737586457Z" level=info msg="StartContainer for \"b79db75ebd7e11de9f9918f8a3e2cd0507064f8e981a07f0832227c46858c7db\"" Sep 4 20:29:28.772557 systemd[1]: Started cri-containerd-c044459ffdfb3c9d8613fc6d2562c701da53fbacccb7a0745c8ed5a02d7612c0.scope - libcontainer container c044459ffdfb3c9d8613fc6d2562c701da53fbacccb7a0745c8ed5a02d7612c0. 
Sep 4 20:29:28.780211 containerd[1468]: time="2024-09-04T20:29:28.738133907Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" with image id \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\", size \"34999494\" in 3.713252057s" Sep 4 20:29:28.780700 containerd[1468]: time="2024-09-04T20:29:28.780666309Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" returns image reference \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\"" Sep 4 20:29:28.789454 containerd[1468]: time="2024-09-04T20:29:28.789341074Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\"" Sep 4 20:29:28.812370 containerd[1468]: time="2024-09-04T20:29:28.812288728Z" level=info msg="CreateContainer within sandbox \"23d93c9190d4e7d3be1864d02b137d148cc40d2471c8e2080a8e4a46770f424b\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 4 20:29:28.835504 containerd[1468]: time="2024-09-04T20:29:28.835447498Z" level=info msg="CreateContainer within sandbox \"23d93c9190d4e7d3be1864d02b137d148cc40d2471c8e2080a8e4a46770f424b\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"468b302734e18195ebe48bacb1f6be21a92bef9a6cebeb0d847df276275b448a\"" Sep 4 20:29:28.838532 containerd[1468]: time="2024-09-04T20:29:28.838169939Z" level=info msg="StartContainer for \"468b302734e18195ebe48bacb1f6be21a92bef9a6cebeb0d847df276275b448a\"" Sep 4 20:29:28.838318 systemd[1]: Started cri-containerd-b79db75ebd7e11de9f9918f8a3e2cd0507064f8e981a07f0832227c46858c7db.scope - libcontainer container b79db75ebd7e11de9f9918f8a3e2cd0507064f8e981a07f0832227c46858c7db. 
Sep 4 20:29:28.855217 containerd[1468]: time="2024-09-04T20:29:28.855160446Z" level=info msg="StartContainer for \"c044459ffdfb3c9d8613fc6d2562c701da53fbacccb7a0745c8ed5a02d7612c0\" returns successfully" Sep 4 20:29:28.921044 systemd[1]: Started cri-containerd-468b302734e18195ebe48bacb1f6be21a92bef9a6cebeb0d847df276275b448a.scope - libcontainer container 468b302734e18195ebe48bacb1f6be21a92bef9a6cebeb0d847df276275b448a. Sep 4 20:29:28.926725 containerd[1468]: time="2024-09-04T20:29:28.926529218Z" level=info msg="StartContainer for \"b79db75ebd7e11de9f9918f8a3e2cd0507064f8e981a07f0832227c46858c7db\" returns successfully" Sep 4 20:29:29.082977 containerd[1468]: time="2024-09-04T20:29:29.082524001Z" level=info msg="StartContainer for \"468b302734e18195ebe48bacb1f6be21a92bef9a6cebeb0d847df276275b448a\" returns successfully" Sep 4 20:29:29.420943 systemd-networkd[1370]: calif5e08b3f82b: Gained IPv6LL Sep 4 20:29:29.804441 systemd-networkd[1370]: cali496d6bea1b3: Gained IPv6LL Sep 4 20:29:29.817815 kubelet[2523]: E0904 20:29:29.817767 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 4 20:29:29.820721 kubelet[2523]: E0904 20:29:29.819740 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 4 20:29:29.873378 kubelet[2523]: I0904 20:29:29.873261 2523 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-fk4tb" podStartSLOduration=36.873189585 podCreationTimestamp="2024-09-04 20:28:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 20:29:29.840109995 +0000 UTC m=+50.683092080" watchObservedRunningTime="2024-09-04 20:29:29.873189585 +0000 UTC 
m=+50.716171661" Sep 4 20:29:29.874373 kubelet[2523]: I0904 20:29:29.874130 2523 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-966rt" podStartSLOduration=36.873799908 podCreationTimestamp="2024-09-04 20:28:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 20:29:29.868213255 +0000 UTC m=+50.711195339" watchObservedRunningTime="2024-09-04 20:29:29.873799908 +0000 UTC m=+50.716781989" Sep 4 20:29:29.908138 kubelet[2523]: I0904 20:29:29.907931 2523 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-d85dc74c8-bfxxg" podStartSLOduration=26.148758942 podCreationTimestamp="2024-09-04 20:29:00 +0000 UTC" firstStartedPulling="2024-09-04 20:29:25.02253263 +0000 UTC m=+45.865514688" lastFinishedPulling="2024-09-04 20:29:28.781642638 +0000 UTC m=+49.624624722" observedRunningTime="2024-09-04 20:29:29.907131609 +0000 UTC m=+50.750113705" watchObservedRunningTime="2024-09-04 20:29:29.907868976 +0000 UTC m=+50.750851067" Sep 4 20:29:30.332131 containerd[1468]: time="2024-09-04T20:29:30.332035487Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 20:29:30.333855 containerd[1468]: time="2024-09-04T20:29:30.333624401Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.1: active requests=0, bytes read=7642081" Sep 4 20:29:30.335403 containerd[1468]: time="2024-09-04T20:29:30.335318855Z" level=info msg="ImageCreate event name:\"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 20:29:30.339567 containerd[1468]: time="2024-09-04T20:29:30.339370438Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 20:29:30.340998 containerd[1468]: time="2024-09-04T20:29:30.340602182Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.1\" with image id \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\", size \"9134482\" in 1.550611477s" Sep 4 20:29:30.340998 containerd[1468]: time="2024-09-04T20:29:30.340672383Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\" returns image reference \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\"" Sep 4 20:29:30.346714 containerd[1468]: time="2024-09-04T20:29:30.346497047Z" level=info msg="CreateContainer within sandbox \"9dc32a33ce8f9bfe77e542d6efef5d1bb3712e061549c74093edcd3560aaf267\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 4 20:29:30.377672 containerd[1468]: time="2024-09-04T20:29:30.377477946Z" level=info msg="CreateContainer within sandbox \"9dc32a33ce8f9bfe77e542d6efef5d1bb3712e061549c74093edcd3560aaf267\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"61e4ea3a7b84996d7bf29593a1c8c1663055647242228762cfef8249a8d94f59\"" Sep 4 20:29:30.380555 containerd[1468]: time="2024-09-04T20:29:30.380497051Z" level=info msg="StartContainer for \"61e4ea3a7b84996d7bf29593a1c8c1663055647242228762cfef8249a8d94f59\"" Sep 4 20:29:30.431257 systemd[1]: Started cri-containerd-61e4ea3a7b84996d7bf29593a1c8c1663055647242228762cfef8249a8d94f59.scope - libcontainer container 61e4ea3a7b84996d7bf29593a1c8c1663055647242228762cfef8249a8d94f59. 
Sep 4 20:29:30.491831 containerd[1468]: time="2024-09-04T20:29:30.491758216Z" level=info msg="StartContainer for \"61e4ea3a7b84996d7bf29593a1c8c1663055647242228762cfef8249a8d94f59\" returns successfully" Sep 4 20:29:30.496213 containerd[1468]: time="2024-09-04T20:29:30.496141472Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\"" Sep 4 20:29:30.843889 kubelet[2523]: E0904 20:29:30.843739 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 4 20:29:30.845706 kubelet[2523]: E0904 20:29:30.844294 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 4 20:29:31.858852 kubelet[2523]: E0904 20:29:31.858701 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 4 20:29:31.871517 kubelet[2523]: E0904 20:29:31.869766 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 4 20:29:32.277907 containerd[1468]: time="2024-09-04T20:29:32.277837437Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 20:29:32.281844 containerd[1468]: time="2024-09-04T20:29:32.279588755Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1: active requests=0, bytes read=12907822" Sep 4 20:29:32.281844 containerd[1468]: time="2024-09-04T20:29:32.280705038Z" level=info msg="ImageCreate event name:\"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 20:29:32.284331 containerd[1468]: time="2024-09-04T20:29:32.284269333Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 20:29:32.285789 containerd[1468]: time="2024-09-04T20:29:32.285720972Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" with image id \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\", size \"14400175\" in 1.789513606s" Sep 4 20:29:32.285789 containerd[1468]: time="2024-09-04T20:29:32.285788558Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" returns image reference \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\"" Sep 4 20:29:32.295343 containerd[1468]: time="2024-09-04T20:29:32.295266333Z" level=info msg="CreateContainer within sandbox \"9dc32a33ce8f9bfe77e542d6efef5d1bb3712e061549c74093edcd3560aaf267\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 4 20:29:32.319440 containerd[1468]: time="2024-09-04T20:29:32.319376735Z" level=info msg="CreateContainer within sandbox \"9dc32a33ce8f9bfe77e542d6efef5d1bb3712e061549c74093edcd3560aaf267\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"6e7f5c8720e1c10000f32d32fa16b31fa182fde6fe8f19724c2542a2f36537b9\"" Sep 4 20:29:32.321054 containerd[1468]: time="2024-09-04T20:29:32.320970502Z" level=info msg="StartContainer for \"6e7f5c8720e1c10000f32d32fa16b31fa182fde6fe8f19724c2542a2f36537b9\"" Sep 4 20:29:32.396879 systemd[1]: Started 
cri-containerd-6e7f5c8720e1c10000f32d32fa16b31fa182fde6fe8f19724c2542a2f36537b9.scope - libcontainer container 6e7f5c8720e1c10000f32d32fa16b31fa182fde6fe8f19724c2542a2f36537b9. Sep 4 20:29:32.472013 containerd[1468]: time="2024-09-04T20:29:32.471895351Z" level=info msg="StartContainer for \"6e7f5c8720e1c10000f32d32fa16b31fa182fde6fe8f19724c2542a2f36537b9\" returns successfully" Sep 4 20:29:32.576860 kubelet[2523]: I0904 20:29:32.576701 2523 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 4 20:29:32.576860 kubelet[2523]: I0904 20:29:32.576794 2523 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 4 20:29:32.857097 kubelet[2523]: E0904 20:29:32.856954 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 4 20:29:32.891757 kubelet[2523]: I0904 20:29:32.891700 2523 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-wkgcv" podStartSLOduration=26.410946774 podCreationTimestamp="2024-09-04 20:29:00 +0000 UTC" firstStartedPulling="2024-09-04 20:29:25.808322891 +0000 UTC m=+46.651304962" lastFinishedPulling="2024-09-04 20:29:32.286407621 +0000 UTC m=+53.129389686" observedRunningTime="2024-09-04 20:29:32.888700563 +0000 UTC m=+53.731682648" watchObservedRunningTime="2024-09-04 20:29:32.889031498 +0000 UTC m=+53.732013589" Sep 4 20:29:33.476923 systemd[1]: Started sshd@11-143.198.146.52:22-139.178.68.195:36006.service - OpenSSH per-connection server daemon (139.178.68.195:36006). 
Sep 4 20:29:33.569290 sshd[4430]: Accepted publickey for core from 139.178.68.195 port 36006 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY Sep 4 20:29:33.572627 sshd[4430]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 20:29:33.584696 systemd-logind[1447]: New session 12 of user core. Sep 4 20:29:33.590427 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 4 20:29:33.883589 sshd[4430]: pam_unix(sshd:session): session closed for user core Sep 4 20:29:33.889558 systemd-logind[1447]: Session 12 logged out. Waiting for processes to exit. Sep 4 20:29:33.889780 systemd[1]: sshd@11-143.198.146.52:22-139.178.68.195:36006.service: Deactivated successfully. Sep 4 20:29:33.892086 systemd[1]: session-12.scope: Deactivated successfully. Sep 4 20:29:33.894371 systemd-logind[1447]: Removed session 12. Sep 4 20:29:38.132871 kubelet[2523]: E0904 20:29:38.132800 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 4 20:29:38.908547 systemd[1]: Started sshd@12-143.198.146.52:22-139.178.68.195:55204.service - OpenSSH per-connection server daemon (139.178.68.195:55204). Sep 4 20:29:38.993202 sshd[4466]: Accepted publickey for core from 139.178.68.195 port 55204 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY Sep 4 20:29:38.996568 sshd[4466]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 20:29:39.011574 systemd-logind[1447]: New session 13 of user core. Sep 4 20:29:39.016923 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 4 20:29:39.288327 sshd[4466]: pam_unix(sshd:session): session closed for user core Sep 4 20:29:39.304503 systemd[1]: sshd@12-143.198.146.52:22-139.178.68.195:55204.service: Deactivated successfully. Sep 4 20:29:39.310283 systemd[1]: session-13.scope: Deactivated successfully. 
Sep 4 20:29:39.317489 systemd-logind[1447]: Session 13 logged out. Waiting for processes to exit. Sep 4 20:29:39.328032 systemd[1]: Started sshd@13-143.198.146.52:22-139.178.68.195:55218.service - OpenSSH per-connection server daemon (139.178.68.195:55218). Sep 4 20:29:39.335274 systemd-logind[1447]: Removed session 13. Sep 4 20:29:39.409704 containerd[1468]: time="2024-09-04T20:29:39.409284930Z" level=info msg="StopPodSandbox for \"c20c5b63e11f42c5f851e1860eff3f0d0e4e32aa5c6cd2761dd97b96c0be2b58\"" Sep 4 20:29:39.422136 sshd[4480]: Accepted publickey for core from 139.178.68.195 port 55218 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY Sep 4 20:29:39.422747 sshd[4480]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 20:29:39.439408 systemd-logind[1447]: New session 14 of user core. Sep 4 20:29:39.446429 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 4 20:29:39.704026 containerd[1468]: 2024-09-04 20:29:39.572 [WARNING][4496] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c20c5b63e11f42c5f851e1860eff3f0d0e4e32aa5c6cd2761dd97b96c0be2b58" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--5--b3ba9b7107-k8s-coredns--5dd5756b68--fk4tb-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"f347db77-68c1-4005-a444-424ceab37966", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 20, 28, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-5-b3ba9b7107", ContainerID:"c7b606b1cb58234bd709f60b39a0743f169667da1e0bde1b7da4ac77debeee45", Pod:"coredns-5dd5756b68-fk4tb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.6.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali496d6bea1b3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 20:29:39.704026 containerd[1468]: 2024-09-04 20:29:39.572 [INFO][4496] k8s.go 608: 
Cleaning up netns ContainerID="c20c5b63e11f42c5f851e1860eff3f0d0e4e32aa5c6cd2761dd97b96c0be2b58" Sep 4 20:29:39.704026 containerd[1468]: 2024-09-04 20:29:39.572 [INFO][4496] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="c20c5b63e11f42c5f851e1860eff3f0d0e4e32aa5c6cd2761dd97b96c0be2b58" iface="eth0" netns="" Sep 4 20:29:39.704026 containerd[1468]: 2024-09-04 20:29:39.572 [INFO][4496] k8s.go 615: Releasing IP address(es) ContainerID="c20c5b63e11f42c5f851e1860eff3f0d0e4e32aa5c6cd2761dd97b96c0be2b58" Sep 4 20:29:39.704026 containerd[1468]: 2024-09-04 20:29:39.573 [INFO][4496] utils.go 188: Calico CNI releasing IP address ContainerID="c20c5b63e11f42c5f851e1860eff3f0d0e4e32aa5c6cd2761dd97b96c0be2b58" Sep 4 20:29:39.704026 containerd[1468]: 2024-09-04 20:29:39.660 [INFO][4508] ipam_plugin.go 417: Releasing address using handleID ContainerID="c20c5b63e11f42c5f851e1860eff3f0d0e4e32aa5c6cd2761dd97b96c0be2b58" HandleID="k8s-pod-network.c20c5b63e11f42c5f851e1860eff3f0d0e4e32aa5c6cd2761dd97b96c0be2b58" Workload="ci--3975.2.1--5--b3ba9b7107-k8s-coredns--5dd5756b68--fk4tb-eth0" Sep 4 20:29:39.704026 containerd[1468]: 2024-09-04 20:29:39.663 [INFO][4508] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 20:29:39.704026 containerd[1468]: 2024-09-04 20:29:39.663 [INFO][4508] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 20:29:39.704026 containerd[1468]: 2024-09-04 20:29:39.689 [WARNING][4508] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c20c5b63e11f42c5f851e1860eff3f0d0e4e32aa5c6cd2761dd97b96c0be2b58" HandleID="k8s-pod-network.c20c5b63e11f42c5f851e1860eff3f0d0e4e32aa5c6cd2761dd97b96c0be2b58" Workload="ci--3975.2.1--5--b3ba9b7107-k8s-coredns--5dd5756b68--fk4tb-eth0" Sep 4 20:29:39.704026 containerd[1468]: 2024-09-04 20:29:39.689 [INFO][4508] ipam_plugin.go 445: Releasing address using workloadID ContainerID="c20c5b63e11f42c5f851e1860eff3f0d0e4e32aa5c6cd2761dd97b96c0be2b58" HandleID="k8s-pod-network.c20c5b63e11f42c5f851e1860eff3f0d0e4e32aa5c6cd2761dd97b96c0be2b58" Workload="ci--3975.2.1--5--b3ba9b7107-k8s-coredns--5dd5756b68--fk4tb-eth0" Sep 4 20:29:39.704026 containerd[1468]: 2024-09-04 20:29:39.692 [INFO][4508] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 20:29:39.704026 containerd[1468]: 2024-09-04 20:29:39.696 [INFO][4496] k8s.go 621: Teardown processing complete. ContainerID="c20c5b63e11f42c5f851e1860eff3f0d0e4e32aa5c6cd2761dd97b96c0be2b58" Sep 4 20:29:39.708622 containerd[1468]: time="2024-09-04T20:29:39.705161006Z" level=info msg="TearDown network for sandbox \"c20c5b63e11f42c5f851e1860eff3f0d0e4e32aa5c6cd2761dd97b96c0be2b58\" successfully" Sep 4 20:29:39.708622 containerd[1468]: time="2024-09-04T20:29:39.706329019Z" level=info msg="StopPodSandbox for \"c20c5b63e11f42c5f851e1860eff3f0d0e4e32aa5c6cd2761dd97b96c0be2b58\" returns successfully" Sep 4 20:29:39.710812 containerd[1468]: time="2024-09-04T20:29:39.709444303Z" level=info msg="RemovePodSandbox for \"c20c5b63e11f42c5f851e1860eff3f0d0e4e32aa5c6cd2761dd97b96c0be2b58\"" Sep 4 20:29:39.710812 containerd[1468]: time="2024-09-04T20:29:39.710326962Z" level=info msg="Forcibly stopping sandbox \"c20c5b63e11f42c5f851e1860eff3f0d0e4e32aa5c6cd2761dd97b96c0be2b58\"" Sep 4 20:29:40.001511 containerd[1468]: 2024-09-04 20:29:39.878 [WARNING][4526] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c20c5b63e11f42c5f851e1860eff3f0d0e4e32aa5c6cd2761dd97b96c0be2b58" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--5--b3ba9b7107-k8s-coredns--5dd5756b68--fk4tb-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"f347db77-68c1-4005-a444-424ceab37966", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 20, 28, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-5-b3ba9b7107", ContainerID:"c7b606b1cb58234bd709f60b39a0743f169667da1e0bde1b7da4ac77debeee45", Pod:"coredns-5dd5756b68-fk4tb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.6.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali496d6bea1b3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 20:29:40.001511 containerd[1468]: 2024-09-04 20:29:39.884 [INFO][4526] k8s.go 608: 
Cleaning up netns ContainerID="c20c5b63e11f42c5f851e1860eff3f0d0e4e32aa5c6cd2761dd97b96c0be2b58" Sep 4 20:29:40.001511 containerd[1468]: 2024-09-04 20:29:39.884 [INFO][4526] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="c20c5b63e11f42c5f851e1860eff3f0d0e4e32aa5c6cd2761dd97b96c0be2b58" iface="eth0" netns="" Sep 4 20:29:40.001511 containerd[1468]: 2024-09-04 20:29:39.884 [INFO][4526] k8s.go 615: Releasing IP address(es) ContainerID="c20c5b63e11f42c5f851e1860eff3f0d0e4e32aa5c6cd2761dd97b96c0be2b58" Sep 4 20:29:40.001511 containerd[1468]: 2024-09-04 20:29:39.884 [INFO][4526] utils.go 188: Calico CNI releasing IP address ContainerID="c20c5b63e11f42c5f851e1860eff3f0d0e4e32aa5c6cd2761dd97b96c0be2b58" Sep 4 20:29:40.001511 containerd[1468]: 2024-09-04 20:29:39.969 [INFO][4532] ipam_plugin.go 417: Releasing address using handleID ContainerID="c20c5b63e11f42c5f851e1860eff3f0d0e4e32aa5c6cd2761dd97b96c0be2b58" HandleID="k8s-pod-network.c20c5b63e11f42c5f851e1860eff3f0d0e4e32aa5c6cd2761dd97b96c0be2b58" Workload="ci--3975.2.1--5--b3ba9b7107-k8s-coredns--5dd5756b68--fk4tb-eth0" Sep 4 20:29:40.001511 containerd[1468]: 2024-09-04 20:29:39.970 [INFO][4532] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 20:29:40.001511 containerd[1468]: 2024-09-04 20:29:39.970 [INFO][4532] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 20:29:40.001511 containerd[1468]: 2024-09-04 20:29:39.985 [WARNING][4532] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c20c5b63e11f42c5f851e1860eff3f0d0e4e32aa5c6cd2761dd97b96c0be2b58" HandleID="k8s-pod-network.c20c5b63e11f42c5f851e1860eff3f0d0e4e32aa5c6cd2761dd97b96c0be2b58" Workload="ci--3975.2.1--5--b3ba9b7107-k8s-coredns--5dd5756b68--fk4tb-eth0" Sep 4 20:29:40.001511 containerd[1468]: 2024-09-04 20:29:39.985 [INFO][4532] ipam_plugin.go 445: Releasing address using workloadID ContainerID="c20c5b63e11f42c5f851e1860eff3f0d0e4e32aa5c6cd2761dd97b96c0be2b58" HandleID="k8s-pod-network.c20c5b63e11f42c5f851e1860eff3f0d0e4e32aa5c6cd2761dd97b96c0be2b58" Workload="ci--3975.2.1--5--b3ba9b7107-k8s-coredns--5dd5756b68--fk4tb-eth0" Sep 4 20:29:40.001511 containerd[1468]: 2024-09-04 20:29:39.992 [INFO][4532] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 20:29:40.001511 containerd[1468]: 2024-09-04 20:29:39.997 [INFO][4526] k8s.go 621: Teardown processing complete. ContainerID="c20c5b63e11f42c5f851e1860eff3f0d0e4e32aa5c6cd2761dd97b96c0be2b58" Sep 4 20:29:40.005899 containerd[1468]: time="2024-09-04T20:29:40.003887047Z" level=info msg="TearDown network for sandbox \"c20c5b63e11f42c5f851e1860eff3f0d0e4e32aa5c6cd2761dd97b96c0be2b58\" successfully" Sep 4 20:29:40.017565 containerd[1468]: time="2024-09-04T20:29:40.017502868Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c20c5b63e11f42c5f851e1860eff3f0d0e4e32aa5c6cd2761dd97b96c0be2b58\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 20:29:40.017904 containerd[1468]: time="2024-09-04T20:29:40.017870082Z" level=info msg="RemovePodSandbox \"c20c5b63e11f42c5f851e1860eff3f0d0e4e32aa5c6cd2761dd97b96c0be2b58\" returns successfully" Sep 4 20:29:40.020254 containerd[1468]: time="2024-09-04T20:29:40.018654468Z" level=info msg="StopPodSandbox for \"a94e4351a8bd5f534df25a54132f3fef85187c884ed48a216fb6a0c4009e2a6f\"" Sep 4 20:29:40.190039 containerd[1468]: 2024-09-04 20:29:40.112 [WARNING][4550] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a94e4351a8bd5f534df25a54132f3fef85187c884ed48a216fb6a0c4009e2a6f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--5--b3ba9b7107-k8s-csi--node--driver--wkgcv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"19278f8b-d3ea-467e-a88b-64888b0edecc", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 20, 29, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-5-b3ba9b7107", ContainerID:"9dc32a33ce8f9bfe77e542d6efef5d1bb3712e061549c74093edcd3560aaf267", Pod:"csi-node-driver-wkgcv", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.6.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calif8f6a08191f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 20:29:40.190039 containerd[1468]: 2024-09-04 20:29:40.113 [INFO][4550] k8s.go 608: Cleaning up netns ContainerID="a94e4351a8bd5f534df25a54132f3fef85187c884ed48a216fb6a0c4009e2a6f" Sep 4 20:29:40.190039 containerd[1468]: 2024-09-04 20:29:40.113 [INFO][4550] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="a94e4351a8bd5f534df25a54132f3fef85187c884ed48a216fb6a0c4009e2a6f" iface="eth0" netns="" Sep 4 20:29:40.190039 containerd[1468]: 2024-09-04 20:29:40.113 [INFO][4550] k8s.go 615: Releasing IP address(es) ContainerID="a94e4351a8bd5f534df25a54132f3fef85187c884ed48a216fb6a0c4009e2a6f" Sep 4 20:29:40.190039 containerd[1468]: 2024-09-04 20:29:40.113 [INFO][4550] utils.go 188: Calico CNI releasing IP address ContainerID="a94e4351a8bd5f534df25a54132f3fef85187c884ed48a216fb6a0c4009e2a6f" Sep 4 20:29:40.190039 containerd[1468]: 2024-09-04 20:29:40.151 [INFO][4556] ipam_plugin.go 417: Releasing address using handleID ContainerID="a94e4351a8bd5f534df25a54132f3fef85187c884ed48a216fb6a0c4009e2a6f" HandleID="k8s-pod-network.a94e4351a8bd5f534df25a54132f3fef85187c884ed48a216fb6a0c4009e2a6f" Workload="ci--3975.2.1--5--b3ba9b7107-k8s-csi--node--driver--wkgcv-eth0" Sep 4 20:29:40.190039 containerd[1468]: 2024-09-04 20:29:40.151 [INFO][4556] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 20:29:40.190039 containerd[1468]: 2024-09-04 20:29:40.151 [INFO][4556] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 20:29:40.190039 containerd[1468]: 2024-09-04 20:29:40.163 [WARNING][4556] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a94e4351a8bd5f534df25a54132f3fef85187c884ed48a216fb6a0c4009e2a6f" HandleID="k8s-pod-network.a94e4351a8bd5f534df25a54132f3fef85187c884ed48a216fb6a0c4009e2a6f" Workload="ci--3975.2.1--5--b3ba9b7107-k8s-csi--node--driver--wkgcv-eth0" Sep 4 20:29:40.190039 containerd[1468]: 2024-09-04 20:29:40.163 [INFO][4556] ipam_plugin.go 445: Releasing address using workloadID ContainerID="a94e4351a8bd5f534df25a54132f3fef85187c884ed48a216fb6a0c4009e2a6f" HandleID="k8s-pod-network.a94e4351a8bd5f534df25a54132f3fef85187c884ed48a216fb6a0c4009e2a6f" Workload="ci--3975.2.1--5--b3ba9b7107-k8s-csi--node--driver--wkgcv-eth0" Sep 4 20:29:40.190039 containerd[1468]: 2024-09-04 20:29:40.176 [INFO][4556] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 20:29:40.190039 containerd[1468]: 2024-09-04 20:29:40.182 [INFO][4550] k8s.go 621: Teardown processing complete. ContainerID="a94e4351a8bd5f534df25a54132f3fef85187c884ed48a216fb6a0c4009e2a6f" Sep 4 20:29:40.190984 containerd[1468]: time="2024-09-04T20:29:40.190931092Z" level=info msg="TearDown network for sandbox \"a94e4351a8bd5f534df25a54132f3fef85187c884ed48a216fb6a0c4009e2a6f\" successfully" Sep 4 20:29:40.191150 containerd[1468]: time="2024-09-04T20:29:40.191126632Z" level=info msg="StopPodSandbox for \"a94e4351a8bd5f534df25a54132f3fef85187c884ed48a216fb6a0c4009e2a6f\" returns successfully" Sep 4 20:29:40.193531 containerd[1468]: time="2024-09-04T20:29:40.193476584Z" level=info msg="RemovePodSandbox for \"a94e4351a8bd5f534df25a54132f3fef85187c884ed48a216fb6a0c4009e2a6f\"" Sep 4 20:29:40.193771 containerd[1468]: time="2024-09-04T20:29:40.193740796Z" level=info msg="Forcibly stopping sandbox \"a94e4351a8bd5f534df25a54132f3fef85187c884ed48a216fb6a0c4009e2a6f\"" Sep 4 20:29:40.284176 sshd[4480]: pam_unix(sshd:session): session closed for user core Sep 4 20:29:40.314200 systemd[1]: Started sshd@14-143.198.146.52:22-139.178.68.195:55228.service - OpenSSH per-connection server daemon (139.178.68.195:55228). 
Sep 4 20:29:40.315180 systemd[1]: sshd@13-143.198.146.52:22-139.178.68.195:55218.service: Deactivated successfully. Sep 4 20:29:40.322560 systemd[1]: session-14.scope: Deactivated successfully. Sep 4 20:29:40.336158 systemd-logind[1447]: Session 14 logged out. Waiting for processes to exit. Sep 4 20:29:40.342343 systemd-logind[1447]: Removed session 14. Sep 4 20:29:40.465006 containerd[1468]: 2024-09-04 20:29:40.378 [WARNING][4573] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a94e4351a8bd5f534df25a54132f3fef85187c884ed48a216fb6a0c4009e2a6f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--5--b3ba9b7107-k8s-csi--node--driver--wkgcv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"19278f8b-d3ea-467e-a88b-64888b0edecc", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 20, 29, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-5-b3ba9b7107", ContainerID:"9dc32a33ce8f9bfe77e542d6efef5d1bb3712e061549c74093edcd3560aaf267", Pod:"csi-node-driver-wkgcv", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.6.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.default"}, InterfaceName:"calif8f6a08191f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 20:29:40.465006 containerd[1468]: 2024-09-04 20:29:40.379 [INFO][4573] k8s.go 608: Cleaning up netns ContainerID="a94e4351a8bd5f534df25a54132f3fef85187c884ed48a216fb6a0c4009e2a6f" Sep 4 20:29:40.465006 containerd[1468]: 2024-09-04 20:29:40.379 [INFO][4573] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="a94e4351a8bd5f534df25a54132f3fef85187c884ed48a216fb6a0c4009e2a6f" iface="eth0" netns="" Sep 4 20:29:40.465006 containerd[1468]: 2024-09-04 20:29:40.382 [INFO][4573] k8s.go 615: Releasing IP address(es) ContainerID="a94e4351a8bd5f534df25a54132f3fef85187c884ed48a216fb6a0c4009e2a6f" Sep 4 20:29:40.465006 containerd[1468]: 2024-09-04 20:29:40.382 [INFO][4573] utils.go 188: Calico CNI releasing IP address ContainerID="a94e4351a8bd5f534df25a54132f3fef85187c884ed48a216fb6a0c4009e2a6f" Sep 4 20:29:40.465006 containerd[1468]: 2024-09-04 20:29:40.430 [INFO][4583] ipam_plugin.go 417: Releasing address using handleID ContainerID="a94e4351a8bd5f534df25a54132f3fef85187c884ed48a216fb6a0c4009e2a6f" HandleID="k8s-pod-network.a94e4351a8bd5f534df25a54132f3fef85187c884ed48a216fb6a0c4009e2a6f" Workload="ci--3975.2.1--5--b3ba9b7107-k8s-csi--node--driver--wkgcv-eth0" Sep 4 20:29:40.465006 containerd[1468]: 2024-09-04 20:29:40.430 [INFO][4583] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 20:29:40.465006 containerd[1468]: 2024-09-04 20:29:40.431 [INFO][4583] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 20:29:40.465006 containerd[1468]: 2024-09-04 20:29:40.450 [WARNING][4583] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a94e4351a8bd5f534df25a54132f3fef85187c884ed48a216fb6a0c4009e2a6f" HandleID="k8s-pod-network.a94e4351a8bd5f534df25a54132f3fef85187c884ed48a216fb6a0c4009e2a6f" Workload="ci--3975.2.1--5--b3ba9b7107-k8s-csi--node--driver--wkgcv-eth0" Sep 4 20:29:40.465006 containerd[1468]: 2024-09-04 20:29:40.450 [INFO][4583] ipam_plugin.go 445: Releasing address using workloadID ContainerID="a94e4351a8bd5f534df25a54132f3fef85187c884ed48a216fb6a0c4009e2a6f" HandleID="k8s-pod-network.a94e4351a8bd5f534df25a54132f3fef85187c884ed48a216fb6a0c4009e2a6f" Workload="ci--3975.2.1--5--b3ba9b7107-k8s-csi--node--driver--wkgcv-eth0" Sep 4 20:29:40.465006 containerd[1468]: 2024-09-04 20:29:40.456 [INFO][4583] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 20:29:40.465006 containerd[1468]: 2024-09-04 20:29:40.461 [INFO][4573] k8s.go 621: Teardown processing complete. ContainerID="a94e4351a8bd5f534df25a54132f3fef85187c884ed48a216fb6a0c4009e2a6f" Sep 4 20:29:40.467668 containerd[1468]: time="2024-09-04T20:29:40.465025581Z" level=info msg="TearDown network for sandbox \"a94e4351a8bd5f534df25a54132f3fef85187c884ed48a216fb6a0c4009e2a6f\" successfully" Sep 4 20:29:40.469985 containerd[1468]: time="2024-09-04T20:29:40.469904306Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a94e4351a8bd5f534df25a54132f3fef85187c884ed48a216fb6a0c4009e2a6f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 20:29:40.470580 containerd[1468]: time="2024-09-04T20:29:40.470024666Z" level=info msg="RemovePodSandbox \"a94e4351a8bd5f534df25a54132f3fef85187c884ed48a216fb6a0c4009e2a6f\" returns successfully" Sep 4 20:29:40.471146 containerd[1468]: time="2024-09-04T20:29:40.471116160Z" level=info msg="StopPodSandbox for \"325efa9ce30fd3869d142cc34879fb6ef9d38b4f2e106555f839fca57f5b4a31\"" Sep 4 20:29:40.480424 sshd[4579]: Accepted publickey for core from 139.178.68.195 port 55228 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY Sep 4 20:29:40.484384 sshd[4579]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 20:29:40.504299 systemd-logind[1447]: New session 15 of user core. Sep 4 20:29:40.508033 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 4 20:29:40.685522 containerd[1468]: 2024-09-04 20:29:40.595 [WARNING][4602] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="325efa9ce30fd3869d142cc34879fb6ef9d38b4f2e106555f839fca57f5b4a31" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--5--b3ba9b7107-k8s-coredns--5dd5756b68--966rt-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"f15b71f8-033b-4890-a1b4-2a97c47ca461", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 20, 28, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"ci-3975.2.1-5-b3ba9b7107", ContainerID:"44c2399eba62b27b0b449a89574db37f1f2545c3f9e3f28a435c3df59630ff7c", Pod:"coredns-5dd5756b68-966rt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.6.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif5e08b3f82b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 20:29:40.685522 containerd[1468]: 2024-09-04 20:29:40.595 [INFO][4602] k8s.go 608: Cleaning up netns ContainerID="325efa9ce30fd3869d142cc34879fb6ef9d38b4f2e106555f839fca57f5b4a31" Sep 4 20:29:40.685522 containerd[1468]: 2024-09-04 20:29:40.595 [INFO][4602] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="325efa9ce30fd3869d142cc34879fb6ef9d38b4f2e106555f839fca57f5b4a31" iface="eth0" netns="" Sep 4 20:29:40.685522 containerd[1468]: 2024-09-04 20:29:40.595 [INFO][4602] k8s.go 615: Releasing IP address(es) ContainerID="325efa9ce30fd3869d142cc34879fb6ef9d38b4f2e106555f839fca57f5b4a31" Sep 4 20:29:40.685522 containerd[1468]: 2024-09-04 20:29:40.596 [INFO][4602] utils.go 188: Calico CNI releasing IP address ContainerID="325efa9ce30fd3869d142cc34879fb6ef9d38b4f2e106555f839fca57f5b4a31" Sep 4 20:29:40.685522 containerd[1468]: 2024-09-04 20:29:40.657 [INFO][4613] ipam_plugin.go 417: Releasing address using handleID ContainerID="325efa9ce30fd3869d142cc34879fb6ef9d38b4f2e106555f839fca57f5b4a31" HandleID="k8s-pod-network.325efa9ce30fd3869d142cc34879fb6ef9d38b4f2e106555f839fca57f5b4a31" Workload="ci--3975.2.1--5--b3ba9b7107-k8s-coredns--5dd5756b68--966rt-eth0" Sep 4 20:29:40.685522 containerd[1468]: 2024-09-04 20:29:40.657 [INFO][4613] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 20:29:40.685522 containerd[1468]: 2024-09-04 20:29:40.657 [INFO][4613] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 20:29:40.685522 containerd[1468]: 2024-09-04 20:29:40.671 [WARNING][4613] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="325efa9ce30fd3869d142cc34879fb6ef9d38b4f2e106555f839fca57f5b4a31" HandleID="k8s-pod-network.325efa9ce30fd3869d142cc34879fb6ef9d38b4f2e106555f839fca57f5b4a31" Workload="ci--3975.2.1--5--b3ba9b7107-k8s-coredns--5dd5756b68--966rt-eth0" Sep 4 20:29:40.685522 containerd[1468]: 2024-09-04 20:29:40.671 [INFO][4613] ipam_plugin.go 445: Releasing address using workloadID ContainerID="325efa9ce30fd3869d142cc34879fb6ef9d38b4f2e106555f839fca57f5b4a31" HandleID="k8s-pod-network.325efa9ce30fd3869d142cc34879fb6ef9d38b4f2e106555f839fca57f5b4a31" Workload="ci--3975.2.1--5--b3ba9b7107-k8s-coredns--5dd5756b68--966rt-eth0" Sep 4 20:29:40.685522 containerd[1468]: 2024-09-04 20:29:40.677 [INFO][4613] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 20:29:40.685522 containerd[1468]: 2024-09-04 20:29:40.682 [INFO][4602] k8s.go 621: Teardown processing complete. ContainerID="325efa9ce30fd3869d142cc34879fb6ef9d38b4f2e106555f839fca57f5b4a31" Sep 4 20:29:40.685522 containerd[1468]: time="2024-09-04T20:29:40.685223215Z" level=info msg="TearDown network for sandbox \"325efa9ce30fd3869d142cc34879fb6ef9d38b4f2e106555f839fca57f5b4a31\" successfully" Sep 4 20:29:40.685522 containerd[1468]: time="2024-09-04T20:29:40.685278730Z" level=info msg="StopPodSandbox for \"325efa9ce30fd3869d142cc34879fb6ef9d38b4f2e106555f839fca57f5b4a31\" returns successfully" Sep 4 20:29:40.688509 containerd[1468]: time="2024-09-04T20:29:40.687403299Z" level=info msg="RemovePodSandbox for \"325efa9ce30fd3869d142cc34879fb6ef9d38b4f2e106555f839fca57f5b4a31\"" Sep 4 20:29:40.688509 containerd[1468]: time="2024-09-04T20:29:40.687452272Z" level=info msg="Forcibly stopping sandbox \"325efa9ce30fd3869d142cc34879fb6ef9d38b4f2e106555f839fca57f5b4a31\"" Sep 4 20:29:40.821645 sshd[4579]: pam_unix(sshd:session): session closed for user core Sep 4 20:29:40.830162 systemd-logind[1447]: Session 15 logged out. Waiting for processes to exit. 
Sep 4 20:29:40.830761 systemd[1]: sshd@14-143.198.146.52:22-139.178.68.195:55228.service: Deactivated successfully. Sep 4 20:29:40.837116 systemd[1]: session-15.scope: Deactivated successfully. Sep 4 20:29:40.841163 systemd-logind[1447]: Removed session 15. Sep 4 20:29:40.875497 containerd[1468]: 2024-09-04 20:29:40.777 [WARNING][4636] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="325efa9ce30fd3869d142cc34879fb6ef9d38b4f2e106555f839fca57f5b4a31" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--5--b3ba9b7107-k8s-coredns--5dd5756b68--966rt-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"f15b71f8-033b-4890-a1b4-2a97c47ca461", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 20, 28, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-5-b3ba9b7107", ContainerID:"44c2399eba62b27b0b449a89574db37f1f2545c3f9e3f28a435c3df59630ff7c", Pod:"coredns-5dd5756b68-966rt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.6.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif5e08b3f82b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 20:29:40.875497 containerd[1468]: 2024-09-04 20:29:40.778 [INFO][4636] k8s.go 608: Cleaning up netns ContainerID="325efa9ce30fd3869d142cc34879fb6ef9d38b4f2e106555f839fca57f5b4a31" Sep 4 20:29:40.875497 containerd[1468]: 2024-09-04 20:29:40.778 [INFO][4636] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="325efa9ce30fd3869d142cc34879fb6ef9d38b4f2e106555f839fca57f5b4a31" iface="eth0" netns="" Sep 4 20:29:40.875497 containerd[1468]: 2024-09-04 20:29:40.778 [INFO][4636] k8s.go 615: Releasing IP address(es) ContainerID="325efa9ce30fd3869d142cc34879fb6ef9d38b4f2e106555f839fca57f5b4a31" Sep 4 20:29:40.875497 containerd[1468]: 2024-09-04 20:29:40.778 [INFO][4636] utils.go 188: Calico CNI releasing IP address ContainerID="325efa9ce30fd3869d142cc34879fb6ef9d38b4f2e106555f839fca57f5b4a31" Sep 4 20:29:40.875497 containerd[1468]: 2024-09-04 20:29:40.854 [INFO][4642] ipam_plugin.go 417: Releasing address using handleID ContainerID="325efa9ce30fd3869d142cc34879fb6ef9d38b4f2e106555f839fca57f5b4a31" HandleID="k8s-pod-network.325efa9ce30fd3869d142cc34879fb6ef9d38b4f2e106555f839fca57f5b4a31" Workload="ci--3975.2.1--5--b3ba9b7107-k8s-coredns--5dd5756b68--966rt-eth0" Sep 4 20:29:40.875497 containerd[1468]: 2024-09-04 20:29:40.855 [INFO][4642] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 20:29:40.875497 containerd[1468]: 2024-09-04 20:29:40.855 [INFO][4642] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 20:29:40.875497 containerd[1468]: 2024-09-04 20:29:40.866 [WARNING][4642] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="325efa9ce30fd3869d142cc34879fb6ef9d38b4f2e106555f839fca57f5b4a31" HandleID="k8s-pod-network.325efa9ce30fd3869d142cc34879fb6ef9d38b4f2e106555f839fca57f5b4a31" Workload="ci--3975.2.1--5--b3ba9b7107-k8s-coredns--5dd5756b68--966rt-eth0" Sep 4 20:29:40.875497 containerd[1468]: 2024-09-04 20:29:40.866 [INFO][4642] ipam_plugin.go 445: Releasing address using workloadID ContainerID="325efa9ce30fd3869d142cc34879fb6ef9d38b4f2e106555f839fca57f5b4a31" HandleID="k8s-pod-network.325efa9ce30fd3869d142cc34879fb6ef9d38b4f2e106555f839fca57f5b4a31" Workload="ci--3975.2.1--5--b3ba9b7107-k8s-coredns--5dd5756b68--966rt-eth0" Sep 4 20:29:40.875497 containerd[1468]: 2024-09-04 20:29:40.869 [INFO][4642] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 20:29:40.875497 containerd[1468]: 2024-09-04 20:29:40.872 [INFO][4636] k8s.go 621: Teardown processing complete. ContainerID="325efa9ce30fd3869d142cc34879fb6ef9d38b4f2e106555f839fca57f5b4a31" Sep 4 20:29:40.878405 containerd[1468]: time="2024-09-04T20:29:40.876827275Z" level=info msg="TearDown network for sandbox \"325efa9ce30fd3869d142cc34879fb6ef9d38b4f2e106555f839fca57f5b4a31\" successfully" Sep 4 20:29:40.886282 containerd[1468]: time="2024-09-04T20:29:40.886212846Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"325efa9ce30fd3869d142cc34879fb6ef9d38b4f2e106555f839fca57f5b4a31\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 20:29:40.886541 containerd[1468]: time="2024-09-04T20:29:40.886521216Z" level=info msg="RemovePodSandbox \"325efa9ce30fd3869d142cc34879fb6ef9d38b4f2e106555f839fca57f5b4a31\" returns successfully" Sep 4 20:29:40.887491 containerd[1468]: time="2024-09-04T20:29:40.887426425Z" level=info msg="StopPodSandbox for \"28b8d38c9da2f890fe1c6b71d6e31746a3708691e1e16dd95d179008d2b85049\"" Sep 4 20:29:41.023748 containerd[1468]: 2024-09-04 20:29:40.962 [WARNING][4663] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="28b8d38c9da2f890fe1c6b71d6e31746a3708691e1e16dd95d179008d2b85049" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--5--b3ba9b7107-k8s-calico--kube--controllers--d85dc74c8--bfxxg-eth0", GenerateName:"calico-kube-controllers-d85dc74c8-", Namespace:"calico-system", SelfLink:"", UID:"e4b0c9ae-e6bb-4436-8d94-867c1964daae", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 20, 29, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"d85dc74c8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-5-b3ba9b7107", ContainerID:"23d93c9190d4e7d3be1864d02b137d148cc40d2471c8e2080a8e4a46770f424b", Pod:"calico-kube-controllers-d85dc74c8-bfxxg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.6.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califec74167e17", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 20:29:41.023748 containerd[1468]: 2024-09-04 20:29:40.962 [INFO][4663] k8s.go 608: Cleaning up netns ContainerID="28b8d38c9da2f890fe1c6b71d6e31746a3708691e1e16dd95d179008d2b85049" Sep 4 20:29:41.023748 containerd[1468]: 2024-09-04 20:29:40.962 [INFO][4663] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="28b8d38c9da2f890fe1c6b71d6e31746a3708691e1e16dd95d179008d2b85049" iface="eth0" netns="" Sep 4 20:29:41.023748 containerd[1468]: 2024-09-04 20:29:40.962 [INFO][4663] k8s.go 615: Releasing IP address(es) ContainerID="28b8d38c9da2f890fe1c6b71d6e31746a3708691e1e16dd95d179008d2b85049" Sep 4 20:29:41.023748 containerd[1468]: 2024-09-04 20:29:40.962 [INFO][4663] utils.go 188: Calico CNI releasing IP address ContainerID="28b8d38c9da2f890fe1c6b71d6e31746a3708691e1e16dd95d179008d2b85049" Sep 4 20:29:41.023748 containerd[1468]: 2024-09-04 20:29:41.001 [INFO][4669] ipam_plugin.go 417: Releasing address using handleID ContainerID="28b8d38c9da2f890fe1c6b71d6e31746a3708691e1e16dd95d179008d2b85049" HandleID="k8s-pod-network.28b8d38c9da2f890fe1c6b71d6e31746a3708691e1e16dd95d179008d2b85049" Workload="ci--3975.2.1--5--b3ba9b7107-k8s-calico--kube--controllers--d85dc74c8--bfxxg-eth0" Sep 4 20:29:41.023748 containerd[1468]: 2024-09-04 20:29:41.002 [INFO][4669] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 20:29:41.023748 containerd[1468]: 2024-09-04 20:29:41.002 [INFO][4669] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 20:29:41.023748 containerd[1468]: 2024-09-04 20:29:41.011 [WARNING][4669] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="28b8d38c9da2f890fe1c6b71d6e31746a3708691e1e16dd95d179008d2b85049" HandleID="k8s-pod-network.28b8d38c9da2f890fe1c6b71d6e31746a3708691e1e16dd95d179008d2b85049" Workload="ci--3975.2.1--5--b3ba9b7107-k8s-calico--kube--controllers--d85dc74c8--bfxxg-eth0" Sep 4 20:29:41.023748 containerd[1468]: 2024-09-04 20:29:41.011 [INFO][4669] ipam_plugin.go 445: Releasing address using workloadID ContainerID="28b8d38c9da2f890fe1c6b71d6e31746a3708691e1e16dd95d179008d2b85049" HandleID="k8s-pod-network.28b8d38c9da2f890fe1c6b71d6e31746a3708691e1e16dd95d179008d2b85049" Workload="ci--3975.2.1--5--b3ba9b7107-k8s-calico--kube--controllers--d85dc74c8--bfxxg-eth0" Sep 4 20:29:41.023748 containerd[1468]: 2024-09-04 20:29:41.016 [INFO][4669] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 20:29:41.023748 containerd[1468]: 2024-09-04 20:29:41.019 [INFO][4663] k8s.go 621: Teardown processing complete. ContainerID="28b8d38c9da2f890fe1c6b71d6e31746a3708691e1e16dd95d179008d2b85049" Sep 4 20:29:41.023748 containerd[1468]: time="2024-09-04T20:29:41.023650500Z" level=info msg="TearDown network for sandbox \"28b8d38c9da2f890fe1c6b71d6e31746a3708691e1e16dd95d179008d2b85049\" successfully" Sep 4 20:29:41.023748 containerd[1468]: time="2024-09-04T20:29:41.023687640Z" level=info msg="StopPodSandbox for \"28b8d38c9da2f890fe1c6b71d6e31746a3708691e1e16dd95d179008d2b85049\" returns successfully" Sep 4 20:29:41.025333 containerd[1468]: time="2024-09-04T20:29:41.024557840Z" level=info msg="RemovePodSandbox for \"28b8d38c9da2f890fe1c6b71d6e31746a3708691e1e16dd95d179008d2b85049\"" Sep 4 20:29:41.025333 containerd[1468]: time="2024-09-04T20:29:41.024594082Z" level=info msg="Forcibly stopping sandbox \"28b8d38c9da2f890fe1c6b71d6e31746a3708691e1e16dd95d179008d2b85049\"" Sep 4 20:29:41.142841 containerd[1468]: 2024-09-04 20:29:41.087 [WARNING][4687] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="28b8d38c9da2f890fe1c6b71d6e31746a3708691e1e16dd95d179008d2b85049" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--5--b3ba9b7107-k8s-calico--kube--controllers--d85dc74c8--bfxxg-eth0", GenerateName:"calico-kube-controllers-d85dc74c8-", Namespace:"calico-system", SelfLink:"", UID:"e4b0c9ae-e6bb-4436-8d94-867c1964daae", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 20, 29, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"d85dc74c8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-5-b3ba9b7107", ContainerID:"23d93c9190d4e7d3be1864d02b137d148cc40d2471c8e2080a8e4a46770f424b", Pod:"calico-kube-controllers-d85dc74c8-bfxxg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.6.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califec74167e17", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 20:29:41.142841 containerd[1468]: 2024-09-04 20:29:41.088 [INFO][4687] k8s.go 608: Cleaning up netns ContainerID="28b8d38c9da2f890fe1c6b71d6e31746a3708691e1e16dd95d179008d2b85049" Sep 4 20:29:41.142841 containerd[1468]: 2024-09-04 20:29:41.088 [INFO][4687] dataplane_linux.go 526: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="28b8d38c9da2f890fe1c6b71d6e31746a3708691e1e16dd95d179008d2b85049" iface="eth0" netns="" Sep 4 20:29:41.142841 containerd[1468]: 2024-09-04 20:29:41.088 [INFO][4687] k8s.go 615: Releasing IP address(es) ContainerID="28b8d38c9da2f890fe1c6b71d6e31746a3708691e1e16dd95d179008d2b85049" Sep 4 20:29:41.142841 containerd[1468]: 2024-09-04 20:29:41.088 [INFO][4687] utils.go 188: Calico CNI releasing IP address ContainerID="28b8d38c9da2f890fe1c6b71d6e31746a3708691e1e16dd95d179008d2b85049" Sep 4 20:29:41.142841 containerd[1468]: 2024-09-04 20:29:41.124 [INFO][4693] ipam_plugin.go 417: Releasing address using handleID ContainerID="28b8d38c9da2f890fe1c6b71d6e31746a3708691e1e16dd95d179008d2b85049" HandleID="k8s-pod-network.28b8d38c9da2f890fe1c6b71d6e31746a3708691e1e16dd95d179008d2b85049" Workload="ci--3975.2.1--5--b3ba9b7107-k8s-calico--kube--controllers--d85dc74c8--bfxxg-eth0" Sep 4 20:29:41.142841 containerd[1468]: 2024-09-04 20:29:41.125 [INFO][4693] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 20:29:41.142841 containerd[1468]: 2024-09-04 20:29:41.125 [INFO][4693] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 20:29:41.142841 containerd[1468]: 2024-09-04 20:29:41.133 [WARNING][4693] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="28b8d38c9da2f890fe1c6b71d6e31746a3708691e1e16dd95d179008d2b85049" HandleID="k8s-pod-network.28b8d38c9da2f890fe1c6b71d6e31746a3708691e1e16dd95d179008d2b85049" Workload="ci--3975.2.1--5--b3ba9b7107-k8s-calico--kube--controllers--d85dc74c8--bfxxg-eth0" Sep 4 20:29:41.142841 containerd[1468]: 2024-09-04 20:29:41.134 [INFO][4693] ipam_plugin.go 445: Releasing address using workloadID ContainerID="28b8d38c9da2f890fe1c6b71d6e31746a3708691e1e16dd95d179008d2b85049" HandleID="k8s-pod-network.28b8d38c9da2f890fe1c6b71d6e31746a3708691e1e16dd95d179008d2b85049" Workload="ci--3975.2.1--5--b3ba9b7107-k8s-calico--kube--controllers--d85dc74c8--bfxxg-eth0" Sep 4 20:29:41.142841 containerd[1468]: 2024-09-04 20:29:41.136 [INFO][4693] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 20:29:41.142841 containerd[1468]: 2024-09-04 20:29:41.139 [INFO][4687] k8s.go 621: Teardown processing complete. ContainerID="28b8d38c9da2f890fe1c6b71d6e31746a3708691e1e16dd95d179008d2b85049" Sep 4 20:29:41.144717 containerd[1468]: time="2024-09-04T20:29:41.143160182Z" level=info msg="TearDown network for sandbox \"28b8d38c9da2f890fe1c6b71d6e31746a3708691e1e16dd95d179008d2b85049\" successfully" Sep 4 20:29:41.158593 containerd[1468]: time="2024-09-04T20:29:41.158157398Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"28b8d38c9da2f890fe1c6b71d6e31746a3708691e1e16dd95d179008d2b85049\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 4 20:29:41.158593 containerd[1468]: time="2024-09-04T20:29:41.158237669Z" level=info msg="RemovePodSandbox \"28b8d38c9da2f890fe1c6b71d6e31746a3708691e1e16dd95d179008d2b85049\" returns successfully" Sep 4 20:29:45.839535 systemd[1]: Started sshd@15-143.198.146.52:22-139.178.68.195:55230.service - OpenSSH per-connection server daemon (139.178.68.195:55230). 
Sep 4 20:29:45.889225 sshd[4732]: Accepted publickey for core from 139.178.68.195 port 55230 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY Sep 4 20:29:45.891264 sshd[4732]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 20:29:45.898229 systemd-logind[1447]: New session 16 of user core. Sep 4 20:29:45.904417 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 4 20:29:46.072938 sshd[4732]: pam_unix(sshd:session): session closed for user core Sep 4 20:29:46.078539 systemd-logind[1447]: Session 16 logged out. Waiting for processes to exit. Sep 4 20:29:46.079434 systemd[1]: sshd@15-143.198.146.52:22-139.178.68.195:55230.service: Deactivated successfully. Sep 4 20:29:46.082402 systemd[1]: session-16.scope: Deactivated successfully. Sep 4 20:29:46.084514 systemd-logind[1447]: Removed session 16. Sep 4 20:29:51.090500 systemd[1]: Started sshd@16-143.198.146.52:22-139.178.68.195:51076.service - OpenSSH per-connection server daemon (139.178.68.195:51076). Sep 4 20:29:51.152200 sshd[4748]: Accepted publickey for core from 139.178.68.195 port 51076 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY Sep 4 20:29:51.154168 sshd[4748]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 20:29:51.160281 systemd-logind[1447]: New session 17 of user core. Sep 4 20:29:51.167587 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 4 20:29:51.314488 sshd[4748]: pam_unix(sshd:session): session closed for user core Sep 4 20:29:51.319843 systemd[1]: sshd@16-143.198.146.52:22-139.178.68.195:51076.service: Deactivated successfully. Sep 4 20:29:51.322849 systemd[1]: session-17.scope: Deactivated successfully. Sep 4 20:29:51.323991 systemd-logind[1447]: Session 17 logged out. Waiting for processes to exit. Sep 4 20:29:51.325858 systemd-logind[1447]: Removed session 17. 
Sep 4 20:29:52.357515 kubelet[2523]: E0904 20:29:52.357461 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 4 20:29:53.608469 systemd[1]: run-containerd-runc-k8s.io-468b302734e18195ebe48bacb1f6be21a92bef9a6cebeb0d847df276275b448a-runc.BXgI0U.mount: Deactivated successfully. Sep 4 20:29:56.334641 systemd[1]: Started sshd@17-143.198.146.52:22-139.178.68.195:47988.service - OpenSSH per-connection server daemon (139.178.68.195:47988). Sep 4 20:29:56.385870 sshd[4787]: Accepted publickey for core from 139.178.68.195 port 47988 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY Sep 4 20:29:56.387738 sshd[4787]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 20:29:56.393156 systemd-logind[1447]: New session 18 of user core. Sep 4 20:29:56.397316 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 4 20:29:56.530623 sshd[4787]: pam_unix(sshd:session): session closed for user core Sep 4 20:29:56.535075 systemd[1]: sshd@17-143.198.146.52:22-139.178.68.195:47988.service: Deactivated successfully. Sep 4 20:29:56.537169 systemd[1]: session-18.scope: Deactivated successfully. Sep 4 20:29:56.538257 systemd-logind[1447]: Session 18 logged out. Waiting for processes to exit. Sep 4 20:29:56.540225 systemd-logind[1447]: Removed session 18. Sep 4 20:29:59.788867 kubelet[2523]: I0904 20:29:59.788801 2523 topology_manager.go:215] "Topology Admit Handler" podUID="dff6fc86-adaa-4cd8-878d-8a30632daeaa" podNamespace="calico-apiserver" podName="calico-apiserver-b4879d89f-7plks" Sep 4 20:29:59.812922 systemd[1]: Created slice kubepods-besteffort-poddff6fc86_adaa_4cd8_878d_8a30632daeaa.slice - libcontainer container kubepods-besteffort-poddff6fc86_adaa_4cd8_878d_8a30632daeaa.slice. 
Sep 4 20:29:59.853551 kubelet[2523]: I0904 20:29:59.853314 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/dff6fc86-adaa-4cd8-878d-8a30632daeaa-calico-apiserver-certs\") pod \"calico-apiserver-b4879d89f-7plks\" (UID: \"dff6fc86-adaa-4cd8-878d-8a30632daeaa\") " pod="calico-apiserver/calico-apiserver-b4879d89f-7plks" Sep 4 20:29:59.853551 kubelet[2523]: I0904 20:29:59.853378 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ct74\" (UniqueName: \"kubernetes.io/projected/dff6fc86-adaa-4cd8-878d-8a30632daeaa-kube-api-access-4ct74\") pod \"calico-apiserver-b4879d89f-7plks\" (UID: \"dff6fc86-adaa-4cd8-878d-8a30632daeaa\") " pod="calico-apiserver/calico-apiserver-b4879d89f-7plks" Sep 4 20:29:59.961537 kubelet[2523]: E0904 20:29:59.953744 2523 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Sep 4 20:29:59.974650 kubelet[2523]: E0904 20:29:59.974589 2523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dff6fc86-adaa-4cd8-878d-8a30632daeaa-calico-apiserver-certs podName:dff6fc86-adaa-4cd8-878d-8a30632daeaa nodeName:}" failed. No retries permitted until 2024-09-04 20:30:00.460862471 +0000 UTC m=+81.303844550 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/dff6fc86-adaa-4cd8-878d-8a30632daeaa-calico-apiserver-certs") pod "calico-apiserver-b4879d89f-7plks" (UID: "dff6fc86-adaa-4cd8-878d-8a30632daeaa") : secret "calico-apiserver-certs" not found Sep 4 20:30:00.729060 containerd[1468]: time="2024-09-04T20:30:00.728965483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b4879d89f-7plks,Uid:dff6fc86-adaa-4cd8-878d-8a30632daeaa,Namespace:calico-apiserver,Attempt:0,}" Sep 4 20:30:00.990846 systemd-networkd[1370]: cali19c83ae7bff: Link UP Sep 4 20:30:01.004927 systemd-networkd[1370]: cali19c83ae7bff: Gained carrier Sep 4 20:30:01.023883 containerd[1468]: 2024-09-04 20:30:00.834 [INFO][4806] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.2.1--5--b3ba9b7107-k8s-calico--apiserver--b4879d89f--7plks-eth0 calico-apiserver-b4879d89f- calico-apiserver dff6fc86-adaa-4cd8-878d-8a30632daeaa 1095 0 2024-09-04 20:29:59 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:b4879d89f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3975.2.1-5-b3ba9b7107 calico-apiserver-b4879d89f-7plks eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali19c83ae7bff [] []}} ContainerID="e421aacb346d43346f1a19aa4b228c4b64a1320ad69e1f584172f41407412682" Namespace="calico-apiserver" Pod="calico-apiserver-b4879d89f-7plks" WorkloadEndpoint="ci--3975.2.1--5--b3ba9b7107-k8s-calico--apiserver--b4879d89f--7plks-" Sep 4 20:30:01.023883 containerd[1468]: 2024-09-04 20:30:00.835 [INFO][4806] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e421aacb346d43346f1a19aa4b228c4b64a1320ad69e1f584172f41407412682" Namespace="calico-apiserver" Pod="calico-apiserver-b4879d89f-7plks" 
WorkloadEndpoint="ci--3975.2.1--5--b3ba9b7107-k8s-calico--apiserver--b4879d89f--7plks-eth0" Sep 4 20:30:01.023883 containerd[1468]: 2024-09-04 20:30:00.898 [INFO][4818] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e421aacb346d43346f1a19aa4b228c4b64a1320ad69e1f584172f41407412682" HandleID="k8s-pod-network.e421aacb346d43346f1a19aa4b228c4b64a1320ad69e1f584172f41407412682" Workload="ci--3975.2.1--5--b3ba9b7107-k8s-calico--apiserver--b4879d89f--7plks-eth0" Sep 4 20:30:01.023883 containerd[1468]: 2024-09-04 20:30:00.912 [INFO][4818] ipam_plugin.go 270: Auto assigning IP ContainerID="e421aacb346d43346f1a19aa4b228c4b64a1320ad69e1f584172f41407412682" HandleID="k8s-pod-network.e421aacb346d43346f1a19aa4b228c4b64a1320ad69e1f584172f41407412682" Workload="ci--3975.2.1--5--b3ba9b7107-k8s-calico--apiserver--b4879d89f--7plks-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000300860), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3975.2.1-5-b3ba9b7107", "pod":"calico-apiserver-b4879d89f-7plks", "timestamp":"2024-09-04 20:30:00.89799615 +0000 UTC"}, Hostname:"ci-3975.2.1-5-b3ba9b7107", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 20:30:01.023883 containerd[1468]: 2024-09-04 20:30:00.913 [INFO][4818] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 20:30:01.023883 containerd[1468]: 2024-09-04 20:30:00.913 [INFO][4818] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 20:30:01.023883 containerd[1468]: 2024-09-04 20:30:00.913 [INFO][4818] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.2.1-5-b3ba9b7107'
Sep 4 20:30:01.023883 containerd[1468]: 2024-09-04 20:30:00.916 [INFO][4818] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e421aacb346d43346f1a19aa4b228c4b64a1320ad69e1f584172f41407412682" host="ci-3975.2.1-5-b3ba9b7107"
Sep 4 20:30:01.023883 containerd[1468]: 2024-09-04 20:30:00.930 [INFO][4818] ipam.go 372: Looking up existing affinities for host host="ci-3975.2.1-5-b3ba9b7107"
Sep 4 20:30:01.023883 containerd[1468]: 2024-09-04 20:30:00.939 [INFO][4818] ipam.go 489: Trying affinity for 192.168.6.0/26 host="ci-3975.2.1-5-b3ba9b7107"
Sep 4 20:30:01.023883 containerd[1468]: 2024-09-04 20:30:00.943 [INFO][4818] ipam.go 155: Attempting to load block cidr=192.168.6.0/26 host="ci-3975.2.1-5-b3ba9b7107"
Sep 4 20:30:01.023883 containerd[1468]: 2024-09-04 20:30:00.948 [INFO][4818] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.6.0/26 host="ci-3975.2.1-5-b3ba9b7107"
Sep 4 20:30:01.023883 containerd[1468]: 2024-09-04 20:30:00.948 [INFO][4818] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.6.0/26 handle="k8s-pod-network.e421aacb346d43346f1a19aa4b228c4b64a1320ad69e1f584172f41407412682" host="ci-3975.2.1-5-b3ba9b7107"
Sep 4 20:30:01.023883 containerd[1468]: 2024-09-04 20:30:00.952 [INFO][4818] ipam.go 1685: Creating new handle: k8s-pod-network.e421aacb346d43346f1a19aa4b228c4b64a1320ad69e1f584172f41407412682
Sep 4 20:30:01.023883 containerd[1468]: 2024-09-04 20:30:00.963 [INFO][4818] ipam.go 1203: Writing block in order to claim IPs block=192.168.6.0/26 handle="k8s-pod-network.e421aacb346d43346f1a19aa4b228c4b64a1320ad69e1f584172f41407412682" host="ci-3975.2.1-5-b3ba9b7107"
Sep 4 20:30:01.023883 containerd[1468]: 2024-09-04 20:30:00.978 [INFO][4818] ipam.go 1216: Successfully claimed IPs: [192.168.6.5/26] block=192.168.6.0/26 handle="k8s-pod-network.e421aacb346d43346f1a19aa4b228c4b64a1320ad69e1f584172f41407412682" host="ci-3975.2.1-5-b3ba9b7107"
Sep 4 20:30:01.023883 containerd[1468]: 2024-09-04 20:30:00.978 [INFO][4818] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.6.5/26] handle="k8s-pod-network.e421aacb346d43346f1a19aa4b228c4b64a1320ad69e1f584172f41407412682" host="ci-3975.2.1-5-b3ba9b7107"
Sep 4 20:30:01.023883 containerd[1468]: 2024-09-04 20:30:00.979 [INFO][4818] ipam_plugin.go 379: Released host-wide IPAM lock.
Sep 4 20:30:01.023883 containerd[1468]: 2024-09-04 20:30:00.979 [INFO][4818] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.6.5/26] IPv6=[] ContainerID="e421aacb346d43346f1a19aa4b228c4b64a1320ad69e1f584172f41407412682" HandleID="k8s-pod-network.e421aacb346d43346f1a19aa4b228c4b64a1320ad69e1f584172f41407412682" Workload="ci--3975.2.1--5--b3ba9b7107-k8s-calico--apiserver--b4879d89f--7plks-eth0"
Sep 4 20:30:01.028553 containerd[1468]: 2024-09-04 20:30:00.985 [INFO][4806] k8s.go 386: Populated endpoint ContainerID="e421aacb346d43346f1a19aa4b228c4b64a1320ad69e1f584172f41407412682" Namespace="calico-apiserver" Pod="calico-apiserver-b4879d89f-7plks" WorkloadEndpoint="ci--3975.2.1--5--b3ba9b7107-k8s-calico--apiserver--b4879d89f--7plks-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--5--b3ba9b7107-k8s-calico--apiserver--b4879d89f--7plks-eth0", GenerateName:"calico-apiserver-b4879d89f-", Namespace:"calico-apiserver", SelfLink:"", UID:"dff6fc86-adaa-4cd8-878d-8a30632daeaa", ResourceVersion:"1095", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 20, 29, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b4879d89f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-5-b3ba9b7107", ContainerID:"", Pod:"calico-apiserver-b4879d89f-7plks", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.6.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali19c83ae7bff", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Sep 4 20:30:01.028553 containerd[1468]: 2024-09-04 20:30:00.985 [INFO][4806] k8s.go 387: Calico CNI using IPs: [192.168.6.5/32] ContainerID="e421aacb346d43346f1a19aa4b228c4b64a1320ad69e1f584172f41407412682" Namespace="calico-apiserver" Pod="calico-apiserver-b4879d89f-7plks" WorkloadEndpoint="ci--3975.2.1--5--b3ba9b7107-k8s-calico--apiserver--b4879d89f--7plks-eth0"
Sep 4 20:30:01.028553 containerd[1468]: 2024-09-04 20:30:00.985 [INFO][4806] dataplane_linux.go 68: Setting the host side veth name to cali19c83ae7bff ContainerID="e421aacb346d43346f1a19aa4b228c4b64a1320ad69e1f584172f41407412682" Namespace="calico-apiserver" Pod="calico-apiserver-b4879d89f-7plks" WorkloadEndpoint="ci--3975.2.1--5--b3ba9b7107-k8s-calico--apiserver--b4879d89f--7plks-eth0"
Sep 4 20:30:01.028553 containerd[1468]: 2024-09-04 20:30:00.991 [INFO][4806] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="e421aacb346d43346f1a19aa4b228c4b64a1320ad69e1f584172f41407412682" Namespace="calico-apiserver" Pod="calico-apiserver-b4879d89f-7plks" WorkloadEndpoint="ci--3975.2.1--5--b3ba9b7107-k8s-calico--apiserver--b4879d89f--7plks-eth0"
Sep 4 20:30:01.028553 containerd[1468]: 2024-09-04 20:30:00.993 [INFO][4806] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e421aacb346d43346f1a19aa4b228c4b64a1320ad69e1f584172f41407412682" Namespace="calico-apiserver" Pod="calico-apiserver-b4879d89f-7plks" WorkloadEndpoint="ci--3975.2.1--5--b3ba9b7107-k8s-calico--apiserver--b4879d89f--7plks-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--5--b3ba9b7107-k8s-calico--apiserver--b4879d89f--7plks-eth0", GenerateName:"calico-apiserver-b4879d89f-", Namespace:"calico-apiserver", SelfLink:"", UID:"dff6fc86-adaa-4cd8-878d-8a30632daeaa", ResourceVersion:"1095", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 20, 29, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b4879d89f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-5-b3ba9b7107", ContainerID:"e421aacb346d43346f1a19aa4b228c4b64a1320ad69e1f584172f41407412682", Pod:"calico-apiserver-b4879d89f-7plks", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.6.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali19c83ae7bff", MAC:"7a:ab:a3:d2:3d:ba", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Sep 4 20:30:01.028553 containerd[1468]: 2024-09-04 20:30:01.017 [INFO][4806] k8s.go 500: Wrote updated endpoint to datastore ContainerID="e421aacb346d43346f1a19aa4b228c4b64a1320ad69e1f584172f41407412682" Namespace="calico-apiserver" Pod="calico-apiserver-b4879d89f-7plks" WorkloadEndpoint="ci--3975.2.1--5--b3ba9b7107-k8s-calico--apiserver--b4879d89f--7plks-eth0"
Sep 4 20:30:01.094739 containerd[1468]: time="2024-09-04T20:30:01.092672540Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 20:30:01.094739 containerd[1468]: time="2024-09-04T20:30:01.092803523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 20:30:01.094739 containerd[1468]: time="2024-09-04T20:30:01.092847435Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 20:30:01.094739 containerd[1468]: time="2024-09-04T20:30:01.092895932Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 20:30:01.145415 systemd[1]: run-containerd-runc-k8s.io-e421aacb346d43346f1a19aa4b228c4b64a1320ad69e1f584172f41407412682-runc.1919aE.mount: Deactivated successfully.
Sep 4 20:30:01.157851 systemd[1]: Started cri-containerd-e421aacb346d43346f1a19aa4b228c4b64a1320ad69e1f584172f41407412682.scope - libcontainer container e421aacb346d43346f1a19aa4b228c4b64a1320ad69e1f584172f41407412682.
Sep 4 20:30:01.251643 containerd[1468]: time="2024-09-04T20:30:01.250604138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b4879d89f-7plks,Uid:dff6fc86-adaa-4cd8-878d-8a30632daeaa,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"e421aacb346d43346f1a19aa4b228c4b64a1320ad69e1f584172f41407412682\""
Sep 4 20:30:01.259207 containerd[1468]: time="2024-09-04T20:30:01.258491201Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\""
Sep 4 20:30:01.555896 systemd[1]: Started sshd@18-143.198.146.52:22-139.178.68.195:47994.service - OpenSSH per-connection server daemon (139.178.68.195:47994).
Sep 4 20:30:01.678420 sshd[4885]: Accepted publickey for core from 139.178.68.195 port 47994 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY
Sep 4 20:30:01.685018 sshd[4885]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 20:30:01.698464 systemd-logind[1447]: New session 19 of user core.
Sep 4 20:30:01.703973 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 4 20:30:02.320028 systemd-networkd[1370]: cali19c83ae7bff: Gained IPv6LL
Sep 4 20:30:02.407449 sshd[4885]: pam_unix(sshd:session): session closed for user core
Sep 4 20:30:02.430471 systemd[1]: sshd@18-143.198.146.52:22-139.178.68.195:47994.service: Deactivated successfully.
Sep 4 20:30:02.439600 systemd[1]: session-19.scope: Deactivated successfully.
Sep 4 20:30:02.445800 systemd-logind[1447]: Session 19 logged out. Waiting for processes to exit.
Sep 4 20:30:02.461774 systemd[1]: Started sshd@19-143.198.146.52:22-139.178.68.195:48004.service - OpenSSH per-connection server daemon (139.178.68.195:48004).
Sep 4 20:30:02.475601 systemd-logind[1447]: Removed session 19.
Sep 4 20:30:02.611491 sshd[4904]: Accepted publickey for core from 139.178.68.195 port 48004 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY
Sep 4 20:30:02.617028 sshd[4904]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 20:30:02.632022 systemd-logind[1447]: New session 20 of user core.
Sep 4 20:30:02.641499 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 4 20:30:03.386138 sshd[4904]: pam_unix(sshd:session): session closed for user core
Sep 4 20:30:03.414409 systemd[1]: Started sshd@20-143.198.146.52:22-139.178.68.195:48014.service - OpenSSH per-connection server daemon (139.178.68.195:48014).
Sep 4 20:30:03.417503 systemd[1]: sshd@19-143.198.146.52:22-139.178.68.195:48004.service: Deactivated successfully.
Sep 4 20:30:03.426736 systemd[1]: session-20.scope: Deactivated successfully.
Sep 4 20:30:03.448449 systemd-logind[1447]: Session 20 logged out. Waiting for processes to exit.
Sep 4 20:30:03.463406 systemd-logind[1447]: Removed session 20.
Sep 4 20:30:03.628735 sshd[4913]: Accepted publickey for core from 139.178.68.195 port 48014 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY
Sep 4 20:30:03.639435 sshd[4913]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 20:30:03.661751 systemd-logind[1447]: New session 21 of user core.
Sep 4 20:30:03.669689 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 4 20:30:05.369645 kubelet[2523]: E0904 20:30:05.369588 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 4 20:30:06.069998 sshd[4913]: pam_unix(sshd:session): session closed for user core
Sep 4 20:30:06.092688 systemd[1]: Started sshd@21-143.198.146.52:22-139.178.68.195:48026.service - OpenSSH per-connection server daemon (139.178.68.195:48026).
Sep 4 20:30:06.093993 systemd[1]: sshd@20-143.198.146.52:22-139.178.68.195:48014.service: Deactivated successfully.
Sep 4 20:30:06.101649 systemd[1]: session-21.scope: Deactivated successfully.
Sep 4 20:30:06.107723 systemd-logind[1447]: Session 21 logged out. Waiting for processes to exit.
Sep 4 20:30:06.119522 systemd-logind[1447]: Removed session 21.
Sep 4 20:30:06.246160 sshd[4936]: Accepted publickey for core from 139.178.68.195 port 48026 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY
Sep 4 20:30:06.243530 sshd[4936]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 20:30:06.266190 systemd-logind[1447]: New session 22 of user core.
Sep 4 20:30:06.271420 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 4 20:30:07.133026 containerd[1468]: time="2024-09-04T20:30:07.132944977Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 20:30:07.139598 containerd[1468]: time="2024-09-04T20:30:07.139170248Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=40419849"
Sep 4 20:30:07.144180 containerd[1468]: time="2024-09-04T20:30:07.141217610Z" level=info msg="ImageCreate event name:\"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 20:30:07.155119 containerd[1468]: time="2024-09-04T20:30:07.154141766Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 20:30:07.158685 containerd[1468]: time="2024-09-04T20:30:07.156301347Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"41912266\" in 5.897742234s"
Sep 4 20:30:07.159924 containerd[1468]: time="2024-09-04T20:30:07.158930410Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\""
Sep 4 20:30:07.166173 containerd[1468]: time="2024-09-04T20:30:07.165798378Z" level=info msg="CreateContainer within sandbox \"e421aacb346d43346f1a19aa4b228c4b64a1320ad69e1f584172f41407412682\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 4 20:30:07.195666 containerd[1468]: time="2024-09-04T20:30:07.195570563Z" level=info msg="CreateContainer within sandbox \"e421aacb346d43346f1a19aa4b228c4b64a1320ad69e1f584172f41407412682\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a129eb195844c6ec70ec2dca51000df5037f367f6df1bb3862ef7780a2907b63\""
Sep 4 20:30:07.199310 containerd[1468]: time="2024-09-04T20:30:07.197423563Z" level=info msg="StartContainer for \"a129eb195844c6ec70ec2dca51000df5037f367f6df1bb3862ef7780a2907b63\""
Sep 4 20:30:07.307641 systemd[1]: Started cri-containerd-a129eb195844c6ec70ec2dca51000df5037f367f6df1bb3862ef7780a2907b63.scope - libcontainer container a129eb195844c6ec70ec2dca51000df5037f367f6df1bb3862ef7780a2907b63.
Sep 4 20:30:07.633040 containerd[1468]: time="2024-09-04T20:30:07.632726513Z" level=info msg="StartContainer for \"a129eb195844c6ec70ec2dca51000df5037f367f6df1bb3862ef7780a2907b63\" returns successfully"
Sep 4 20:30:07.970332 sshd[4936]: pam_unix(sshd:session): session closed for user core
Sep 4 20:30:07.983659 systemd[1]: sshd@21-143.198.146.52:22-139.178.68.195:48026.service: Deactivated successfully.
Sep 4 20:30:07.989544 systemd[1]: session-22.scope: Deactivated successfully.
Sep 4 20:30:07.992314 systemd-logind[1447]: Session 22 logged out. Waiting for processes to exit.
Sep 4 20:30:08.009138 systemd[1]: Started sshd@22-143.198.146.52:22-139.178.68.195:42774.service - OpenSSH per-connection server daemon (139.178.68.195:42774).
Sep 4 20:30:08.018406 systemd-logind[1447]: Removed session 22.
Sep 4 20:30:08.163286 kubelet[2523]: I0904 20:30:08.163215 2523 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-b4879d89f-7plks" podStartSLOduration=3.238199681 podCreationTimestamp="2024-09-04 20:29:59 +0000 UTC" firstStartedPulling="2024-09-04 20:30:01.257723919 +0000 UTC m=+82.100706017" lastFinishedPulling="2024-09-04 20:30:07.160409176 +0000 UTC m=+88.003391292" observedRunningTime="2024-09-04 20:30:08.140543737 +0000 UTC m=+88.983525833" watchObservedRunningTime="2024-09-04 20:30:08.140884956 +0000 UTC m=+88.983867045"
Sep 4 20:30:08.217138 sshd[4993]: Accepted publickey for core from 139.178.68.195 port 42774 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY
Sep 4 20:30:08.222517 sshd[4993]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 20:30:08.242857 systemd-logind[1447]: New session 23 of user core.
Sep 4 20:30:08.245409 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 4 20:30:08.472656 sshd[4993]: pam_unix(sshd:session): session closed for user core
Sep 4 20:30:08.484306 systemd[1]: sshd@22-143.198.146.52:22-139.178.68.195:42774.service: Deactivated successfully.
Sep 4 20:30:08.490865 systemd[1]: session-23.scope: Deactivated successfully.
Sep 4 20:30:08.493076 systemd-logind[1447]: Session 23 logged out. Waiting for processes to exit.
Sep 4 20:30:08.495836 systemd-logind[1447]: Removed session 23.
Sep 4 20:30:11.313033 systemd[1]: run-containerd-runc-k8s.io-468b302734e18195ebe48bacb1f6be21a92bef9a6cebeb0d847df276275b448a-runc.R8vecq.mount: Deactivated successfully.
Sep 4 20:30:13.491499 systemd[1]: Started sshd@23-143.198.146.52:22-139.178.68.195:42782.service - OpenSSH per-connection server daemon (139.178.68.195:42782).
Sep 4 20:30:13.582446 sshd[5061]: Accepted publickey for core from 139.178.68.195 port 42782 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY
Sep 4 20:30:13.586379 sshd[5061]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 20:30:13.592714 systemd-logind[1447]: New session 24 of user core.
Sep 4 20:30:13.605461 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 4 20:30:13.902153 sshd[5061]: pam_unix(sshd:session): session closed for user core
Sep 4 20:30:13.907680 systemd[1]: sshd@23-143.198.146.52:22-139.178.68.195:42782.service: Deactivated successfully.
Sep 4 20:30:13.911689 systemd[1]: session-24.scope: Deactivated successfully.
Sep 4 20:30:13.912876 systemd-logind[1447]: Session 24 logged out. Waiting for processes to exit.
Sep 4 20:30:13.914230 systemd-logind[1447]: Removed session 24.
Sep 4 20:30:18.358136 kubelet[2523]: E0904 20:30:18.358041 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 4 20:30:18.926577 systemd[1]: Started sshd@24-143.198.146.52:22-139.178.68.195:39300.service - OpenSSH per-connection server daemon (139.178.68.195:39300).
Sep 4 20:30:18.976454 sshd[5077]: Accepted publickey for core from 139.178.68.195 port 39300 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY
Sep 4 20:30:18.978460 sshd[5077]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 20:30:18.985219 systemd-logind[1447]: New session 25 of user core.
Sep 4 20:30:18.989350 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 4 20:30:19.136714 sshd[5077]: pam_unix(sshd:session): session closed for user core
Sep 4 20:30:19.142354 systemd[1]: sshd@24-143.198.146.52:22-139.178.68.195:39300.service: Deactivated successfully.
Sep 4 20:30:19.144998 systemd[1]: session-25.scope: Deactivated successfully.
Sep 4 20:30:19.147829 systemd-logind[1447]: Session 25 logged out. Waiting for processes to exit.
Sep 4 20:30:19.150244 systemd-logind[1447]: Removed session 25.
Sep 4 20:30:21.358104 kubelet[2523]: E0904 20:30:21.357520 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 4 20:30:23.359115 kubelet[2523]: E0904 20:30:23.358455 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 4 20:30:24.160465 systemd[1]: Started sshd@25-143.198.146.52:22-139.178.68.195:39308.service - OpenSSH per-connection server daemon (139.178.68.195:39308).
Sep 4 20:30:24.215807 sshd[5097]: Accepted publickey for core from 139.178.68.195 port 39308 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY
Sep 4 20:30:24.219927 sshd[5097]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 20:30:24.226274 systemd-logind[1447]: New session 26 of user core.
Sep 4 20:30:24.233470 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 4 20:30:24.381484 sshd[5097]: pam_unix(sshd:session): session closed for user core
Sep 4 20:30:24.387541 systemd-logind[1447]: Session 26 logged out. Waiting for processes to exit.
Sep 4 20:30:24.389035 systemd[1]: sshd@25-143.198.146.52:22-139.178.68.195:39308.service: Deactivated successfully.
Sep 4 20:30:24.392050 systemd[1]: session-26.scope: Deactivated successfully.
Sep 4 20:30:24.393657 systemd-logind[1447]: Removed session 26.
Sep 4 20:30:29.406566 systemd[1]: Started sshd@26-143.198.146.52:22-139.178.68.195:59996.service - OpenSSH per-connection server daemon (139.178.68.195:59996).
Sep 4 20:30:29.479974 sshd[5115]: Accepted publickey for core from 139.178.68.195 port 59996 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY
Sep 4 20:30:29.481571 sshd[5115]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 20:30:29.495334 systemd-logind[1447]: New session 27 of user core.
Sep 4 20:30:29.504537 systemd[1]: Started session-27.scope - Session 27 of User core.
Sep 4 20:30:29.727726 sshd[5115]: pam_unix(sshd:session): session closed for user core
Sep 4 20:30:29.739995 systemd[1]: sshd@26-143.198.146.52:22-139.178.68.195:59996.service: Deactivated successfully.
Sep 4 20:30:29.743198 systemd[1]: session-27.scope: Deactivated successfully.
Sep 4 20:30:29.745026 systemd-logind[1447]: Session 27 logged out. Waiting for processes to exit.
Sep 4 20:30:29.746495 systemd-logind[1447]: Removed session 27.
Sep 4 20:30:34.750553 systemd[1]: Started sshd@27-143.198.146.52:22-139.178.68.195:60008.service - OpenSSH per-connection server daemon (139.178.68.195:60008).
Sep 4 20:30:34.832102 sshd[5134]: Accepted publickey for core from 139.178.68.195 port 60008 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY
Sep 4 20:30:34.834696 sshd[5134]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 20:30:34.844846 systemd-logind[1447]: New session 28 of user core.
Sep 4 20:30:34.850337 systemd[1]: Started session-28.scope - Session 28 of User core.
Sep 4 20:30:35.053739 sshd[5134]: pam_unix(sshd:session): session closed for user core
Sep 4 20:30:35.058817 systemd[1]: sshd@27-143.198.146.52:22-139.178.68.195:60008.service: Deactivated successfully.
Sep 4 20:30:35.062752 systemd[1]: session-28.scope: Deactivated successfully.
Sep 4 20:30:35.064360 systemd-logind[1447]: Session 28 logged out. Waiting for processes to exit.
Sep 4 20:30:35.066024 systemd-logind[1447]: Removed session 28.
Sep 4 20:30:39.359258 kubelet[2523]: E0904 20:30:39.358199 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 4 20:30:40.076775 systemd[1]: Started sshd@28-143.198.146.52:22-139.178.68.195:42188.service - OpenSSH per-connection server daemon (139.178.68.195:42188).
Sep 4 20:30:40.170031 sshd[5170]: Accepted publickey for core from 139.178.68.195 port 42188 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY
Sep 4 20:30:40.173115 sshd[5170]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 20:30:40.179958 systemd-logind[1447]: New session 29 of user core.
Sep 4 20:30:40.185427 systemd[1]: Started session-29.scope - Session 29 of User core.
Sep 4 20:30:40.373442 sshd[5170]: pam_unix(sshd:session): session closed for user core
Sep 4 20:30:40.380952 systemd-logind[1447]: Session 29 logged out. Waiting for processes to exit.
Sep 4 20:30:40.381377 systemd[1]: sshd@28-143.198.146.52:22-139.178.68.195:42188.service: Deactivated successfully.
Sep 4 20:30:40.384252 systemd[1]: session-29.scope: Deactivated successfully.
Sep 4 20:30:40.385722 systemd-logind[1447]: Removed session 29.