Sep 4 20:27:53.910274 kernel: Linux version 6.6.48-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Wed Sep 4 15:49:08 -00 2024 Sep 4 20:27:53.910300 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=6662bd39fec77da4c9a5c59d2cba257325976309ed96904c83697df1825085bf Sep 4 20:27:53.910313 kernel: BIOS-provided physical RAM map: Sep 4 20:27:53.910320 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Sep 4 20:27:53.910326 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Sep 4 20:27:53.910333 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Sep 4 20:27:53.910340 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable Sep 4 20:27:53.910347 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved Sep 4 20:27:53.910354 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Sep 4 20:27:53.910363 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Sep 4 20:27:53.910370 kernel: NX (Execute Disable) protection: active Sep 4 20:27:53.910377 kernel: APIC: Static calls initialized Sep 4 20:27:53.910384 kernel: SMBIOS 2.8 present. Sep 4 20:27:53.910391 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017 Sep 4 20:27:53.910399 kernel: Hypervisor detected: KVM Sep 4 20:27:53.910409 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Sep 4 20:27:53.910417 kernel: kvm-clock: using sched offset of 3110180296 cycles Sep 4 20:27:53.910430 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Sep 4 20:27:53.910439 kernel: tsc: Detected 2494.140 MHz processor Sep 4 20:27:53.910447 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 4 20:27:53.910457 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 4 20:27:53.910465 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000 Sep 4 20:27:53.910473 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Sep 4 20:27:53.910480 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 4 20:27:53.910491 kernel: ACPI: Early table checksum verification disabled Sep 4 20:27:53.910499 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS ) Sep 4 20:27:53.910507 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 20:27:53.910514 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 20:27:53.910522 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 20:27:53.910530 kernel: ACPI: FACS 0x000000007FFE0000 000040 Sep 4 20:27:53.910538 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 20:27:53.910545 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 20:27:53.910553 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 20:27:53.910563 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 20:27:53.910570 kernel: ACPI: Reserving FACP table memory at [mem 
0x7ffe176a-0x7ffe17dd] Sep 4 20:27:53.910578 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769] Sep 4 20:27:53.910585 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Sep 4 20:27:53.910593 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d] Sep 4 20:27:53.910600 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895] Sep 4 20:27:53.910608 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d] Sep 4 20:27:53.910622 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985] Sep 4 20:27:53.910630 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Sep 4 20:27:53.910638 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Sep 4 20:27:53.910646 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Sep 4 20:27:53.910656 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Sep 4 20:27:53.910669 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff] Sep 4 20:27:53.910680 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff] Sep 4 20:27:53.910692 kernel: Zone ranges: Sep 4 20:27:53.910700 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 4 20:27:53.910708 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff] Sep 4 20:27:53.910731 kernel: Normal empty Sep 4 20:27:53.910743 kernel: Movable zone start for each node Sep 4 20:27:53.910774 kernel: Early memory node ranges Sep 4 20:27:53.910783 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Sep 4 20:27:53.910791 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff] Sep 4 20:27:53.910799 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff] Sep 4 20:27:53.910811 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 4 20:27:53.910820 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Sep 4 20:27:53.910828 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges Sep 4 20:27:53.910836 kernel: ACPI: PM-Timer IO Port: 0x608 Sep 4 20:27:53.910844 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Sep 4 20:27:53.910852 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Sep 4 20:27:53.910860 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Sep 4 20:27:53.910868 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Sep 4 20:27:53.910877 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 4 20:27:53.910889 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Sep 4 20:27:53.910897 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Sep 4 20:27:53.910908 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 4 20:27:53.910916 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Sep 4 20:27:53.910929 kernel: TSC deadline timer available Sep 4 20:27:53.910940 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Sep 4 20:27:53.910952 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Sep 4 20:27:53.910962 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Sep 4 20:27:53.910970 kernel: Booting paravirtualized kernel on KVM Sep 4 20:27:53.910981 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 4 20:27:53.910993 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Sep 4 20:27:53.911003 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576 Sep 4 20:27:53.911016 kernel: 
pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152 Sep 4 20:27:53.911027 kernel: pcpu-alloc: [0] 0 1 Sep 4 20:27:53.911038 kernel: kvm-guest: PV spinlocks disabled, no host support Sep 4 20:27:53.911050 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=6662bd39fec77da4c9a5c59d2cba257325976309ed96904c83697df1825085bf Sep 4 20:27:53.911062 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 4 20:27:53.911077 kernel: random: crng init done Sep 4 20:27:53.911088 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 4 20:27:53.911099 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Sep 4 20:27:53.911110 kernel: Fallback order for Node 0: 0 Sep 4 20:27:53.911123 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803 Sep 4 20:27:53.911134 kernel: Policy zone: DMA32 Sep 4 20:27:53.911147 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 4 20:27:53.911161 kernel: Memory: 1965060K/2096612K available (12288K kernel code, 2303K rwdata, 22640K rodata, 49336K init, 2008K bss, 131292K reserved, 0K cma-reserved) Sep 4 20:27:53.911174 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Sep 4 20:27:53.911190 kernel: Kernel/User page tables isolation: enabled Sep 4 20:27:53.911198 kernel: ftrace: allocating 37670 entries in 148 pages Sep 4 20:27:53.911206 kernel: ftrace: allocated 148 pages with 3 groups Sep 4 20:27:53.911214 kernel: Dynamic Preempt: voluntary Sep 4 20:27:53.911223 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 4 20:27:53.911232 kernel: rcu: RCU event tracing is enabled. Sep 4 20:27:53.911240 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Sep 4 20:27:53.911249 kernel: Trampoline variant of Tasks RCU enabled. Sep 4 20:27:53.911257 kernel: Rude variant of Tasks RCU enabled. Sep 4 20:27:53.911265 kernel: Tracing variant of Tasks RCU enabled. Sep 4 20:27:53.911277 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 4 20:27:53.911285 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Sep 4 20:27:53.911293 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Sep 4 20:27:53.911301 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 4 20:27:53.911310 kernel: Console: colour VGA+ 80x25 Sep 4 20:27:53.911318 kernel: printk: console [tty0] enabled Sep 4 20:27:53.911326 kernel: printk: console [ttyS0] enabled Sep 4 20:27:53.911335 kernel: ACPI: Core revision 20230628 Sep 4 20:27:53.911343 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Sep 4 20:27:53.911354 kernel: APIC: Switch to symmetric I/O mode setup Sep 4 20:27:53.911362 kernel: x2apic enabled Sep 4 20:27:53.911371 kernel: APIC: Switched APIC routing to: physical x2apic Sep 4 20:27:53.911383 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Sep 4 20:27:53.911392 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns Sep 4 20:27:53.911409 kernel: Calibrating delay loop (skipped) preset value.. 
4988.28 BogoMIPS (lpj=2494140) Sep 4 20:27:53.911418 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Sep 4 20:27:53.911426 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Sep 4 20:27:53.911446 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 4 20:27:53.911455 kernel: Spectre V2 : Mitigation: Retpolines Sep 4 20:27:53.911463 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Sep 4 20:27:53.911475 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Sep 4 20:27:53.911484 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Sep 4 20:27:53.911492 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Sep 4 20:27:53.911502 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Sep 4 20:27:53.911510 kernel: MDS: Mitigation: Clear CPU buffers Sep 4 20:27:53.911520 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Sep 4 20:27:53.911534 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 4 20:27:53.911543 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 4 20:27:53.911552 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 4 20:27:53.911560 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 4 20:27:53.911569 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Sep 4 20:27:53.911578 kernel: Freeing SMP alternatives memory: 32K Sep 4 20:27:53.911587 kernel: pid_max: default: 32768 minimum: 301 Sep 4 20:27:53.911595 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity Sep 4 20:27:53.911607 kernel: SELinux: Initializing. Sep 4 20:27:53.911616 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Sep 4 20:27:53.911625 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Sep 4 20:27:53.911634 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1) Sep 4 20:27:53.911643 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Sep 4 20:27:53.911652 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Sep 4 20:27:53.911661 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Sep 4 20:27:53.911669 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only. Sep 4 20:27:53.911678 kernel: signal: max sigframe size: 1776 Sep 4 20:27:53.911691 kernel: rcu: Hierarchical SRCU implementation. Sep 4 20:27:53.911699 kernel: rcu: Max phase no-delay instances is 400. Sep 4 20:27:53.911708 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Sep 4 20:27:53.911717 kernel: smp: Bringing up secondary CPUs ... Sep 4 20:27:53.911725 kernel: smpboot: x86: Booting SMP configuration: Sep 4 20:27:53.911734 kernel: .... 
node #0, CPUs: #1 Sep 4 20:27:53.911742 kernel: smp: Brought up 1 node, 2 CPUs Sep 4 20:27:53.911778 kernel: smpboot: Max logical packages: 1 Sep 4 20:27:53.911787 kernel: smpboot: Total of 2 processors activated (9976.56 BogoMIPS) Sep 4 20:27:53.911799 kernel: devtmpfs: initialized Sep 4 20:27:53.911807 kernel: x86/mm: Memory block size: 128MB Sep 4 20:27:53.911816 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 4 20:27:53.911825 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Sep 4 20:27:53.911834 kernel: pinctrl core: initialized pinctrl subsystem Sep 4 20:27:53.911842 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 4 20:27:53.911851 kernel: audit: initializing netlink subsys (disabled) Sep 4 20:27:53.911863 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 4 20:27:53.911877 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 4 20:27:53.911893 kernel: audit: type=2000 audit(1725481673.195:1): state=initialized audit_enabled=0 res=1 Sep 4 20:27:53.911902 kernel: cpuidle: using governor menu Sep 4 20:27:53.911911 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 4 20:27:53.911920 kernel: dca service started, version 1.12.1 Sep 4 20:27:53.911928 kernel: PCI: Using configuration type 1 for base access Sep 4 20:27:53.911937 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Sep 4 20:27:53.911946 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 4 20:27:53.911959 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 4 20:27:53.911971 kernel: ACPI: Added _OSI(Module Device) Sep 4 20:27:53.911984 kernel: ACPI: Added _OSI(Processor Device) Sep 4 20:27:53.911993 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Sep 4 20:27:53.912009 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 4 20:27:53.912019 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 4 20:27:53.912027 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Sep 4 20:27:53.912036 kernel: ACPI: Interpreter enabled Sep 4 20:27:53.912044 kernel: ACPI: PM: (supports S0 S5) Sep 4 20:27:53.912053 kernel: ACPI: Using IOAPIC for interrupt routing Sep 4 20:27:53.912062 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 4 20:27:53.912073 kernel: PCI: Using E820 reservations for host bridge windows Sep 4 20:27:53.912082 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Sep 4 20:27:53.912091 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 4 20:27:53.912310 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Sep 4 20:27:53.912417 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Sep 4 20:27:53.912509 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Sep 4 20:27:53.912521 kernel: acpiphp: Slot [3] registered Sep 4 20:27:53.912534 kernel: acpiphp: Slot [4] registered Sep 4 20:27:53.912542 kernel: acpiphp: Slot [5] registered Sep 4 20:27:53.912551 kernel: acpiphp: Slot [6] registered Sep 4 20:27:53.912560 kernel: acpiphp: Slot [7] registered Sep 4 20:27:53.912568 kernel: acpiphp: Slot [8] registered Sep 4 20:27:53.912577 kernel: acpiphp: Slot [9] registered Sep 4 20:27:53.912586 kernel: acpiphp: Slot [10] registered Sep 4 20:27:53.912595 kernel: acpiphp: Slot [11] registered Sep 4 
20:27:53.912604 kernel: acpiphp: Slot [12] registered Sep 4 20:27:53.912613 kernel: acpiphp: Slot [13] registered Sep 4 20:27:53.912624 kernel: acpiphp: Slot [14] registered Sep 4 20:27:53.912633 kernel: acpiphp: Slot [15] registered Sep 4 20:27:53.912642 kernel: acpiphp: Slot [16] registered Sep 4 20:27:53.912651 kernel: acpiphp: Slot [17] registered Sep 4 20:27:53.912659 kernel: acpiphp: Slot [18] registered Sep 4 20:27:53.912668 kernel: acpiphp: Slot [19] registered Sep 4 20:27:53.912676 kernel: acpiphp: Slot [20] registered Sep 4 20:27:53.912685 kernel: acpiphp: Slot [21] registered Sep 4 20:27:53.912694 kernel: acpiphp: Slot [22] registered Sep 4 20:27:53.912706 kernel: acpiphp: Slot [23] registered Sep 4 20:27:53.912715 kernel: acpiphp: Slot [24] registered Sep 4 20:27:53.912723 kernel: acpiphp: Slot [25] registered Sep 4 20:27:53.912732 kernel: acpiphp: Slot [26] registered Sep 4 20:27:53.912740 kernel: acpiphp: Slot [27] registered Sep 4 20:27:53.912765 kernel: acpiphp: Slot [28] registered Sep 4 20:27:53.912774 kernel: acpiphp: Slot [29] registered Sep 4 20:27:53.912783 kernel: acpiphp: Slot [30] registered Sep 4 20:27:53.912792 kernel: acpiphp: Slot [31] registered Sep 4 20:27:53.912804 kernel: PCI host bridge to bus 0000:00 Sep 4 20:27:53.912921 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 4 20:27:53.913014 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Sep 4 20:27:53.913101 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 4 20:27:53.913185 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Sep 4 20:27:53.913268 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Sep 4 20:27:53.913351 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 4 20:27:53.913474 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Sep 4 20:27:53.913594 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Sep 4 20:27:53.913737 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Sep 4 20:27:53.913858 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef] Sep 4 20:27:53.913952 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Sep 4 20:27:53.914046 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Sep 4 20:27:53.914140 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Sep 4 20:27:53.914253 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Sep 4 20:27:53.914426 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 Sep 4 20:27:53.914582 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f] Sep 4 20:27:53.914700 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Sep 4 20:27:53.914854 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Sep 4 20:27:53.914952 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Sep 4 20:27:53.915083 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Sep 4 20:27:53.915179 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Sep 4 20:27:53.915273 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref] Sep 4 20:27:53.915369 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff] Sep 4 20:27:53.915465 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Sep 4 20:27:53.915560 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 4 20:27:53.915666 kernel: pci 0000:00:03.0: 
[1af4:1000] type 00 class 0x020000 Sep 4 20:27:53.915816 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf] Sep 4 20:27:53.915933 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff] Sep 4 20:27:53.916026 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref] Sep 4 20:27:53.916132 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Sep 4 20:27:53.916227 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df] Sep 4 20:27:53.916323 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff] Sep 4 20:27:53.916416 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref] Sep 4 20:27:53.916525 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 Sep 4 20:27:53.916619 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f] Sep 4 20:27:53.916711 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff] Sep 4 20:27:53.916815 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref] Sep 4 20:27:53.916915 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 Sep 4 20:27:53.917032 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f] Sep 4 20:27:53.917133 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff] Sep 4 20:27:53.917225 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref] Sep 4 20:27:53.917330 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000 Sep 4 20:27:53.917424 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff] Sep 4 20:27:53.917560 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff] Sep 4 20:27:53.917682 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref] Sep 4 20:27:53.917822 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 Sep 4 20:27:53.917930 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f] Sep 4 20:27:53.918039 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref] Sep 4 20:27:53.918052 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Sep 4 20:27:53.918061 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Sep 4 20:27:53.918070 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 4 20:27:53.918079 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Sep 4 20:27:53.918088 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Sep 4 20:27:53.918097 kernel: iommu: Default domain type: Translated Sep 4 20:27:53.918110 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 4 20:27:53.918119 kernel: PCI: Using ACPI for IRQ routing Sep 4 20:27:53.918127 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 4 20:27:53.918136 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Sep 4 20:27:53.918145 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff] Sep 4 20:27:53.918243 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Sep 4 20:27:53.918337 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Sep 4 20:27:53.918449 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 4 20:27:53.918465 kernel: vgaarb: loaded Sep 4 20:27:53.918474 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Sep 4 20:27:53.918483 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Sep 4 20:27:53.918492 kernel: clocksource: Switched to clocksource kvm-clock Sep 4 20:27:53.918502 kernel: VFS: Disk quotas dquot_6.6.0 Sep 4 20:27:53.918511 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 4 20:27:53.918520 kernel: pnp: PnP ACPI init Sep 4 
20:27:53.918530 kernel: pnp: PnP ACPI: found 4 devices Sep 4 20:27:53.918539 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 4 20:27:53.918550 kernel: NET: Registered PF_INET protocol family Sep 4 20:27:53.918559 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 4 20:27:53.918568 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Sep 4 20:27:53.918577 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 4 20:27:53.918586 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Sep 4 20:27:53.918596 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Sep 4 20:27:53.918604 kernel: TCP: Hash tables configured (established 16384 bind 16384) Sep 4 20:27:53.918613 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Sep 4 20:27:53.918622 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Sep 4 20:27:53.918633 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 4 20:27:53.918642 kernel: NET: Registered PF_XDP protocol family Sep 4 20:27:53.918852 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 4 20:27:53.918946 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 4 20:27:53.919030 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 4 20:27:53.919113 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Sep 4 20:27:53.919226 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Sep 4 20:27:53.919352 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Sep 4 20:27:53.919458 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Sep 4 20:27:53.919471 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Sep 4 20:27:53.919566 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7b0 took 30747 usecs Sep 4 20:27:53.919578 kernel: PCI: CLS 0 bytes, default 64 Sep 4 20:27:53.919588 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Sep 4 20:27:53.919597 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns Sep 4 20:27:53.919605 kernel: Initialise system trusted keyrings Sep 4 20:27:53.919614 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Sep 4 20:27:53.919627 kernel: Key type asymmetric registered Sep 4 20:27:53.919636 kernel: Asymmetric key parser 'x509' registered Sep 4 20:27:53.919645 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Sep 4 20:27:53.919653 kernel: io scheduler mq-deadline registered Sep 4 20:27:53.919662 kernel: io scheduler kyber registered Sep 4 20:27:53.919671 kernel: io scheduler bfq registered Sep 4 20:27:53.919686 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 4 20:27:53.919698 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Sep 4 20:27:53.919710 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Sep 4 20:27:53.919723 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Sep 4 20:27:53.919741 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 4 20:27:53.919887 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 4 20:27:53.919902 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Sep 4 20:27:53.919916 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 4 20:27:53.919928 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 4 
20:27:53.920140 kernel: rtc_cmos 00:03: RTC can wake from S4 Sep 4 20:27:53.920156 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 4 20:27:53.920244 kernel: rtc_cmos 00:03: registered as rtc0 Sep 4 20:27:53.920339 kernel: rtc_cmos 00:03: setting system clock to 2024-09-04T20:27:53 UTC (1725481673) Sep 4 20:27:53.920425 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Sep 4 20:27:53.920437 kernel: intel_pstate: CPU model not supported Sep 4 20:27:53.920446 kernel: NET: Registered PF_INET6 protocol family Sep 4 20:27:53.920455 kernel: Segment Routing with IPv6 Sep 4 20:27:53.920464 kernel: In-situ OAM (IOAM) with IPv6 Sep 4 20:27:53.920473 kernel: NET: Registered PF_PACKET protocol family Sep 4 20:27:53.920482 kernel: Key type dns_resolver registered Sep 4 20:27:53.920493 kernel: IPI shorthand broadcast: enabled Sep 4 20:27:53.920502 kernel: sched_clock: Marking stable (876003492, 118630548)->(1094309789, -99675749) Sep 4 20:27:53.920512 kernel: registered taskstats version 1 Sep 4 20:27:53.920521 kernel: Loading compiled-in X.509 certificates Sep 4 20:27:53.920530 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.48-flatcar: a53bb4e7e3319f75620f709d8a6c7aef0adb3b02' Sep 4 20:27:53.920539 kernel: Key type .fscrypt registered Sep 4 20:27:53.920547 kernel: Key type fscrypt-provisioning registered Sep 4 20:27:53.920557 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 4 20:27:53.920566 kernel: ima: Allocated hash algorithm: sha1 Sep 4 20:27:53.920578 kernel: ima: No architecture policies found Sep 4 20:27:53.920587 kernel: clk: Disabling unused clocks Sep 4 20:27:53.920595 kernel: Freeing unused kernel image (initmem) memory: 49336K Sep 4 20:27:53.920604 kernel: Write protecting the kernel read-only data: 36864k Sep 4 20:27:53.920613 kernel: Freeing unused kernel image (rodata/data gap) memory: 1936K Sep 4 20:27:53.920640 kernel: Run /init as init process Sep 4 20:27:53.920652 kernel: with arguments: Sep 4 20:27:53.920662 kernel: /init Sep 4 20:27:53.920674 kernel: with environment: Sep 4 20:27:53.920690 kernel: HOME=/ Sep 4 20:27:53.920702 kernel: TERM=linux Sep 4 20:27:53.920716 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 4 20:27:53.920734 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 4 20:27:53.922789 systemd[1]: Detected virtualization kvm. Sep 4 20:27:53.922827 systemd[1]: Detected architecture x86-64. Sep 4 20:27:53.922843 systemd[1]: Running in initrd. Sep 4 20:27:53.922856 systemd[1]: No hostname configured, using default hostname. Sep 4 20:27:53.922879 systemd[1]: Hostname set to . Sep 4 20:27:53.922891 systemd[1]: Initializing machine ID from VM UUID. Sep 4 20:27:53.922901 systemd[1]: Queued start job for default target initrd.target. Sep 4 20:27:53.922911 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 20:27:53.922920 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 20:27:53.922931 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Sep 4 20:27:53.922941 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 4 20:27:53.922951 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 4 20:27:53.922964 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 4 20:27:53.922975 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 4 20:27:53.922985 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 4 20:27:53.922995 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 20:27:53.923005 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 4 20:27:53.923014 systemd[1]: Reached target paths.target - Path Units. Sep 4 20:27:53.923027 systemd[1]: Reached target slices.target - Slice Units. Sep 4 20:27:53.923037 systemd[1]: Reached target swap.target - Swaps. Sep 4 20:27:53.923047 systemd[1]: Reached target timers.target - Timer Units. Sep 4 20:27:53.923059 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 4 20:27:53.923069 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 4 20:27:53.923078 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 4 20:27:53.923090 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 4 20:27:53.923101 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 4 20:27:53.923110 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 4 20:27:53.923120 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 20:27:53.923129 systemd[1]: Reached target sockets.target - Socket Units. Sep 4 20:27:53.923139 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 4 20:27:53.923149 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 4 20:27:53.923159 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 4 20:27:53.923171 systemd[1]: Starting systemd-fsck-usr.service... Sep 4 20:27:53.923181 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 4 20:27:53.923190 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 4 20:27:53.923200 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 20:27:53.923209 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 4 20:27:53.923257 systemd-journald[182]: Collecting audit messages is disabled. Sep 4 20:27:53.923284 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 20:27:53.923293 systemd[1]: Finished systemd-fsck-usr.service. Sep 4 20:27:53.923304 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 4 20:27:53.923317 systemd-journald[182]: Journal started Sep 4 20:27:53.923338 systemd-journald[182]: Runtime Journal (/run/log/journal/d5eae7c9d6714d9fa77148bc8a835cee) is 4.9M, max 39.3M, 34.4M free. Sep 4 20:27:53.930879 systemd[1]: Started systemd-journald.service - Journal Service. Sep 4 20:27:53.947210 systemd-modules-load[183]: Inserted module 'overlay' Sep 4 20:27:53.951082 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... 
Sep 4 20:27:53.978465 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 20:27:53.981775 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 4 20:27:53.983053 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 4 20:27:53.990348 kernel: Bridge firewalling registered Sep 4 20:27:53.989180 systemd-modules-load[183]: Inserted module 'br_netfilter' Sep 4 20:27:53.995092 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 20:27:54.002042 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 4 20:27:54.003423 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 4 20:27:54.004551 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Sep 4 20:27:54.014053 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 20:27:54.022154 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 20:27:54.024615 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 20:27:54.033005 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 4 20:27:54.034543 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 4 20:27:54.038984 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 4 20:27:54.048989 dracut-cmdline[214]: dracut-dracut-053 Sep 4 20:27:54.054353 dracut-cmdline[214]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=6662bd39fec77da4c9a5c59d2cba257325976309ed96904c83697df1825085bf Sep 4 20:27:54.089645 systemd-resolved[218]: Positive Trust Anchors: Sep 4 20:27:54.089913 systemd-resolved[218]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 4 20:27:54.089988 systemd-resolved[218]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Sep 4 20:27:54.095934 systemd-resolved[218]: Defaulting to hostname 'linux'. Sep 4 20:27:54.098061 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 4 20:27:54.098525 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 4 20:27:54.170843 kernel: SCSI subsystem initialized Sep 4 20:27:54.181790 kernel: Loading iSCSI transport class v2.0-870. 
Sep 4 20:27:54.195833 kernel: iscsi: registered transport (tcp) Sep 4 20:27:54.222794 kernel: iscsi: registered transport (qla4xxx) Sep 4 20:27:54.222868 kernel: QLogic iSCSI HBA Driver Sep 4 20:27:54.273570 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 4 20:27:54.278974 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 4 20:27:54.317925 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 4 20:27:54.318033 kernel: device-mapper: uevent: version 1.0.3 Sep 4 20:27:54.319283 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 4 20:27:54.367827 kernel: raid6: avx2x4 gen() 16697 MB/s Sep 4 20:27:54.384811 kernel: raid6: avx2x2 gen() 16488 MB/s Sep 4 20:27:54.402536 kernel: raid6: avx2x1 gen() 11520 MB/s Sep 4 20:27:54.402636 kernel: raid6: using algorithm avx2x4 gen() 16697 MB/s Sep 4 20:27:54.420236 kernel: raid6: .... xor() 5764 MB/s, rmw enabled Sep 4 20:27:54.420344 kernel: raid6: using avx2x2 recovery algorithm Sep 4 20:27:54.449803 kernel: xor: automatically using best checksumming function avx Sep 4 20:27:54.655800 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 4 20:27:54.672163 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 4 20:27:54.681071 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 20:27:54.698483 systemd-udevd[401]: Using default interface naming scheme 'v255'. Sep 4 20:27:54.706045 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 20:27:54.714536 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 4 20:27:54.735389 dracut-pre-trigger[406]: rd.md=0: removing MD RAID activation Sep 4 20:27:54.779877 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 4 20:27:54.791274 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 4 20:27:54.873893 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 20:27:54.884127 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 4 20:27:54.903631 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 4 20:27:54.913894 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 4 20:27:54.915743 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 20:27:54.916779 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 4 20:27:54.925290 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 4 20:27:54.950233 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 4 20:27:54.997021 kernel: cryptd: max_cpu_qlen set to 1000 Sep 4 20:27:55.000050 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Sep 4 20:27:55.023817 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Sep 4 20:27:55.059293 kernel: scsi host0: Virtio SCSI HBA Sep 4 20:27:55.059330 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 4 20:27:55.059344 kernel: GPT:9289727 != 125829119 Sep 4 20:27:55.059356 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 4 20:27:55.059368 kernel: GPT:9289727 != 125829119 Sep 4 20:27:55.059379 kernel: GPT: Use GNU Parted to correct GPT errors. 
Sep 4 20:27:55.059398 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 4 20:27:55.059410 kernel: AVX2 version of gcm_enc/dec engaged. Sep 4 20:27:55.059421 kernel: libata version 3.00 loaded. Sep 4 20:27:55.065777 kernel: AES CTR mode by8 optimization enabled Sep 4 20:27:55.067381 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 4 20:27:55.069545 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 20:27:55.071564 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 20:27:55.073064 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 20:27:55.073245 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 20:27:55.073688 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 20:27:55.082118 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Sep 4 20:27:55.082573 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 20:27:55.084949 kernel: virtio_blk virtio5: [vdb] 968 512-byte logical blocks (496 kB/484 KiB) Sep 4 20:27:55.090158 kernel: ata_piix 0000:00:01.1: version 2.13 Sep 4 20:27:55.127833 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (452) Sep 4 20:27:55.127906 kernel: BTRFS: device fsid d110be6f-93a3-451a-b365-11b5d04e0602 devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (447) Sep 4 20:27:55.139786 kernel: scsi host1: ata_piix Sep 4 20:27:55.146902 kernel: scsi host2: ata_piix Sep 4 20:27:55.147146 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 Sep 4 20:27:55.147162 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 Sep 4 20:27:55.158780 kernel: ACPI: bus type USB registered Sep 4 20:27:55.158862 kernel: usbcore: registered new interface driver usbfs Sep 4 20:27:55.158884 kernel: usbcore: registered new interface driver hub Sep 4 20:27:55.158900 kernel: usbcore: registered new device driver usb Sep 4 20:27:55.161155 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 4 20:27:55.199418 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 20:27:55.205747 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 4 20:27:55.216230 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 4 20:27:55.216854 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 4 20:27:55.228126 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 4 20:27:55.234997 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 4 20:27:55.237983 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 20:27:55.253859 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 4 20:27:55.255781 disk-uuid[530]: Primary Header is updated. Sep 4 20:27:55.255781 disk-uuid[530]: Secondary Entries is updated. Sep 4 20:27:55.255781 disk-uuid[530]: Secondary Header is updated. Sep 4 20:27:55.280996 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Sep 4 20:27:55.374863 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Sep 4 20:27:55.375149 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Sep 4 20:27:55.377914 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Sep 4 20:27:55.378219 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Sep 4 20:27:55.380797 kernel: hub 1-0:1.0: USB hub found Sep 4 20:27:55.381074 kernel: hub 1-0:1.0: 2 ports detected Sep 4 20:27:56.272887 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 4 20:27:56.273957 disk-uuid[532]: The operation has completed successfully. Sep 4 20:27:56.322809 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 4 20:27:56.322937 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 4 20:27:56.333498 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 4 20:27:56.336830 sh[562]: Success Sep 4 20:27:56.357856 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Sep 4 20:27:56.436196 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 4 20:27:56.437906 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 4 20:27:56.439204 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 4 20:27:56.470166 kernel: BTRFS info (device dm-0): first mount of filesystem d110be6f-93a3-451a-b365-11b5d04e0602 Sep 4 20:27:56.470343 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 4 20:27:56.472661 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 4 20:27:56.472732 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 4 20:27:56.474037 kernel: BTRFS info (device dm-0): using free space tree Sep 4 20:27:56.482654 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 4 20:27:56.484532 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 4 20:27:56.491053 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 4 20:27:56.495058 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 4 20:27:56.513702 kernel: BTRFS info (device vda6): first mount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b Sep 4 20:27:56.513815 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 4 20:27:56.513841 kernel: BTRFS info (device vda6): using free space tree Sep 4 20:27:56.518822 kernel: BTRFS info (device vda6): auto enabling async discard Sep 4 20:27:56.533299 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 4 20:27:56.536875 kernel: BTRFS info (device vda6): last unmount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b Sep 4 20:27:56.544697 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 4 20:27:56.552067 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 4 20:27:56.640776 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 4 20:27:56.656148 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 4 20:27:56.692881 systemd-networkd[748]: lo: Link UP Sep 4 20:27:56.692889 systemd-networkd[748]: lo: Gained carrier Sep 4 20:27:56.696247 systemd-networkd[748]: Enumeration completed Sep 4 20:27:56.696402 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Sep 4 20:27:56.697954 systemd-networkd[748]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Sep 4 20:27:56.697962 systemd-networkd[748]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Sep 4 20:27:56.699538 systemd[1]: Reached target network.target - Network. Sep 4 20:27:56.699668 systemd-networkd[748]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 20:27:56.699672 systemd-networkd[748]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 4 20:27:56.701192 systemd-networkd[748]: eth0: Link UP Sep 4 20:27:56.701199 systemd-networkd[748]: eth0: Gained carrier Sep 4 20:27:56.701211 systemd-networkd[748]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Sep 4 20:27:56.703506 systemd-networkd[748]: eth1: Link UP Sep 4 20:27:56.703512 systemd-networkd[748]: eth1: Gained carrier Sep 4 20:27:56.703529 systemd-networkd[748]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 20:27:56.718681 ignition[656]: Ignition 2.18.0 Sep 4 20:27:56.718692 ignition[656]: Stage: fetch-offline Sep 4 20:27:56.718827 ignition[656]: no configs at "/usr/lib/ignition/base.d" Sep 4 20:27:56.718845 ignition[656]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Sep 4 20:27:56.721049 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 4 20:27:56.719013 ignition[656]: parsed url from cmdline: "" Sep 4 20:27:56.721635 systemd-networkd[748]: eth0: DHCPv4 address 64.23.130.28/20, gateway 64.23.128.1 acquired from 169.254.169.253 Sep 4 20:27:56.719017 ignition[656]: no config URL provided Sep 4 20:27:56.724901 systemd-networkd[748]: eth1: DHCPv4 address 10.124.0.5/20 acquired from 169.254.169.253 Sep 4 20:27:56.719023 ignition[656]: reading system config file "/usr/lib/ignition/user.ign" Sep 4 20:27:56.719033 ignition[656]: no config at "/usr/lib/ignition/user.ign" Sep 4 20:27:56.719040 ignition[656]: failed to fetch config: resource requires networking Sep 4 20:27:56.719249 ignition[656]: Ignition finished successfully Sep 4 20:27:56.733073 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Sep 4 20:27:56.752824 ignition[758]: Ignition 2.18.0 Sep 4 20:27:56.752841 ignition[758]: Stage: fetch Sep 4 20:27:56.753131 ignition[758]: no configs at "/usr/lib/ignition/base.d" Sep 4 20:27:56.753148 ignition[758]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Sep 4 20:27:56.753329 ignition[758]: parsed url from cmdline: "" Sep 4 20:27:56.753334 ignition[758]: no config URL provided Sep 4 20:27:56.753343 ignition[758]: reading system config file "/usr/lib/ignition/user.ign" Sep 4 20:27:56.753357 ignition[758]: no config at "/usr/lib/ignition/user.ign" Sep 4 20:27:56.753389 ignition[758]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Sep 4 20:27:56.774932 ignition[758]: GET result: OK Sep 4 20:27:56.775125 ignition[758]: parsing config with SHA512: 1ed633d29ec73cc11f892d1f3f005ab49647ac0e5e653330ae771be0fd73d8e353a855c7f7fc166f4858e6cfe583a5d14d747ea32c27f8f0b56f4495109dd67f Sep 4 20:27:56.781318 unknown[758]: fetched base config from "system" Sep 4 20:27:56.782571 ignition[758]: fetch: fetch complete Sep 4 20:27:56.781342 unknown[758]: fetched base config from "system" Sep 4 20:27:56.782582 ignition[758]: fetch: fetch passed Sep 4 20:27:56.781358 unknown[758]: fetched user config from "digitalocean" Sep 4 20:27:56.782677 ignition[758]: Ignition finished successfully Sep 4 20:27:56.785387 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Sep 4 20:27:56.792030 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 4 20:27:56.828377 ignition[765]: Ignition 2.18.0 Sep 4 20:27:56.828394 ignition[765]: Stage: kargs Sep 4 20:27:56.828807 ignition[765]: no configs at "/usr/lib/ignition/base.d" Sep 4 20:27:56.828824 ignition[765]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Sep 4 20:27:56.832459 ignition[765]: kargs: kargs passed Sep 4 20:27:56.832572 ignition[765]: Ignition finished successfully Sep 4 20:27:56.834451 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 4 20:27:56.840071 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 4 20:27:56.870338 ignition[772]: Ignition 2.18.0 Sep 4 20:27:56.870356 ignition[772]: Stage: disks Sep 4 20:27:56.870661 ignition[772]: no configs at "/usr/lib/ignition/base.d" Sep 4 20:27:56.870682 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Sep 4 20:27:56.872298 ignition[772]: disks: disks passed Sep 4 20:27:56.872388 ignition[772]: Ignition finished successfully Sep 4 20:27:56.874432 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 4 20:27:56.875537 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 4 20:27:56.879261 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 4 20:27:56.880294 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 4 20:27:56.881305 systemd[1]: Reached target sysinit.target - System Initialization. Sep 4 20:27:56.882127 systemd[1]: Reached target basic.target - Basic System. Sep 4 20:27:56.891070 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 4 20:27:56.907169 systemd-fsck[781]: ROOT: clean, 14/553520 files, 52654/553472 blocks Sep 4 20:27:56.912817 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 4 20:27:56.919271 systemd[1]: Mounting sysroot.mount - /sysroot... 
Sep 4 20:27:57.040778 kernel: EXT4-fs (vda9): mounted filesystem 84a5cefa-c3c7-47d7-9305-7e6877f73628 r/w with ordered data mode. Quota mode: none. Sep 4 20:27:57.041489 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 4 20:27:57.042575 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 4 20:27:57.051943 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 4 20:27:57.055150 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 4 20:27:57.056984 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent... Sep 4 20:27:57.069234 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Sep 4 20:27:57.072065 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (789) Sep 4 20:27:57.070140 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 4 20:27:57.079653 kernel: BTRFS info (device vda6): first mount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b Sep 4 20:27:57.079701 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 4 20:27:57.079725 kernel: BTRFS info (device vda6): using free space tree Sep 4 20:27:57.079772 kernel: BTRFS info (device vda6): auto enabling async discard Sep 4 20:27:57.070193 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 4 20:27:57.088009 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 4 20:27:57.089136 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 4 20:27:57.092830 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 4 20:27:57.178579 coreos-metadata[791]: Sep 04 20:27:57.178 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Sep 4 20:27:57.180115 initrd-setup-root[819]: cut: /sysroot/etc/passwd: No such file or directory Sep 4 20:27:57.188270 initrd-setup-root[826]: cut: /sysroot/etc/group: No such file or directory Sep 4 20:27:57.189499 coreos-metadata[792]: Sep 04 20:27:57.189 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Sep 4 20:27:57.193066 coreos-metadata[791]: Sep 04 20:27:57.192 INFO Fetch successful Sep 4 20:27:57.194648 initrd-setup-root[833]: cut: /sysroot/etc/shadow: No such file or directory Sep 4 20:27:57.199551 coreos-metadata[792]: Sep 04 20:27:57.199 INFO Fetch successful Sep 4 20:27:57.203392 initrd-setup-root[840]: cut: /sysroot/etc/gshadow: No such file or directory Sep 4 20:27:57.204427 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully. Sep 4 20:27:57.205724 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent. Sep 4 20:27:57.208082 coreos-metadata[792]: Sep 04 20:27:57.207 INFO wrote hostname ci-3975.2.1-0-09c0a9ae8e to /sysroot/etc/hostname Sep 4 20:27:57.208919 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 4 20:27:57.306951 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 4 20:27:57.312977 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 4 20:27:57.315188 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 4 20:27:57.327787 kernel: BTRFS info (device vda6): last unmount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b Sep 4 20:27:57.350731 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Sep 4 20:27:57.357447 ignition[910]: INFO : Ignition 2.18.0 Sep 4 20:27:57.357447 ignition[910]: INFO : Stage: mount Sep 4 20:27:57.358631 ignition[910]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 20:27:57.358631 ignition[910]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Sep 4 20:27:57.360583 ignition[910]: INFO : mount: mount passed Sep 4 20:27:57.360583 ignition[910]: INFO : Ignition finished successfully Sep 4 20:27:57.360192 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 4 20:27:57.366954 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 4 20:27:57.470984 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 4 20:27:57.483231 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 4 20:27:57.493887 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (922) Sep 4 20:27:57.493955 kernel: BTRFS info (device vda6): first mount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b Sep 4 20:27:57.494877 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 4 20:27:57.496134 kernel: BTRFS info (device vda6): using free space tree Sep 4 20:27:57.500882 kernel: BTRFS info (device vda6): auto enabling async discard Sep 4 20:27:57.501287 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 4 20:27:57.528864 ignition[939]: INFO : Ignition 2.18.0 Sep 4 20:27:57.528864 ignition[939]: INFO : Stage: files Sep 4 20:27:57.530028 ignition[939]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 20:27:57.530028 ignition[939]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Sep 4 20:27:57.531246 ignition[939]: DEBUG : files: compiled without relabeling support, skipping Sep 4 20:27:57.531747 ignition[939]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 4 20:27:57.531747 ignition[939]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 4 20:27:57.535082 ignition[939]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 4 20:27:57.535739 ignition[939]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 4 20:27:57.535739 ignition[939]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 4 20:27:57.535621 unknown[939]: wrote ssh authorized keys file for user: core Sep 4 20:27:57.537639 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 4 20:27:57.538598 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Sep 4 20:27:57.587806 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 4 20:27:57.630401 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 4 20:27:57.630401 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 4 20:27:57.632436 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 4 20:27:57.632436 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 4 20:27:57.632436 ignition[939]: INFO : files: createFilesystemsFiles: 
createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 4 20:27:57.632436 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 4 20:27:57.632436 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 4 20:27:57.632436 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 4 20:27:57.632436 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 4 20:27:57.632436 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 4 20:27:57.632436 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 4 20:27:57.632436 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Sep 4 20:27:57.632436 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Sep 4 20:27:57.632436 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Sep 4 20:27:57.632436 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Sep 4 20:27:58.074296 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 4 20:27:58.097025 systemd-networkd[748]: eth0: Gained IPv6LL Sep 4 20:27:58.481050 systemd-networkd[748]: eth1: Gained IPv6LL Sep 4 20:27:58.592162 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Sep 4 20:27:58.592162 ignition[939]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 4 20:27:58.593935 ignition[939]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 4 20:27:58.593935 ignition[939]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 4 20:27:58.593935 ignition[939]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 4 20:27:58.593935 ignition[939]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Sep 4 20:27:58.593935 ignition[939]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Sep 4 20:27:58.599441 ignition[939]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 4 20:27:58.599441 ignition[939]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 4 20:27:58.599441 ignition[939]: INFO : files: files passed Sep 4 20:27:58.599441 ignition[939]: INFO : Ignition finished successfully Sep 4 20:27:58.597084 systemd[1]: Finished ignition-files.service - 
Ignition (files). Sep 4 20:27:58.611185 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 4 20:27:58.613304 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 4 20:27:58.617304 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 4 20:27:58.618482 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 4 20:27:58.648755 initrd-setup-root-after-ignition[968]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 4 20:27:58.648755 initrd-setup-root-after-ignition[968]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 4 20:27:58.651726 initrd-setup-root-after-ignition[972]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 4 20:27:58.651846 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 4 20:27:58.654396 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 4 20:27:58.668290 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 4 20:27:58.710585 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 4 20:27:58.711583 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 4 20:27:58.714070 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 4 20:27:58.714881 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 4 20:27:58.715924 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 4 20:27:58.727186 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 4 20:27:58.747514 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 4 20:27:58.756251 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 4 20:27:58.773463 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 4 20:27:58.774090 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 20:27:58.774692 systemd[1]: Stopped target timers.target - Timer Units. Sep 4 20:27:58.775519 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 4 20:27:58.775682 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 4 20:27:58.776849 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 4 20:27:58.777641 systemd[1]: Stopped target basic.target - Basic System. Sep 4 20:27:58.778362 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 4 20:27:58.779191 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 4 20:27:58.779856 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 4 20:27:58.780596 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 4 20:27:58.781345 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 4 20:27:58.782137 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 4 20:27:58.783331 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 4 20:27:58.784376 systemd[1]: Stopped target swap.target - Swaps. Sep 4 20:27:58.785046 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 4 20:27:58.785247 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. 
Sep 4 20:27:58.786155 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 4 20:27:58.786964 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 20:27:58.787636 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 4 20:27:58.787821 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 20:27:58.788506 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 4 20:27:58.788721 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 4 20:27:58.789510 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 4 20:27:58.789673 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 4 20:27:58.790710 systemd[1]: ignition-files.service: Deactivated successfully. Sep 4 20:27:58.790927 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 4 20:27:58.791502 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Sep 4 20:27:58.791631 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 4 20:27:58.799165 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 4 20:27:58.803143 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 4 20:27:58.804042 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 4 20:27:58.804251 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 20:27:58.804818 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 4 20:27:58.804952 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 4 20:27:58.815678 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 4 20:27:58.818907 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 4 20:27:58.820908 ignition[992]: INFO : Ignition 2.18.0 Sep 4 20:27:58.820908 ignition[992]: INFO : Stage: umount Sep 4 20:27:58.820908 ignition[992]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 20:27:58.820908 ignition[992]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Sep 4 20:27:58.824777 ignition[992]: INFO : umount: umount passed Sep 4 20:27:58.824777 ignition[992]: INFO : Ignition finished successfully Sep 4 20:27:58.824628 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 4 20:27:58.827127 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 4 20:27:58.829684 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 4 20:27:58.830376 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 4 20:27:58.831365 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 4 20:27:58.832018 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 4 20:27:58.833047 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 4 20:27:58.833482 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 4 20:27:58.837864 systemd[1]: Stopped target network.target - Network. Sep 4 20:27:58.838949 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 4 20:27:58.839632 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 4 20:27:58.840866 systemd[1]: Stopped target paths.target - Path Units. Sep 4 20:27:58.841556 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Sep 4 20:27:58.844906 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 20:27:58.845543 systemd[1]: Stopped target slices.target - Slice Units. Sep 4 20:27:58.845879 systemd[1]: Stopped target sockets.target - Socket Units. Sep 4 20:27:58.846308 systemd[1]: iscsid.socket: Deactivated successfully. Sep 4 20:27:58.846374 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 4 20:27:58.848981 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 4 20:27:58.849071 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 4 20:27:58.864371 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 4 20:27:58.864484 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 4 20:27:58.865272 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 4 20:27:58.865346 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 4 20:27:58.884273 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 4 20:27:58.885110 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 4 20:27:58.886826 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 4 20:27:58.888636 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 4 20:27:58.888981 systemd-networkd[748]: eth0: DHCPv6 lease lost Sep 4 20:27:58.890449 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 4 20:27:58.890991 systemd-networkd[748]: eth1: DHCPv6 lease lost Sep 4 20:27:58.892669 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 4 20:27:58.892824 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 4 20:27:58.896646 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 4 20:27:58.897648 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 4 20:27:58.900984 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 4 20:27:58.901590 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 4 20:27:58.902586 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 4 20:27:58.903079 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 4 20:27:58.909130 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 4 20:27:58.912162 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 4 20:27:58.912304 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 4 20:27:58.914130 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 4 20:27:58.914201 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 4 20:27:58.915148 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 4 20:27:58.915203 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 4 20:27:58.915900 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 4 20:27:58.915942 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Sep 4 20:27:58.919921 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 20:27:58.946117 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 4 20:27:58.946317 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 20:27:58.947631 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Sep 4 20:27:58.947741 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 4 20:27:58.949015 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 4 20:27:58.949059 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 20:27:58.949826 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 4 20:27:58.949878 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 4 20:27:58.951171 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 4 20:27:58.951243 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 4 20:27:58.952217 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 4 20:27:58.952290 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 20:27:58.963102 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 4 20:27:58.964378 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 4 20:27:58.964501 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 20:27:58.965116 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 4 20:27:58.965197 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 4 20:27:58.965744 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 4 20:27:58.967975 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 20:27:58.968603 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 20:27:58.968677 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 20:27:58.971917 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 4 20:27:58.972104 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 4 20:27:58.974537 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 4 20:27:58.974707 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 4 20:27:58.977444 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 4 20:27:58.989235 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 4 20:27:59.001366 systemd[1]: Switching root. Sep 4 20:27:59.045545 systemd-journald[182]: Journal stopped Sep 4 20:28:00.237322 systemd-journald[182]: Received SIGTERM from PID 1 (systemd). Sep 4 20:28:00.237419 kernel: SELinux: policy capability network_peer_controls=1 Sep 4 20:28:00.237450 kernel: SELinux: policy capability open_perms=1 Sep 4 20:28:00.237474 kernel: SELinux: policy capability extended_socket_class=1 Sep 4 20:28:00.237487 kernel: SELinux: policy capability always_check_network=0 Sep 4 20:28:00.237499 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 4 20:28:00.237518 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 4 20:28:00.237535 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 4 20:28:00.237552 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 4 20:28:00.237564 kernel: audit: type=1403 audit(1725481679.177:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 4 20:28:00.237583 systemd[1]: Successfully loaded SELinux policy in 42.391ms. Sep 4 20:28:00.237604 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 17.782ms. 
Sep 4 20:28:00.237620 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 4 20:28:00.237634 systemd[1]: Detected virtualization kvm. Sep 4 20:28:00.237649 systemd[1]: Detected architecture x86-64. Sep 4 20:28:00.237665 systemd[1]: Detected first boot. Sep 4 20:28:00.237678 systemd[1]: Hostname set to . Sep 4 20:28:00.237691 systemd[1]: Initializing machine ID from VM UUID. Sep 4 20:28:00.237703 zram_generator::config[1034]: No configuration found. Sep 4 20:28:00.237719 systemd[1]: Populated /etc with preset unit settings. Sep 4 20:28:00.237732 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 4 20:28:00.237744 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 4 20:28:00.238839 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 4 20:28:00.238871 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 4 20:28:00.238898 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 4 20:28:00.238918 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 4 20:28:00.238939 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 4 20:28:00.238961 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 4 20:28:00.238984 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 4 20:28:00.239004 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 4 20:28:00.239028 systemd[1]: Created slice user.slice - User and Session Slice. Sep 4 20:28:00.239052 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 20:28:00.239079 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 20:28:00.239101 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 4 20:28:00.239122 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 4 20:28:00.239144 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 4 20:28:00.239165 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 4 20:28:00.239186 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 4 20:28:00.239208 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 20:28:00.239231 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 4 20:28:00.239266 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 4 20:28:00.239286 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 4 20:28:00.239305 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 4 20:28:00.239325 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 20:28:00.239346 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 4 20:28:00.239365 systemd[1]: Reached target slices.target - Slice Units. 
Sep 4 20:28:00.239389 systemd[1]: Reached target swap.target - Swaps. Sep 4 20:28:00.239412 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 4 20:28:00.239441 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 4 20:28:00.239461 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 4 20:28:00.239474 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 4 20:28:00.239488 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 20:28:00.239502 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 4 20:28:00.239514 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 4 20:28:00.239532 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 4 20:28:00.239545 systemd[1]: Mounting media.mount - External Media Directory... Sep 4 20:28:00.239558 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 20:28:00.239576 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 4 20:28:00.239588 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 4 20:28:00.239602 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 4 20:28:00.239616 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 4 20:28:00.239629 systemd[1]: Reached target machines.target - Containers. Sep 4 20:28:00.239643 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 4 20:28:00.239656 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 20:28:00.239668 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 4 20:28:00.239685 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 4 20:28:00.239699 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 20:28:00.239711 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 4 20:28:00.239724 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 20:28:00.239737 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 4 20:28:00.239803 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 20:28:00.239827 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 4 20:28:00.239850 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 4 20:28:00.241848 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 4 20:28:00.241875 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 4 20:28:00.241889 systemd[1]: Stopped systemd-fsck-usr.service. Sep 4 20:28:00.241902 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 4 20:28:00.241915 kernel: loop: module loaded Sep 4 20:28:00.241933 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 4 20:28:00.241946 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Sep 4 20:28:00.241962 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 4 20:28:00.241975 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 4 20:28:00.241996 systemd[1]: verity-setup.service: Deactivated successfully. Sep 4 20:28:00.242015 systemd[1]: Stopped verity-setup.service. Sep 4 20:28:00.242029 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 20:28:00.242045 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 4 20:28:00.242058 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 4 20:28:00.242071 systemd[1]: Mounted media.mount - External Media Directory. Sep 4 20:28:00.242084 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 4 20:28:00.242101 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 4 20:28:00.242115 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 4 20:28:00.242177 systemd-journald[1106]: Collecting audit messages is disabled. Sep 4 20:28:00.242212 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 20:28:00.242231 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 4 20:28:00.242245 systemd-journald[1106]: Journal started Sep 4 20:28:00.242280 systemd-journald[1106]: Runtime Journal (/run/log/journal/d5eae7c9d6714d9fa77148bc8a835cee) is 4.9M, max 39.3M, 34.4M free. Sep 4 20:27:59.886045 systemd[1]: Queued start job for default target multi-user.target. Sep 4 20:27:59.906885 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 4 20:27:59.907577 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 4 20:28:00.245810 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 4 20:28:00.249786 systemd[1]: Started systemd-journald.service - Journal Service. Sep 4 20:28:00.259770 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 20:28:00.260286 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 20:28:00.262665 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 20:28:00.263650 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 20:28:00.266125 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 20:28:00.266389 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 20:28:00.270272 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 4 20:28:00.271637 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 4 20:28:00.308787 kernel: ACPI: bus type drm_connector registered Sep 4 20:28:00.310129 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 4 20:28:00.310383 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 4 20:28:00.322433 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 4 20:28:00.329797 kernel: fuse: init (API version 7.39) Sep 4 20:28:00.330962 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 4 20:28:00.331618 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 4 20:28:00.346229 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Sep 4 20:28:00.362133 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 4 20:28:00.367465 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 4 20:28:00.372257 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 4 20:28:00.372549 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 4 20:28:00.380349 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 4 20:28:00.381873 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 4 20:28:00.396999 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 4 20:28:00.397807 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 4 20:28:00.397896 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 4 20:28:00.404296 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 4 20:28:00.418066 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 4 20:28:00.429269 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 4 20:28:00.430178 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 20:28:00.442187 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 4 20:28:00.452169 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 4 20:28:00.453970 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 4 20:28:00.466083 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 4 20:28:00.470989 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 4 20:28:00.475723 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 4 20:28:00.483230 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 4 20:28:00.493474 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 4 20:28:00.528987 systemd-journald[1106]: Time spent on flushing to /var/log/journal/d5eae7c9d6714d9fa77148bc8a835cee is 161.793ms for 987 entries. Sep 4 20:28:00.528987 systemd-journald[1106]: System Journal (/var/log/journal/d5eae7c9d6714d9fa77148bc8a835cee) is 8.0M, max 195.6M, 187.6M free. Sep 4 20:28:00.747869 systemd-journald[1106]: Received client request to flush runtime journal. Sep 4 20:28:00.748053 kernel: loop0: detected capacity change from 0 to 8 Sep 4 20:28:00.748089 kernel: block loop0: the capability attribute has been deprecated. Sep 4 20:28:00.754281 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 4 20:28:00.754428 kernel: loop1: detected capacity change from 0 to 139904 Sep 4 20:28:00.754448 kernel: loop2: detected capacity change from 0 to 80568 Sep 4 20:28:00.540933 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 4 20:28:00.541673 systemd-tmpfiles[1145]: ACLs are not supported, ignoring. Sep 4 20:28:00.541698 systemd-tmpfiles[1145]: ACLs are not supported, ignoring. Sep 4 20:28:00.547001 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. 
Sep 4 20:28:00.565143 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 4 20:28:00.572872 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 4 20:28:00.589169 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 4 20:28:00.647288 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 20:28:00.660270 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 4 20:28:00.670231 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 4 20:28:00.675127 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 4 20:28:00.760002 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 4 20:28:00.764919 udevadm[1169]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 4 20:28:00.791937 kernel: loop3: detected capacity change from 0 to 210664 Sep 4 20:28:00.803208 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 4 20:28:00.822274 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 4 20:28:00.867567 kernel: loop4: detected capacity change from 0 to 8 Sep 4 20:28:00.883424 kernel: loop5: detected capacity change from 0 to 139904 Sep 4 20:28:00.914788 kernel: loop6: detected capacity change from 0 to 80568 Sep 4 20:28:00.918145 systemd-tmpfiles[1177]: ACLs are not supported, ignoring. Sep 4 20:28:00.918624 systemd-tmpfiles[1177]: ACLs are not supported, ignoring. Sep 4 20:28:00.927452 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 20:28:00.939160 kernel: loop7: detected capacity change from 0 to 210664 Sep 4 20:28:00.968209 (sd-merge)[1179]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. Sep 4 20:28:00.972851 (sd-merge)[1179]: Merged extensions into '/usr'. Sep 4 20:28:00.983128 systemd[1]: Reloading requested from client PID 1158 ('systemd-sysext') (unit systemd-sysext.service)... Sep 4 20:28:00.984291 systemd[1]: Reloading... Sep 4 20:28:01.099951 zram_generator::config[1203]: No configuration found. Sep 4 20:28:01.378438 ldconfig[1153]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 4 20:28:01.494984 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 20:28:01.563661 systemd[1]: Reloading finished in 577 ms. Sep 4 20:28:01.589288 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 4 20:28:01.593199 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 4 20:28:01.607295 systemd[1]: Starting ensure-sysext.service... Sep 4 20:28:01.613495 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Sep 4 20:28:01.624652 systemd[1]: Reloading requested from client PID 1248 ('systemctl') (unit ensure-sysext.service)... Sep 4 20:28:01.624676 systemd[1]: Reloading... Sep 4 20:28:01.705114 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
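Editor's note: the (sd-merge) lines above show systemd-sysext selecting four extension images and merging them into '/usr'. The sketch below only lists candidate images from some of the standard search directories (a subset of the paths systemd-sysext actually scans, which is an assumption here); it does not perform the overlay merge itself.

    #!/usr/bin/env python3
    """Illustrative sketch only: enumerate sysext images/directories that the
    merge step in the log would consider."""
    from pathlib import Path

    SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    def list_extension_images():
        found = []
        for d in SEARCH_DIRS:
            p = Path(d)
            if p.is_dir():
                found.extend(sorted(child.name for child in p.iterdir()))
        return found

    if __name__ == "__main__":
        for name in list_extension_images():
            print(name)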
Sep 4 20:28:01.705714 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 4 20:28:01.708289 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 4 20:28:01.711952 systemd-tmpfiles[1249]: ACLs are not supported, ignoring. Sep 4 20:28:01.712096 systemd-tmpfiles[1249]: ACLs are not supported, ignoring. Sep 4 20:28:01.725518 systemd-tmpfiles[1249]: Detected autofs mount point /boot during canonicalization of boot. Sep 4 20:28:01.725537 systemd-tmpfiles[1249]: Skipping /boot Sep 4 20:28:01.757910 systemd-tmpfiles[1249]: Detected autofs mount point /boot during canonicalization of boot. Sep 4 20:28:01.757930 systemd-tmpfiles[1249]: Skipping /boot Sep 4 20:28:01.805785 zram_generator::config[1276]: No configuration found. Sep 4 20:28:01.985880 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 20:28:02.051400 systemd[1]: Reloading finished in 426 ms. Sep 4 20:28:02.073255 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 4 20:28:02.079614 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Sep 4 20:28:02.104121 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 4 20:28:02.110129 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 4 20:28:02.114225 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 4 20:28:02.126092 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 4 20:28:02.129413 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 20:28:02.140126 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 4 20:28:02.158198 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 4 20:28:02.162381 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 20:28:02.162903 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 20:28:02.167243 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 20:28:02.176253 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 20:28:02.188213 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 20:28:02.189082 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 20:28:02.189269 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 20:28:02.192578 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 20:28:02.195175 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 20:28:02.195412 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Sep 4 20:28:02.195504 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 20:28:02.202148 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 20:28:02.202500 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 20:28:02.208026 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 4 20:28:02.208667 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 20:28:02.208940 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 20:28:02.209851 systemd[1]: Finished ensure-sysext.service. Sep 4 20:28:02.213439 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 4 20:28:02.232110 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 4 20:28:02.233248 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 20:28:02.233485 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 20:28:02.250414 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 4 20:28:02.250656 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 4 20:28:02.265273 augenrules[1349]: No rules Sep 4 20:28:02.270923 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 4 20:28:02.274914 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 4 20:28:02.287227 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 4 20:28:02.297141 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 20:28:02.297468 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 20:28:02.299740 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 4 20:28:02.306207 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 20:28:02.306887 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 20:28:02.309699 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 4 20:28:02.333001 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 4 20:28:02.334960 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 4 20:28:02.340593 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 4 20:28:02.364340 systemd-udevd[1327]: Using default interface naming scheme 'v255'. Sep 4 20:28:02.385918 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 4 20:28:02.412944 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 4 20:28:02.414668 systemd[1]: Reached target time-set.target - System Time Set. Sep 4 20:28:02.423421 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Sep 4 20:28:02.433141 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 4 20:28:02.528622 systemd-resolved[1323]: Positive Trust Anchors: Sep 4 20:28:02.528651 systemd-resolved[1323]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 4 20:28:02.528715 systemd-resolved[1323]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Sep 4 20:28:02.537045 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Sep 4 20:28:02.537502 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 20:28:02.537669 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 20:28:02.540687 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 20:28:02.545340 systemd-resolved[1323]: Using system hostname 'ci-3975.2.1-0-09c0a9ae8e'. Sep 4 20:28:02.551103 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 20:28:02.559162 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 20:28:02.561110 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 20:28:02.561189 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 4 20:28:02.561217 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 20:28:02.561603 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 4 20:28:02.562590 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 4 20:28:02.579810 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1378) Sep 4 20:28:02.627042 kernel: ISO 9660 Extensions: RRIP_1991A Sep 4 20:28:02.633696 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Sep 4 20:28:02.641833 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1379) Sep 4 20:28:02.647688 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 4 20:28:02.672429 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 20:28:02.672698 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 20:28:02.674198 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 4 20:28:02.679534 systemd-networkd[1369]: lo: Link UP Sep 4 20:28:02.681361 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Sep 4 20:28:02.681415 systemd-networkd[1369]: lo: Gained carrier Sep 4 20:28:02.682180 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 20:28:02.684154 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 20:28:02.684321 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 20:28:02.690120 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 4 20:28:02.691132 systemd-networkd[1369]: Enumeration completed Sep 4 20:28:02.691527 systemd-networkd[1369]: eth0: Configuring with /run/systemd/network/10-22:04:fb:80:d0:0e.network. Sep 4 20:28:02.691912 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 4 20:28:02.692614 systemd[1]: Reached target network.target - Network. Sep 4 20:28:02.693612 systemd-networkd[1369]: eth1: Configuring with /run/systemd/network/10-de:c2:f2:8b:2a:fc.network. Sep 4 20:28:02.695110 systemd-networkd[1369]: eth0: Link UP Sep 4 20:28:02.695118 systemd-networkd[1369]: eth0: Gained carrier Sep 4 20:28:02.700306 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 4 20:28:02.701356 systemd-networkd[1369]: eth1: Link UP Sep 4 20:28:02.701365 systemd-networkd[1369]: eth1: Gained carrier Sep 4 20:28:02.708499 systemd-timesyncd[1343]: Network configuration changed, trying to establish connection. Sep 4 20:28:02.708919 systemd-timesyncd[1343]: Network configuration changed, trying to establish connection. Sep 4 20:28:02.808269 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Sep 4 20:28:02.826805 kernel: ACPI: button: Power Button [PWRF] Sep 4 20:28:02.852798 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Sep 4 20:28:02.869433 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 4 20:28:02.881304 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Sep 4 20:28:02.882168 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 4 20:28:02.932536 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 4 20:28:02.963426 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 20:28:02.970798 kernel: mousedev: PS/2 mouse device common for all mice Sep 4 20:28:03.042041 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Sep 4 20:28:03.043786 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Sep 4 20:28:03.060081 kernel: Console: switching to colour dummy device 80x25 Sep 4 20:28:03.060299 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Sep 4 20:28:03.060327 kernel: [drm] features: -context_init Sep 4 20:28:03.060348 kernel: [drm] number of scanouts: 1 Sep 4 20:28:03.060368 kernel: [drm] number of cap sets: 0 Sep 4 20:28:03.060387 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Sep 4 20:28:03.076788 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Sep 4 20:28:03.080318 kernel: Console: switching to colour frame buffer device 128x48 Sep 4 20:28:03.095833 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Sep 4 20:28:03.101319 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 20:28:03.101605 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Sep 4 20:28:03.163674 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 20:28:03.180355 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 20:28:03.181983 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 20:28:03.197289 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 20:28:03.246804 kernel: EDAC MC: Ver: 3.0.0 Sep 4 20:28:03.272701 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 20:28:03.283978 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 4 20:28:03.289238 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 4 20:28:03.309843 lvm[1429]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 4 20:28:03.343590 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 4 20:28:03.346332 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 4 20:28:03.348712 systemd[1]: Reached target sysinit.target - System Initialization. Sep 4 20:28:03.349387 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 4 20:28:03.349592 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 4 20:28:03.350019 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 4 20:28:03.350208 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 4 20:28:03.350296 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 4 20:28:03.350375 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 4 20:28:03.350409 systemd[1]: Reached target paths.target - Path Units. Sep 4 20:28:03.350465 systemd[1]: Reached target timers.target - Timer Units. Sep 4 20:28:03.353902 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 4 20:28:03.357059 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 4 20:28:03.366284 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 4 20:28:03.375197 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 4 20:28:03.378430 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 4 20:28:03.380607 systemd[1]: Reached target sockets.target - Socket Units. Sep 4 20:28:03.382246 systemd[1]: Reached target basic.target - Basic System. Sep 4 20:28:03.383799 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 4 20:28:03.383860 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 4 20:28:03.386958 systemd[1]: Starting containerd.service - containerd container runtime... Sep 4 20:28:03.395128 lvm[1433]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 4 20:28:03.406232 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 4 20:28:03.413014 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 4 20:28:03.419023 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 4 20:28:03.429161 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Sep 4 20:28:03.433948 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 4 20:28:03.441465 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 4 20:28:03.453671 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 4 20:28:03.456966 jq[1437]: false Sep 4 20:28:03.463145 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 4 20:28:03.469041 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 4 20:28:03.483043 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 4 20:28:03.488238 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 4 20:28:03.489090 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 4 20:28:03.496210 systemd[1]: Starting update-engine.service - Update Engine... Sep 4 20:28:03.506369 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 4 20:28:03.511271 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 4 20:28:03.517567 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 4 20:28:03.518907 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 4 20:28:03.559552 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 4 20:28:03.560859 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 4 20:28:03.574868 extend-filesystems[1440]: Found loop4 Sep 4 20:28:03.574868 extend-filesystems[1440]: Found loop5 Sep 4 20:28:03.574868 extend-filesystems[1440]: Found loop6 Sep 4 20:28:03.574868 extend-filesystems[1440]: Found loop7 Sep 4 20:28:03.574868 extend-filesystems[1440]: Found vda Sep 4 20:28:03.574868 extend-filesystems[1440]: Found vda1 Sep 4 20:28:03.574868 extend-filesystems[1440]: Found vda2 Sep 4 20:28:03.574868 extend-filesystems[1440]: Found vda3 Sep 4 20:28:03.574868 extend-filesystems[1440]: Found usr Sep 4 20:28:03.574868 extend-filesystems[1440]: Found vda4 Sep 4 20:28:03.574868 extend-filesystems[1440]: Found vda6 Sep 4 20:28:03.574868 extend-filesystems[1440]: Found vda7 Sep 4 20:28:03.574868 extend-filesystems[1440]: Found vda9 Sep 4 20:28:03.574868 extend-filesystems[1440]: Checking size of /dev/vda9 Sep 4 20:28:03.657671 coreos-metadata[1435]: Sep 04 20:28:03.612 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Sep 4 20:28:03.657671 coreos-metadata[1435]: Sep 04 20:28:03.643 INFO Fetch successful Sep 4 20:28:03.663864 update_engine[1447]: I0904 20:28:03.640728 1447 main.cc:92] Flatcar Update Engine starting Sep 4 20:28:03.636270 dbus-daemon[1436]: [system] SELinux support is enabled Sep 4 20:28:03.664683 tar[1457]: linux-amd64/helm Sep 4 20:28:03.595274 systemd[1]: motdgen.service: Deactivated successfully. Sep 4 20:28:03.674385 update_engine[1447]: I0904 20:28:03.673686 1447 update_check_scheduler.cc:74] Next update check in 6m17s Sep 4 20:28:03.595880 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 4 20:28:03.674538 jq[1449]: true Sep 4 20:28:03.636643 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Sep 4 20:28:03.655897 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 4 20:28:03.655957 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 4 20:28:03.659724 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 4 20:28:03.661109 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Sep 4 20:28:03.661189 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 4 20:28:03.686966 (ntainerd)[1468]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 4 20:28:03.691378 systemd[1]: Started update-engine.service - Update Engine. Sep 4 20:28:03.708049 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 4 20:28:03.724016 extend-filesystems[1440]: Resized partition /dev/vda9 Sep 4 20:28:03.741790 jq[1469]: true Sep 4 20:28:03.742141 extend-filesystems[1478]: resize2fs 1.47.0 (5-Feb-2023) Sep 4 20:28:03.762795 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Sep 4 20:28:03.815430 systemd-logind[1446]: New seat seat0. Sep 4 20:28:03.826662 systemd-logind[1446]: Watching system buttons on /dev/input/event1 (Power Button) Sep 4 20:28:03.826705 systemd-logind[1446]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 4 20:28:03.827086 systemd[1]: Started systemd-logind.service - User Login Management. Sep 4 20:28:03.857535 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 4 20:28:03.857829 systemd-networkd[1369]: eth1: Gained IPv6LL Sep 4 20:28:03.858960 systemd-timesyncd[1343]: Network configuration changed, trying to establish connection. Sep 4 20:28:03.860879 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 4 20:28:03.871871 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 4 20:28:03.878709 systemd[1]: Reached target network-online.target - Network is Online. Sep 4 20:28:03.893773 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 20:28:03.904137 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 4 20:28:03.916937 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1375) Sep 4 20:28:03.997440 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Sep 4 20:28:04.046645 extend-filesystems[1478]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 4 20:28:04.046645 extend-filesystems[1478]: old_desc_blocks = 1, new_desc_blocks = 8 Sep 4 20:28:04.046645 extend-filesystems[1478]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Sep 4 20:28:04.049338 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 4 20:28:04.065632 extend-filesystems[1440]: Resized filesystem in /dev/vda9 Sep 4 20:28:04.065632 extend-filesystems[1440]: Found vdb Sep 4 20:28:04.049673 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Sep 4 20:28:04.083024 bash[1504]: Updated "/home/core/.ssh/authorized_keys" Sep 4 20:28:04.076475 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 4 20:28:04.094215 systemd[1]: Starting sshkeys.service... Sep 4 20:28:04.149639 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 4 20:28:04.160022 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Sep 4 20:28:04.176358 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 4 20:28:04.191431 locksmithd[1475]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 4 20:28:04.242672 systemd-networkd[1369]: eth0: Gained IPv6LL Sep 4 20:28:04.245182 systemd-timesyncd[1343]: Network configuration changed, trying to establish connection. Sep 4 20:28:04.347361 coreos-metadata[1521]: Sep 04 20:28:04.346 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Sep 4 20:28:04.359971 sshd_keygen[1470]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 4 20:28:04.367577 coreos-metadata[1521]: Sep 04 20:28:04.367 INFO Fetch successful Sep 4 20:28:04.392838 unknown[1521]: wrote ssh authorized keys file for user: core Sep 4 20:28:04.452568 update-ssh-keys[1533]: Updated "/home/core/.ssh/authorized_keys" Sep 4 20:28:04.453453 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 4 20:28:04.468742 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 4 20:28:04.472105 systemd[1]: Finished sshkeys.service. Sep 4 20:28:04.491315 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 4 20:28:04.540865 systemd[1]: issuegen.service: Deactivated successfully. Sep 4 20:28:04.542001 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 4 20:28:04.553810 containerd[1468]: time="2024-09-04T20:28:04.552665926Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17 Sep 4 20:28:04.562272 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 4 20:28:04.625742 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 4 20:28:04.637818 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 4 20:28:04.653385 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 4 20:28:04.657448 systemd[1]: Reached target getty.target - Login Prompts. Sep 4 20:28:04.669441 containerd[1468]: time="2024-09-04T20:28:04.668478406Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 4 20:28:04.669441 containerd[1468]: time="2024-09-04T20:28:04.668553407Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 4 20:28:04.676182 containerd[1468]: time="2024-09-04T20:28:04.676110059Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.48-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 4 20:28:04.677850 containerd[1468]: time="2024-09-04T20:28:04.676348891Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Sep 4 20:28:04.679790 containerd[1468]: time="2024-09-04T20:28:04.678292727Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 20:28:04.679790 containerd[1468]: time="2024-09-04T20:28:04.678369208Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 4 20:28:04.679790 containerd[1468]: time="2024-09-04T20:28:04.678561834Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 4 20:28:04.679790 containerd[1468]: time="2024-09-04T20:28:04.678663840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 20:28:04.679790 containerd[1468]: time="2024-09-04T20:28:04.678691058Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 4 20:28:04.679790 containerd[1468]: time="2024-09-04T20:28:04.678883054Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 4 20:28:04.679790 containerd[1468]: time="2024-09-04T20:28:04.679238206Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 4 20:28:04.679790 containerd[1468]: time="2024-09-04T20:28:04.679269782Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Sep 4 20:28:04.679790 containerd[1468]: time="2024-09-04T20:28:04.679287845Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 4 20:28:04.679790 containerd[1468]: time="2024-09-04T20:28:04.679519513Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 20:28:04.679790 containerd[1468]: time="2024-09-04T20:28:04.679544757Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 4 20:28:04.680372 containerd[1468]: time="2024-09-04T20:28:04.679647417Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Sep 4 20:28:04.680372 containerd[1468]: time="2024-09-04T20:28:04.679671458Z" level=info msg="metadata content store policy set" policy=shared Sep 4 20:28:04.695489 containerd[1468]: time="2024-09-04T20:28:04.694431887Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 4 20:28:04.695489 containerd[1468]: time="2024-09-04T20:28:04.694499241Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 4 20:28:04.695489 containerd[1468]: time="2024-09-04T20:28:04.694514956Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 4 20:28:04.695489 containerd[1468]: time="2024-09-04T20:28:04.694571334Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." 
type=io.containerd.lease.v1 Sep 4 20:28:04.695489 containerd[1468]: time="2024-09-04T20:28:04.694587272Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 4 20:28:04.695489 containerd[1468]: time="2024-09-04T20:28:04.694602705Z" level=info msg="NRI interface is disabled by configuration." Sep 4 20:28:04.695489 containerd[1468]: time="2024-09-04T20:28:04.694647989Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 4 20:28:04.695489 containerd[1468]: time="2024-09-04T20:28:04.695060343Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 4 20:28:04.695489 containerd[1468]: time="2024-09-04T20:28:04.695096862Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 4 20:28:04.695489 containerd[1468]: time="2024-09-04T20:28:04.695120936Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 4 20:28:04.695489 containerd[1468]: time="2024-09-04T20:28:04.695143174Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 4 20:28:04.695489 containerd[1468]: time="2024-09-04T20:28:04.695169285Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 4 20:28:04.695489 containerd[1468]: time="2024-09-04T20:28:04.695202885Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 4 20:28:04.695489 containerd[1468]: time="2024-09-04T20:28:04.695228628Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 4 20:28:04.696340 containerd[1468]: time="2024-09-04T20:28:04.695249934Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 4 20:28:04.696340 containerd[1468]: time="2024-09-04T20:28:04.695275197Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 4 20:28:04.696340 containerd[1468]: time="2024-09-04T20:28:04.695294704Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 4 20:28:04.696340 containerd[1468]: time="2024-09-04T20:28:04.695316663Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 4 20:28:04.696340 containerd[1468]: time="2024-09-04T20:28:04.695329993Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 4 20:28:04.696340 containerd[1468]: time="2024-09-04T20:28:04.695513123Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 4 20:28:04.696340 containerd[1468]: time="2024-09-04T20:28:04.695887921Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 4 20:28:04.696340 containerd[1468]: time="2024-09-04T20:28:04.695926159Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 4 20:28:04.696340 containerd[1468]: time="2024-09-04T20:28:04.695942112Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." 
type=io.containerd.transfer.v1 Sep 4 20:28:04.696340 containerd[1468]: time="2024-09-04T20:28:04.695972936Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 4 20:28:04.696340 containerd[1468]: time="2024-09-04T20:28:04.696065537Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 4 20:28:04.696340 containerd[1468]: time="2024-09-04T20:28:04.696080889Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 4 20:28:04.696340 containerd[1468]: time="2024-09-04T20:28:04.696094880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 4 20:28:04.696340 containerd[1468]: time="2024-09-04T20:28:04.696107418Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 4 20:28:04.696872 containerd[1468]: time="2024-09-04T20:28:04.696122096Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 4 20:28:04.696872 containerd[1468]: time="2024-09-04T20:28:04.696135630Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 4 20:28:04.696872 containerd[1468]: time="2024-09-04T20:28:04.696151108Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 4 20:28:04.696872 containerd[1468]: time="2024-09-04T20:28:04.696170485Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 4 20:28:04.696872 containerd[1468]: time="2024-09-04T20:28:04.696186279Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 4 20:28:04.696872 containerd[1468]: time="2024-09-04T20:28:04.696349986Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 4 20:28:04.696872 containerd[1468]: time="2024-09-04T20:28:04.696369083Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 4 20:28:04.696872 containerd[1468]: time="2024-09-04T20:28:04.696386017Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 4 20:28:04.696872 containerd[1468]: time="2024-09-04T20:28:04.696402058Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 4 20:28:04.696872 containerd[1468]: time="2024-09-04T20:28:04.696416065Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 4 20:28:04.696872 containerd[1468]: time="2024-09-04T20:28:04.696430798Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 4 20:28:04.696872 containerd[1468]: time="2024-09-04T20:28:04.696446386Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 4 20:28:04.696872 containerd[1468]: time="2024-09-04T20:28:04.696458348Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 4 20:28:04.697366 containerd[1468]: time="2024-09-04T20:28:04.696961227Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 4 20:28:04.697366 containerd[1468]: time="2024-09-04T20:28:04.697044547Z" level=info msg="Connect containerd service" Sep 4 20:28:04.697366 containerd[1468]: time="2024-09-04T20:28:04.697085595Z" level=info msg="using legacy CRI server" Sep 4 20:28:04.697366 containerd[1468]: time="2024-09-04T20:28:04.697093568Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 4 20:28:04.697366 containerd[1468]: time="2024-09-04T20:28:04.697200470Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 4 20:28:04.703116 containerd[1468]: time="2024-09-04T20:28:04.698117665Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 20:28:04.703116 
containerd[1468]: time="2024-09-04T20:28:04.698189027Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 4 20:28:04.703116 containerd[1468]: time="2024-09-04T20:28:04.698314555Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 4 20:28:04.703116 containerd[1468]: time="2024-09-04T20:28:04.698333246Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 4 20:28:04.703116 containerd[1468]: time="2024-09-04T20:28:04.698249819Z" level=info msg="Start subscribing containerd event" Sep 4 20:28:04.703116 containerd[1468]: time="2024-09-04T20:28:04.698420066Z" level=info msg="Start recovering state" Sep 4 20:28:04.703116 containerd[1468]: time="2024-09-04T20:28:04.698500357Z" level=info msg="Start event monitor" Sep 4 20:28:04.703116 containerd[1468]: time="2024-09-04T20:28:04.698526318Z" level=info msg="Start snapshots syncer" Sep 4 20:28:04.703116 containerd[1468]: time="2024-09-04T20:28:04.698541802Z" level=info msg="Start cni network conf syncer for default" Sep 4 20:28:04.703116 containerd[1468]: time="2024-09-04T20:28:04.698553863Z" level=info msg="Start streaming server" Sep 4 20:28:04.703116 containerd[1468]: time="2024-09-04T20:28:04.699042937Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 4 20:28:04.703116 containerd[1468]: time="2024-09-04T20:28:04.699346916Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 4 20:28:04.703116 containerd[1468]: time="2024-09-04T20:28:04.699594404Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 4 20:28:04.703116 containerd[1468]: time="2024-09-04T20:28:04.701527729Z" level=info msg="containerd successfully booted in 0.155254s" Sep 4 20:28:04.699786 systemd[1]: Started containerd.service - containerd container runtime. Sep 4 20:28:05.109912 tar[1457]: linux-amd64/LICENSE Sep 4 20:28:05.109912 tar[1457]: linux-amd64/README.md Sep 4 20:28:05.124987 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 4 20:28:05.535127 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 20:28:05.539479 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 4 20:28:05.540536 (kubelet)[1558]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 20:28:05.541267 systemd[1]: Startup finished in 1.019s (kernel) + 5.465s (initrd) + 6.405s (userspace) = 12.889s. Sep 4 20:28:06.349528 kubelet[1558]: E0904 20:28:06.349388 1558 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 20:28:06.353430 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 20:28:06.353603 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 20:28:06.354073 systemd[1]: kubelet.service: Consumed 1.366s CPU time. Sep 4 20:28:07.428358 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Sep 4 20:28:07.438409 systemd[1]: Started sshd@0-64.23.130.28:22-139.178.68.195:56930.service - OpenSSH per-connection server daemon (139.178.68.195:56930). Sep 4 20:28:07.512469 sshd[1571]: Accepted publickey for core from 139.178.68.195 port 56930 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY Sep 4 20:28:07.516382 sshd[1571]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 20:28:07.531559 systemd-logind[1446]: New session 1 of user core. Sep 4 20:28:07.533100 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 4 20:28:07.554358 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 4 20:28:07.577154 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 4 20:28:07.584292 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 4 20:28:07.600746 (systemd)[1575]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 4 20:28:07.753218 systemd[1575]: Queued start job for default target default.target. Sep 4 20:28:07.764311 systemd[1575]: Created slice app.slice - User Application Slice. Sep 4 20:28:07.764354 systemd[1575]: Reached target paths.target - Paths. Sep 4 20:28:07.764370 systemd[1575]: Reached target timers.target - Timers. Sep 4 20:28:07.766327 systemd[1575]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 4 20:28:07.796974 systemd[1575]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 4 20:28:07.797232 systemd[1575]: Reached target sockets.target - Sockets. Sep 4 20:28:07.797251 systemd[1575]: Reached target basic.target - Basic System. Sep 4 20:28:07.797310 systemd[1575]: Reached target default.target - Main User Target. Sep 4 20:28:07.797350 systemd[1575]: Startup finished in 182ms. Sep 4 20:28:07.797890 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 4 20:28:07.810184 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 4 20:28:07.890302 systemd[1]: Started sshd@1-64.23.130.28:22-139.178.68.195:56932.service - OpenSSH per-connection server daemon (139.178.68.195:56932). Sep 4 20:28:07.943566 sshd[1586]: Accepted publickey for core from 139.178.68.195 port 56932 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY Sep 4 20:28:07.946234 sshd[1586]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 20:28:07.952612 systemd-logind[1446]: New session 2 of user core. Sep 4 20:28:07.963548 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 4 20:28:08.039699 sshd[1586]: pam_unix(sshd:session): session closed for user core Sep 4 20:28:08.048991 systemd[1]: sshd@1-64.23.130.28:22-139.178.68.195:56932.service: Deactivated successfully. Sep 4 20:28:08.051371 systemd[1]: session-2.scope: Deactivated successfully. Sep 4 20:28:08.054226 systemd-logind[1446]: Session 2 logged out. Waiting for processes to exit. Sep 4 20:28:08.063662 systemd[1]: Started sshd@2-64.23.130.28:22-139.178.68.195:56934.service - OpenSSH per-connection server daemon (139.178.68.195:56934). Sep 4 20:28:08.068157 systemd-logind[1446]: Removed session 2. Sep 4 20:28:08.113583 sshd[1593]: Accepted publickey for core from 139.178.68.195 port 56934 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY Sep 4 20:28:08.115620 sshd[1593]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 20:28:08.123344 systemd-logind[1446]: New session 3 of user core. 
Sep 4 20:28:08.133172 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 4 20:28:08.195733 sshd[1593]: pam_unix(sshd:session): session closed for user core Sep 4 20:28:08.207115 systemd[1]: sshd@2-64.23.130.28:22-139.178.68.195:56934.service: Deactivated successfully. Sep 4 20:28:08.210250 systemd[1]: session-3.scope: Deactivated successfully. Sep 4 20:28:08.213005 systemd-logind[1446]: Session 3 logged out. Waiting for processes to exit. Sep 4 20:28:08.224423 systemd[1]: Started sshd@3-64.23.130.28:22-139.178.68.195:56942.service - OpenSSH per-connection server daemon (139.178.68.195:56942). Sep 4 20:28:08.225622 systemd-logind[1446]: Removed session 3. Sep 4 20:28:08.267979 sshd[1600]: Accepted publickey for core from 139.178.68.195 port 56942 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY Sep 4 20:28:08.270248 sshd[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 20:28:08.276934 systemd-logind[1446]: New session 4 of user core. Sep 4 20:28:08.288186 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 4 20:28:08.356185 sshd[1600]: pam_unix(sshd:session): session closed for user core Sep 4 20:28:08.368602 systemd[1]: sshd@3-64.23.130.28:22-139.178.68.195:56942.service: Deactivated successfully. Sep 4 20:28:08.371689 systemd[1]: session-4.scope: Deactivated successfully. Sep 4 20:28:08.376052 systemd-logind[1446]: Session 4 logged out. Waiting for processes to exit. Sep 4 20:28:08.381310 systemd[1]: Started sshd@4-64.23.130.28:22-139.178.68.195:56958.service - OpenSSH per-connection server daemon (139.178.68.195:56958). Sep 4 20:28:08.384702 systemd-logind[1446]: Removed session 4. Sep 4 20:28:08.447032 sshd[1607]: Accepted publickey for core from 139.178.68.195 port 56958 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY Sep 4 20:28:08.449127 sshd[1607]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 20:28:08.457917 systemd-logind[1446]: New session 5 of user core. Sep 4 20:28:08.463219 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 4 20:28:08.541241 sudo[1610]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 4 20:28:08.541681 sudo[1610]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 4 20:28:08.568745 sudo[1610]: pam_unix(sudo:session): session closed for user root Sep 4 20:28:08.574226 sshd[1607]: pam_unix(sshd:session): session closed for user core Sep 4 20:28:08.586727 systemd[1]: sshd@4-64.23.130.28:22-139.178.68.195:56958.service: Deactivated successfully. Sep 4 20:28:08.590477 systemd[1]: session-5.scope: Deactivated successfully. Sep 4 20:28:08.594026 systemd-logind[1446]: Session 5 logged out. Waiting for processes to exit. Sep 4 20:28:08.599364 systemd[1]: Started sshd@5-64.23.130.28:22-139.178.68.195:56962.service - OpenSSH per-connection server daemon (139.178.68.195:56962). Sep 4 20:28:08.602690 systemd-logind[1446]: Removed session 5. Sep 4 20:28:08.648900 sshd[1615]: Accepted publickey for core from 139.178.68.195 port 56962 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY Sep 4 20:28:08.651519 sshd[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 20:28:08.658218 systemd-logind[1446]: New session 6 of user core. Sep 4 20:28:08.666197 systemd[1]: Started session-6.scope - Session 6 of User core. 
Sep 4 20:28:08.729469 sudo[1619]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 4 20:28:08.729817 sudo[1619]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 4 20:28:08.735047 sudo[1619]: pam_unix(sudo:session): session closed for user root Sep 4 20:28:08.742466 sudo[1618]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 4 20:28:08.742844 sudo[1618]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 4 20:28:08.760311 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 4 20:28:08.775514 auditctl[1622]: No rules Sep 4 20:28:08.776179 systemd[1]: audit-rules.service: Deactivated successfully. Sep 4 20:28:08.776484 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 4 20:28:08.783492 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 4 20:28:08.828794 augenrules[1640]: No rules Sep 4 20:28:08.830303 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 4 20:28:08.833731 sudo[1618]: pam_unix(sudo:session): session closed for user root Sep 4 20:28:08.840168 sshd[1615]: pam_unix(sshd:session): session closed for user core Sep 4 20:28:08.853324 systemd[1]: sshd@5-64.23.130.28:22-139.178.68.195:56962.service: Deactivated successfully. Sep 4 20:28:08.856718 systemd[1]: session-6.scope: Deactivated successfully. Sep 4 20:28:08.860019 systemd-logind[1446]: Session 6 logged out. Waiting for processes to exit. Sep 4 20:28:08.866324 systemd[1]: Started sshd@6-64.23.130.28:22-139.178.68.195:56970.service - OpenSSH per-connection server daemon (139.178.68.195:56970). Sep 4 20:28:08.867858 systemd-logind[1446]: Removed session 6. Sep 4 20:28:08.920847 sshd[1648]: Accepted publickey for core from 139.178.68.195 port 56970 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY Sep 4 20:28:08.922961 sshd[1648]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 20:28:08.931533 systemd-logind[1446]: New session 7 of user core. Sep 4 20:28:08.938161 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 4 20:28:08.999899 sudo[1651]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 4 20:28:09.000220 sudo[1651]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 4 20:28:09.184295 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 4 20:28:09.187167 (dockerd)[1661]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 4 20:28:09.605100 dockerd[1661]: time="2024-09-04T20:28:09.604282459Z" level=info msg="Starting up" Sep 4 20:28:09.656568 dockerd[1661]: time="2024-09-04T20:28:09.656302476Z" level=info msg="Loading containers: start." Sep 4 20:28:09.802789 kernel: Initializing XFRM netlink socket Sep 4 20:28:09.836341 systemd-timesyncd[1343]: Network configuration changed, trying to establish connection. Sep 4 20:28:09.841794 systemd-timesyncd[1343]: Network configuration changed, trying to establish connection. Sep 4 20:28:09.850626 systemd-timesyncd[1343]: Network configuration changed, trying to establish connection. Sep 4 20:28:09.923952 systemd-networkd[1369]: docker0: Link UP Sep 4 20:28:09.924323 systemd-timesyncd[1343]: Network configuration changed, trying to establish connection. 
Sep 4 20:28:09.942284 dockerd[1661]: time="2024-09-04T20:28:09.942135839Z" level=info msg="Loading containers: done." Sep 4 20:28:10.034922 dockerd[1661]: time="2024-09-04T20:28:10.034241715Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 4 20:28:10.034922 dockerd[1661]: time="2024-09-04T20:28:10.034579677Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Sep 4 20:28:10.034922 dockerd[1661]: time="2024-09-04T20:28:10.034833041Z" level=info msg="Daemon has completed initialization" Sep 4 20:28:10.036777 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3477761310-merged.mount: Deactivated successfully. Sep 4 20:28:10.069817 dockerd[1661]: time="2024-09-04T20:28:10.069719382Z" level=info msg="API listen on /run/docker.sock" Sep 4 20:28:10.070118 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 4 20:28:11.191197 containerd[1468]: time="2024-09-04T20:28:11.191139048Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.4\"" Sep 4 20:28:11.779040 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2179723449.mount: Deactivated successfully. Sep 4 20:28:13.474942 containerd[1468]: time="2024-09-04T20:28:13.474847267Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 20:28:13.476475 containerd[1468]: time="2024-09-04T20:28:13.476407127Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.4: active requests=0, bytes read=32772416" Sep 4 20:28:13.477450 containerd[1468]: time="2024-09-04T20:28:13.476875645Z" level=info msg="ImageCreate event name:\"sha256:8a97b1fb3e2ebd03bf97ce8ae894b3dc8a68ab1f4ecfd0a284921c45c56f5aa4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 20:28:13.481748 containerd[1468]: time="2024-09-04T20:28:13.481609074Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:7b0c4a959aaee5660e1234452dc3123310231b9f92d29ebd175c86dc9f797ee7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 20:28:13.484365 containerd[1468]: time="2024-09-04T20:28:13.483796264Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.4\" with image id \"sha256:8a97b1fb3e2ebd03bf97ce8ae894b3dc8a68ab1f4ecfd0a284921c45c56f5aa4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:7b0c4a959aaee5660e1234452dc3123310231b9f92d29ebd175c86dc9f797ee7\", size \"32769216\" in 2.290855971s" Sep 4 20:28:13.484365 containerd[1468]: time="2024-09-04T20:28:13.483866732Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.4\" returns image reference \"sha256:8a97b1fb3e2ebd03bf97ce8ae894b3dc8a68ab1f4ecfd0a284921c45c56f5aa4\"" Sep 4 20:28:13.520596 containerd[1468]: time="2024-09-04T20:28:13.520544610Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.4\"" Sep 4 20:28:16.305536 containerd[1468]: time="2024-09-04T20:28:16.305315757Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 20:28:16.307801 containerd[1468]: time="2024-09-04T20:28:16.307679803Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.4: active requests=0, bytes 
read=29594065" Sep 4 20:28:16.308271 containerd[1468]: time="2024-09-04T20:28:16.308208258Z" level=info msg="ImageCreate event name:\"sha256:8398ad49a121d58ecf8a36e8371c0928fdf75eb0a83d28232ab2b39b1c6a9050\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 20:28:16.313467 containerd[1468]: time="2024-09-04T20:28:16.313366802Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:992cccbf652fa951c1a3d41b0c1033ae0bf64f33da03d50395282c551900af9e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 20:28:16.316661 containerd[1468]: time="2024-09-04T20:28:16.316329871Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.4\" with image id \"sha256:8398ad49a121d58ecf8a36e8371c0928fdf75eb0a83d28232ab2b39b1c6a9050\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:992cccbf652fa951c1a3d41b0c1033ae0bf64f33da03d50395282c551900af9e\", size \"31144011\" in 2.795468353s" Sep 4 20:28:16.316661 containerd[1468]: time="2024-09-04T20:28:16.316437556Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.4\" returns image reference \"sha256:8398ad49a121d58ecf8a36e8371c0928fdf75eb0a83d28232ab2b39b1c6a9050\"" Sep 4 20:28:16.369971 containerd[1468]: time="2024-09-04T20:28:16.369908338Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.4\"" Sep 4 20:28:16.596480 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 4 20:28:16.605093 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 20:28:16.857086 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 20:28:16.870835 (kubelet)[1873]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 20:28:16.942701 kubelet[1873]: E0904 20:28:16.942571 1873 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 20:28:16.946326 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 20:28:16.946507 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Sep 4 20:28:17.905422 containerd[1468]: time="2024-09-04T20:28:17.905349521Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 20:28:17.907230 containerd[1468]: time="2024-09-04T20:28:17.907162687Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.4: active requests=0, bytes read=17780233" Sep 4 20:28:17.907961 containerd[1468]: time="2024-09-04T20:28:17.907917722Z" level=info msg="ImageCreate event name:\"sha256:4939f82ab9ab456e782c06ed37b245127c8a9ac29a72982346a7160f18107833\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 20:28:17.912080 containerd[1468]: time="2024-09-04T20:28:17.912005487Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:37eaeee5bca8da34ad3d36e37586dd29f5edb1e2927e7644dfb113e70062bda8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 20:28:17.912985 containerd[1468]: time="2024-09-04T20:28:17.912841257Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.4\" with image id \"sha256:4939f82ab9ab456e782c06ed37b245127c8a9ac29a72982346a7160f18107833\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:37eaeee5bca8da34ad3d36e37586dd29f5edb1e2927e7644dfb113e70062bda8\", size \"19330197\" in 1.542842033s" Sep 4 20:28:17.912985 containerd[1468]: time="2024-09-04T20:28:17.912893517Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.4\" returns image reference \"sha256:4939f82ab9ab456e782c06ed37b245127c8a9ac29a72982346a7160f18107833\"" Sep 4 20:28:17.944292 containerd[1468]: time="2024-09-04T20:28:17.944220491Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.4\"" Sep 4 20:28:19.195429 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount71349815.mount: Deactivated successfully. 
Sep 4 20:28:19.715194 containerd[1468]: time="2024-09-04T20:28:19.715016346Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 20:28:19.718810 containerd[1468]: time="2024-09-04T20:28:19.718293827Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.4: active requests=0, bytes read=29037161" Sep 4 20:28:19.718810 containerd[1468]: time="2024-09-04T20:28:19.718359789Z" level=info msg="ImageCreate event name:\"sha256:568d5ba88d944bcd67415d8c358fce615824410f3a43bab2b353336bc3795a10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 20:28:19.725013 containerd[1468]: time="2024-09-04T20:28:19.724438162Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:33ee1df1ba70e41bf9506d54bb5e64ef5f3ba9fc1b3021aaa4468606a7802acc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 20:28:19.725725 containerd[1468]: time="2024-09-04T20:28:19.725665547Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.4\" with image id \"sha256:568d5ba88d944bcd67415d8c358fce615824410f3a43bab2b353336bc3795a10\", repo tag \"registry.k8s.io/kube-proxy:v1.30.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:33ee1df1ba70e41bf9506d54bb5e64ef5f3ba9fc1b3021aaa4468606a7802acc\", size \"29036180\" in 1.781383875s" Sep 4 20:28:19.725725 containerd[1468]: time="2024-09-04T20:28:19.725725879Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.4\" returns image reference \"sha256:568d5ba88d944bcd67415d8c358fce615824410f3a43bab2b353336bc3795a10\"" Sep 4 20:28:19.764174 containerd[1468]: time="2024-09-04T20:28:19.764123896Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Sep 4 20:28:20.343053 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount588696675.mount: Deactivated successfully. 
Sep 4 20:28:21.272595 containerd[1468]: time="2024-09-04T20:28:21.272515790Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 20:28:21.274162 containerd[1468]: time="2024-09-04T20:28:21.274091663Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Sep 4 20:28:21.276088 containerd[1468]: time="2024-09-04T20:28:21.276005544Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 20:28:21.281793 containerd[1468]: time="2024-09-04T20:28:21.279951105Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 20:28:21.282505 containerd[1468]: time="2024-09-04T20:28:21.282436940Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.518072703s" Sep 4 20:28:21.282712 containerd[1468]: time="2024-09-04T20:28:21.282683295Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Sep 4 20:28:21.332609 containerd[1468]: time="2024-09-04T20:28:21.332562770Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Sep 4 20:28:21.870480 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4008398405.mount: Deactivated successfully. 
Sep 4 20:28:21.883977 containerd[1468]: time="2024-09-04T20:28:21.883888214Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 20:28:21.885671 containerd[1468]: time="2024-09-04T20:28:21.885356876Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Sep 4 20:28:21.886489 containerd[1468]: time="2024-09-04T20:28:21.886415333Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 20:28:21.889375 containerd[1468]: time="2024-09-04T20:28:21.889252191Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 20:28:21.890846 containerd[1468]: time="2024-09-04T20:28:21.890087293Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 557.202111ms" Sep 4 20:28:21.890846 containerd[1468]: time="2024-09-04T20:28:21.890139604Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Sep 4 20:28:21.921404 containerd[1468]: time="2024-09-04T20:28:21.921348252Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Sep 4 20:28:22.455398 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1899806074.mount: Deactivated successfully. Sep 4 20:28:24.421582 containerd[1468]: time="2024-09-04T20:28:24.421480154Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 20:28:24.424123 containerd[1468]: time="2024-09-04T20:28:24.424016945Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Sep 4 20:28:24.424975 containerd[1468]: time="2024-09-04T20:28:24.424858004Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 20:28:24.428647 containerd[1468]: time="2024-09-04T20:28:24.428589637Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 20:28:24.431781 containerd[1468]: time="2024-09-04T20:28:24.431684646Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.510283375s" Sep 4 20:28:24.431781 containerd[1468]: time="2024-09-04T20:28:24.431740768Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Sep 4 20:28:27.094612 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Sep 4 20:28:27.102096 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 20:28:27.236236 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 20:28:27.255544 (kubelet)[2068]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 20:28:27.320211 kubelet[2068]: E0904 20:28:27.320148 2068 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 20:28:27.324850 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 20:28:27.325102 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 20:28:27.615482 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 20:28:27.622133 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 20:28:27.653389 systemd[1]: Reloading requested from client PID 2082 ('systemctl') (unit session-7.scope)... Sep 4 20:28:27.653422 systemd[1]: Reloading... Sep 4 20:28:27.802805 zram_generator::config[2122]: No configuration found. Sep 4 20:28:27.934050 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 20:28:28.020324 systemd[1]: Reloading finished in 366 ms. Sep 4 20:28:28.080409 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 20:28:28.084692 systemd[1]: kubelet.service: Deactivated successfully. Sep 4 20:28:28.085038 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 20:28:28.093328 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 20:28:28.227049 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 20:28:28.245539 (kubelet)[2175]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 20:28:28.303789 kubelet[2175]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 20:28:28.303789 kubelet[2175]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 4 20:28:28.303789 kubelet[2175]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 4 20:28:28.303789 kubelet[2175]: I0904 20:28:28.303670 2175 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 20:28:28.468358 kubelet[2175]: I0904 20:28:28.467788 2175 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Sep 4 20:28:28.468358 kubelet[2175]: I0904 20:28:28.467849 2175 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 20:28:28.468358 kubelet[2175]: I0904 20:28:28.468216 2175 server.go:927] "Client rotation is on, will bootstrap in background" Sep 4 20:28:28.490521 kubelet[2175]: I0904 20:28:28.490096 2175 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 20:28:28.491431 kubelet[2175]: E0904 20:28:28.491404 2175 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://64.23.130.28:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 64.23.130.28:6443: connect: connection refused Sep 4 20:28:28.503050 kubelet[2175]: I0904 20:28:28.502997 2175 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 4 20:28:28.504524 kubelet[2175]: I0904 20:28:28.504426 2175 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 20:28:28.504855 kubelet[2175]: I0904 20:28:28.504513 2175 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3975.2.1-0-09c0a9ae8e","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Sep 4 20:28:28.505538 kubelet[2175]: I0904 20:28:28.505474 2175 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 20:28:28.505538 kubelet[2175]: I0904 20:28:28.505520 2175 container_manager_linux.go:301] "Creating device plugin manager" Sep 4 20:28:28.505766 kubelet[2175]: I0904 20:28:28.505704 2175 state_mem.go:36] "Initialized new in-memory state store" 
Sep 4 20:28:28.506701 kubelet[2175]: I0904 20:28:28.506654 2175 kubelet.go:400] "Attempting to sync node with API server" Sep 4 20:28:28.507604 kubelet[2175]: I0904 20:28:28.507307 2175 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 20:28:28.507604 kubelet[2175]: I0904 20:28:28.507352 2175 kubelet.go:312] "Adding apiserver pod source" Sep 4 20:28:28.507604 kubelet[2175]: I0904 20:28:28.507382 2175 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 20:28:28.507604 kubelet[2175]: W0904 20:28:28.507421 2175 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://64.23.130.28:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.2.1-0-09c0a9ae8e&limit=500&resourceVersion=0": dial tcp 64.23.130.28:6443: connect: connection refused Sep 4 20:28:28.507604 kubelet[2175]: E0904 20:28:28.507484 2175 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://64.23.130.28:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.2.1-0-09c0a9ae8e&limit=500&resourceVersion=0": dial tcp 64.23.130.28:6443: connect: connection refused Sep 4 20:28:28.517366 kubelet[2175]: W0904 20:28:28.517176 2175 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://64.23.130.28:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 64.23.130.28:6443: connect: connection refused Sep 4 20:28:28.517366 kubelet[2175]: E0904 20:28:28.517251 2175 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://64.23.130.28:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 64.23.130.28:6443: connect: connection refused Sep 4 20:28:28.517366 kubelet[2175]: I0904 20:28:28.517357 2175 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Sep 4 20:28:28.519229 kubelet[2175]: I0904 20:28:28.519191 2175 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 4 20:28:28.519354 kubelet[2175]: W0904 20:28:28.519289 2175 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Sep 4 20:28:28.523789 kubelet[2175]: I0904 20:28:28.522486 2175 server.go:1264] "Started kubelet" Sep 4 20:28:28.523789 kubelet[2175]: I0904 20:28:28.522590 2175 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 20:28:28.523789 kubelet[2175]: I0904 20:28:28.523091 2175 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 20:28:28.525333 kubelet[2175]: I0904 20:28:28.525311 2175 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 20:28:28.527819 kubelet[2175]: I0904 20:28:28.527742 2175 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 20:28:28.528839 kubelet[2175]: I0904 20:28:28.528772 2175 server.go:455] "Adding debug handlers to kubelet server" Sep 4 20:28:28.529844 kubelet[2175]: E0904 20:28:28.529679 2175 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://64.23.130.28:6443/api/v1/namespaces/default/events\": dial tcp 64.23.130.28:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3975.2.1-0-09c0a9ae8e.17f22478ced30180 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3975.2.1-0-09c0a9ae8e,UID:ci-3975.2.1-0-09c0a9ae8e,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3975.2.1-0-09c0a9ae8e,},FirstTimestamp:2024-09-04 20:28:28.520931712 +0000 UTC m=+0.270291661,LastTimestamp:2024-09-04 20:28:28.520931712 +0000 UTC m=+0.270291661,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3975.2.1-0-09c0a9ae8e,}" Sep 4 20:28:28.533061 kubelet[2175]: I0904 20:28:28.532216 2175 volume_manager.go:291] "Starting Kubelet Volume Manager" Sep 4 20:28:28.533061 kubelet[2175]: I0904 20:28:28.532322 2175 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Sep 4 20:28:28.533061 kubelet[2175]: I0904 20:28:28.532398 2175 reconciler.go:26] "Reconciler: start to sync state" Sep 4 20:28:28.533061 kubelet[2175]: W0904 20:28:28.532785 2175 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://64.23.130.28:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.23.130.28:6443: connect: connection refused Sep 4 20:28:28.533061 kubelet[2175]: E0904 20:28:28.532829 2175 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://64.23.130.28:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.23.130.28:6443: connect: connection refused Sep 4 20:28:28.533061 kubelet[2175]: E0904 20:28:28.533039 2175 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.130.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.2.1-0-09c0a9ae8e?timeout=10s\": dial tcp 64.23.130.28:6443: connect: connection refused" interval="200ms" Sep 4 20:28:28.539007 kubelet[2175]: I0904 20:28:28.538965 2175 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 20:28:28.541328 kubelet[2175]: I0904 20:28:28.541296 2175 factory.go:221] Registration of the containerd container factory successfully Sep 4 20:28:28.541328 
kubelet[2175]: I0904 20:28:28.541316 2175 factory.go:221] Registration of the systemd container factory successfully Sep 4 20:28:28.556973 kubelet[2175]: I0904 20:28:28.556653 2175 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 20:28:28.558948 kubelet[2175]: I0904 20:28:28.558848 2175 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 4 20:28:28.568900 kubelet[2175]: I0904 20:28:28.565494 2175 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 4 20:28:28.568900 kubelet[2175]: I0904 20:28:28.565518 2175 kubelet.go:2337] "Starting kubelet main sync loop" Sep 4 20:28:28.568900 kubelet[2175]: E0904 20:28:28.565574 2175 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 20:28:28.568900 kubelet[2175]: E0904 20:28:28.565433 2175 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 20:28:28.571930 kubelet[2175]: W0904 20:28:28.571417 2175 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://64.23.130.28:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.23.130.28:6443: connect: connection refused Sep 4 20:28:28.571930 kubelet[2175]: E0904 20:28:28.571582 2175 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://64.23.130.28:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.23.130.28:6443: connect: connection refused Sep 4 20:28:28.578208 kubelet[2175]: I0904 20:28:28.578174 2175 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 4 20:28:28.578208 kubelet[2175]: I0904 20:28:28.578195 2175 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 4 20:28:28.578208 kubelet[2175]: I0904 20:28:28.578219 2175 state_mem.go:36] "Initialized new in-memory state store" Sep 4 20:28:28.580282 kubelet[2175]: I0904 20:28:28.580243 2175 policy_none.go:49] "None policy: Start" Sep 4 20:28:28.581625 kubelet[2175]: I0904 20:28:28.581163 2175 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 4 20:28:28.581625 kubelet[2175]: I0904 20:28:28.581195 2175 state_mem.go:35] "Initializing new in-memory state store" Sep 4 20:28:28.591487 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 4 20:28:28.601574 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 4 20:28:28.606900 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
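The three slices created here back the pod QoS classes: Burstable and BestEffort pods get per-pod slices under kubepods-burstable.slice and kubepods-besteffort.slice, while Guaranteed pods sit directly under kubepods.slice. The sketch below is a simplified, unofficial take on how the class is derived from container resources; it ignores init containers and other corner cases handled upstream.

```python
# Simplified sketch of Kubernetes QoS classification, matching the slice layout
# created above (kubepods-burstable.slice, kubepods-besteffort.slice).
# The real kubelet logic also covers init containers and other edge cases.

def qos_class(containers):
    requests = [c.get("requests", {}) for c in containers]
    limits = [c.get("limits", {}) for c in containers]
    if all(not r and not l for r, l in zip(requests, limits)):
        return "BestEffort"
    guaranteed = all(
        r and l and r == l and set(r) == {"cpu", "memory"}
        for r, l in zip(requests, limits)
    )
    return "Guaranteed" if guaranteed else "Burstable"

print(qos_class([{}]))                                   # BestEffort
print(qos_class([{"requests": {"cpu": "100m"}}]))        # Burstable
print(qos_class([{"requests": {"cpu": "1", "memory": "1Gi"},
                  "limits":   {"cpu": "1", "memory": "1Gi"}}]))  # Guaranteed
```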
Sep 4 20:28:28.617121 kubelet[2175]: I0904 20:28:28.615113 2175 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 20:28:28.617121 kubelet[2175]: I0904 20:28:28.615334 2175 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 4 20:28:28.617121 kubelet[2175]: I0904 20:28:28.615467 2175 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 20:28:28.619121 kubelet[2175]: E0904 20:28:28.618563 2175 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3975.2.1-0-09c0a9ae8e\" not found" Sep 4 20:28:28.634381 kubelet[2175]: I0904 20:28:28.634337 2175 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.2.1-0-09c0a9ae8e" Sep 4 20:28:28.636000 kubelet[2175]: E0904 20:28:28.635952 2175 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://64.23.130.28:6443/api/v1/nodes\": dial tcp 64.23.130.28:6443: connect: connection refused" node="ci-3975.2.1-0-09c0a9ae8e" Sep 4 20:28:28.666746 kubelet[2175]: I0904 20:28:28.666207 2175 topology_manager.go:215] "Topology Admit Handler" podUID="d7dfdf6af20f66f7f0fc293b0c7dd122" podNamespace="kube-system" podName="kube-scheduler-ci-3975.2.1-0-09c0a9ae8e" Sep 4 20:28:28.667500 kubelet[2175]: I0904 20:28:28.667456 2175 topology_manager.go:215] "Topology Admit Handler" podUID="e98887c8461fd2b8b5feee5679a72c77" podNamespace="kube-system" podName="kube-apiserver-ci-3975.2.1-0-09c0a9ae8e" Sep 4 20:28:28.669082 kubelet[2175]: I0904 20:28:28.668958 2175 topology_manager.go:215] "Topology Admit Handler" podUID="3fa6c7c4e79f78e84cc02bd80e2a5a8d" podNamespace="kube-system" podName="kube-controller-manager-ci-3975.2.1-0-09c0a9ae8e" Sep 4 20:28:28.678698 systemd[1]: Created slice kubepods-burstable-podd7dfdf6af20f66f7f0fc293b0c7dd122.slice - libcontainer container kubepods-burstable-podd7dfdf6af20f66f7f0fc293b0c7dd122.slice. Sep 4 20:28:28.699034 systemd[1]: Created slice kubepods-burstable-pode98887c8461fd2b8b5feee5679a72c77.slice - libcontainer container kubepods-burstable-pode98887c8461fd2b8b5feee5679a72c77.slice. Sep 4 20:28:28.716805 systemd[1]: Created slice kubepods-burstable-pod3fa6c7c4e79f78e84cc02bd80e2a5a8d.slice - libcontainer container kubepods-burstable-pod3fa6c7c4e79f78e84cc02bd80e2a5a8d.slice. 
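The three "Topology Admit Handler" entries are the static pods picked up from /etc/kubernetes/manifests. The kubelet names each one by appending the node name to the manifest's pod name, which is why everything here ends in "-ci-3975.2.1-0-09c0a9ae8e" (and, presumably, why the later warnings.go messages complain about dots in a name used as a hostname). A small sketch of that naming convention, with the node name taken from the log:

```python
# Sketch of the static-pod naming convention visible above: the kubelet appends
# the node name to the name from each manifest in /etc/kubernetes/manifests,
# turning "kube-apiserver" into "kube-apiserver-ci-3975.2.1-0-09c0a9ae8e".
NODE_NAME = "ci-3975.2.1-0-09c0a9ae8e"  # from the log

def static_pod_name(manifest_name: str, node_name: str = NODE_NAME) -> str:
    return f"{manifest_name}-{node_name}"

for base in ("kube-apiserver", "kube-controller-manager", "kube-scheduler"):
    print(static_pod_name(base))
```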
Sep 4 20:28:28.734273 kubelet[2175]: I0904 20:28:28.733909 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e98887c8461fd2b8b5feee5679a72c77-k8s-certs\") pod \"kube-apiserver-ci-3975.2.1-0-09c0a9ae8e\" (UID: \"e98887c8461fd2b8b5feee5679a72c77\") " pod="kube-system/kube-apiserver-ci-3975.2.1-0-09c0a9ae8e" Sep 4 20:28:28.734273 kubelet[2175]: I0904 20:28:28.733956 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3fa6c7c4e79f78e84cc02bd80e2a5a8d-flexvolume-dir\") pod \"kube-controller-manager-ci-3975.2.1-0-09c0a9ae8e\" (UID: \"3fa6c7c4e79f78e84cc02bd80e2a5a8d\") " pod="kube-system/kube-controller-manager-ci-3975.2.1-0-09c0a9ae8e" Sep 4 20:28:28.734273 kubelet[2175]: I0904 20:28:28.733979 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3fa6c7c4e79f78e84cc02bd80e2a5a8d-k8s-certs\") pod \"kube-controller-manager-ci-3975.2.1-0-09c0a9ae8e\" (UID: \"3fa6c7c4e79f78e84cc02bd80e2a5a8d\") " pod="kube-system/kube-controller-manager-ci-3975.2.1-0-09c0a9ae8e" Sep 4 20:28:28.734273 kubelet[2175]: I0904 20:28:28.733994 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3fa6c7c4e79f78e84cc02bd80e2a5a8d-kubeconfig\") pod \"kube-controller-manager-ci-3975.2.1-0-09c0a9ae8e\" (UID: \"3fa6c7c4e79f78e84cc02bd80e2a5a8d\") " pod="kube-system/kube-controller-manager-ci-3975.2.1-0-09c0a9ae8e" Sep 4 20:28:28.734273 kubelet[2175]: I0904 20:28:28.734010 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d7dfdf6af20f66f7f0fc293b0c7dd122-kubeconfig\") pod \"kube-scheduler-ci-3975.2.1-0-09c0a9ae8e\" (UID: \"d7dfdf6af20f66f7f0fc293b0c7dd122\") " pod="kube-system/kube-scheduler-ci-3975.2.1-0-09c0a9ae8e" Sep 4 20:28:28.734629 kubelet[2175]: I0904 20:28:28.734026 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e98887c8461fd2b8b5feee5679a72c77-ca-certs\") pod \"kube-apiserver-ci-3975.2.1-0-09c0a9ae8e\" (UID: \"e98887c8461fd2b8b5feee5679a72c77\") " pod="kube-system/kube-apiserver-ci-3975.2.1-0-09c0a9ae8e" Sep 4 20:28:28.734629 kubelet[2175]: I0904 20:28:28.734044 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e98887c8461fd2b8b5feee5679a72c77-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3975.2.1-0-09c0a9ae8e\" (UID: \"e98887c8461fd2b8b5feee5679a72c77\") " pod="kube-system/kube-apiserver-ci-3975.2.1-0-09c0a9ae8e" Sep 4 20:28:28.734629 kubelet[2175]: I0904 20:28:28.734063 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3fa6c7c4e79f78e84cc02bd80e2a5a8d-ca-certs\") pod \"kube-controller-manager-ci-3975.2.1-0-09c0a9ae8e\" (UID: \"3fa6c7c4e79f78e84cc02bd80e2a5a8d\") " pod="kube-system/kube-controller-manager-ci-3975.2.1-0-09c0a9ae8e" Sep 4 20:28:28.734629 kubelet[2175]: I0904 20:28:28.734081 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3fa6c7c4e79f78e84cc02bd80e2a5a8d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3975.2.1-0-09c0a9ae8e\" (UID: \"3fa6c7c4e79f78e84cc02bd80e2a5a8d\") " pod="kube-system/kube-controller-manager-ci-3975.2.1-0-09c0a9ae8e" Sep 4 20:28:28.734629 kubelet[2175]: E0904 20:28:28.734214 2175 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.130.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.2.1-0-09c0a9ae8e?timeout=10s\": dial tcp 64.23.130.28:6443: connect: connection refused" interval="400ms" Sep 4 20:28:28.838344 kubelet[2175]: I0904 20:28:28.838162 2175 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.2.1-0-09c0a9ae8e" Sep 4 20:28:28.838866 kubelet[2175]: E0904 20:28:28.838723 2175 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://64.23.130.28:6443/api/v1/nodes\": dial tcp 64.23.130.28:6443: connect: connection refused" node="ci-3975.2.1-0-09c0a9ae8e" Sep 4 20:28:28.995610 kubelet[2175]: E0904 20:28:28.995102 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 4 20:28:28.998627 containerd[1468]: time="2024-09-04T20:28:28.998215923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3975.2.1-0-09c0a9ae8e,Uid:d7dfdf6af20f66f7f0fc293b0c7dd122,Namespace:kube-system,Attempt:0,}" Sep 4 20:28:29.004335 kubelet[2175]: E0904 20:28:29.003216 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 4 20:28:29.009584 containerd[1468]: time="2024-09-04T20:28:29.009518854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3975.2.1-0-09c0a9ae8e,Uid:e98887c8461fd2b8b5feee5679a72c77,Namespace:kube-system,Attempt:0,}" Sep 4 20:28:29.023132 kubelet[2175]: E0904 20:28:29.022722 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 4 20:28:29.023707 containerd[1468]: time="2024-09-04T20:28:29.023670947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3975.2.1-0-09c0a9ae8e,Uid:3fa6c7c4e79f78e84cc02bd80e2a5a8d,Namespace:kube-system,Attempt:0,}" Sep 4 20:28:29.135854 kubelet[2175]: E0904 20:28:29.135716 2175 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.130.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.2.1-0-09c0a9ae8e?timeout=10s\": dial tcp 64.23.130.28:6443: connect: connection refused" interval="800ms" Sep 4 20:28:29.240086 kubelet[2175]: I0904 20:28:29.240033 2175 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.2.1-0-09c0a9ae8e" Sep 4 20:28:29.240562 kubelet[2175]: E0904 20:28:29.240505 2175 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://64.23.130.28:6443/api/v1/nodes\": dial tcp 64.23.130.28:6443: connect: connection refused" node="ci-3975.2.1-0-09c0a9ae8e" Sep 4 20:28:29.521278 kubelet[2175]: W0904 20:28:29.521046 2175 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://64.23.130.28:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 64.23.130.28:6443: connect: connection refused Sep 4 20:28:29.521278 kubelet[2175]: E0904 20:28:29.521131 2175 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://64.23.130.28:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 64.23.130.28:6443: connect: connection refused Sep 4 20:28:29.582119 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2284616307.mount: Deactivated successfully. Sep 4 20:28:29.588781 containerd[1468]: time="2024-09-04T20:28:29.588699650Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 20:28:29.590634 containerd[1468]: time="2024-09-04T20:28:29.590308887Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 20:28:29.591353 containerd[1468]: time="2024-09-04T20:28:29.591292286Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 20:28:29.592455 containerd[1468]: time="2024-09-04T20:28:29.592407039Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 20:28:29.593292 containerd[1468]: time="2024-09-04T20:28:29.593151777Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Sep 4 20:28:29.594160 containerd[1468]: time="2024-09-04T20:28:29.593974482Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 20:28:29.594160 containerd[1468]: time="2024-09-04T20:28:29.594107291Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 20:28:29.601334 containerd[1468]: time="2024-09-04T20:28:29.599862764Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 20:28:29.601334 containerd[1468]: time="2024-09-04T20:28:29.601124133Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 602.739529ms" Sep 4 20:28:29.603473 containerd[1468]: time="2024-09-04T20:28:29.603414077Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 579.497354ms" Sep 4 20:28:29.607843 containerd[1468]: time="2024-09-04T20:28:29.607742439Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 598.074765ms" Sep 4 20:28:29.658651 kubelet[2175]: W0904 20:28:29.658568 2175 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://64.23.130.28:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.2.1-0-09c0a9ae8e&limit=500&resourceVersion=0": dial tcp 64.23.130.28:6443: connect: connection refused Sep 4 20:28:29.659029 kubelet[2175]: E0904 20:28:29.659007 2175 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://64.23.130.28:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.2.1-0-09c0a9ae8e&limit=500&resourceVersion=0": dial tcp 64.23.130.28:6443: connect: connection refused Sep 4 20:28:29.801001 containerd[1468]: time="2024-09-04T20:28:29.800588077Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 20:28:29.801001 containerd[1468]: time="2024-09-04T20:28:29.800677330Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 20:28:29.801001 containerd[1468]: time="2024-09-04T20:28:29.800706897Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 20:28:29.801001 containerd[1468]: time="2024-09-04T20:28:29.800728967Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 20:28:29.817794 containerd[1468]: time="2024-09-04T20:28:29.817410381Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 20:28:29.820135 containerd[1468]: time="2024-09-04T20:28:29.819059029Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 20:28:29.820135 containerd[1468]: time="2024-09-04T20:28:29.819126462Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 20:28:29.820135 containerd[1468]: time="2024-09-04T20:28:29.819149188Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 20:28:29.826665 containerd[1468]: time="2024-09-04T20:28:29.826131945Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 20:28:29.826665 containerd[1468]: time="2024-09-04T20:28:29.826235694Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 20:28:29.826665 containerd[1468]: time="2024-09-04T20:28:29.826265435Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 20:28:29.826665 containerd[1468]: time="2024-09-04T20:28:29.826286432Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 20:28:29.844122 systemd[1]: Started cri-containerd-120647a40e5682950b36b4eaf62319729383e29222607dc49342138096eae7c5.scope - libcontainer container 120647a40e5682950b36b4eaf62319729383e29222607dc49342138096eae7c5. Sep 4 20:28:29.876272 systemd[1]: Started cri-containerd-9c7b2fb6c4ca6ff8b16974db0866923a3b630d7d50c21181321c6b1a262a13a0.scope - libcontainer container 9c7b2fb6c4ca6ff8b16974db0866923a3b630d7d50c21181321c6b1a262a13a0. Sep 4 20:28:29.897065 systemd[1]: Started cri-containerd-945bc1c27f269f80fce25e1d73e45a6bb36261a57b782da4d31346acbb83f72d.scope - libcontainer container 945bc1c27f269f80fce25e1d73e45a6bb36261a57b782da4d31346acbb83f72d. Sep 4 20:28:29.919351 kubelet[2175]: W0904 20:28:29.919130 2175 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://64.23.130.28:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.23.130.28:6443: connect: connection refused Sep 4 20:28:29.919711 kubelet[2175]: E0904 20:28:29.919667 2175 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://64.23.130.28:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.23.130.28:6443: connect: connection refused Sep 4 20:28:29.938566 kubelet[2175]: E0904 20:28:29.938452 2175 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.130.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.2.1-0-09c0a9ae8e?timeout=10s\": dial tcp 64.23.130.28:6443: connect: connection refused" interval="1.6s" Sep 4 20:28:29.957166 containerd[1468]: time="2024-09-04T20:28:29.956895796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3975.2.1-0-09c0a9ae8e,Uid:d7dfdf6af20f66f7f0fc293b0c7dd122,Namespace:kube-system,Attempt:0,} returns sandbox id \"120647a40e5682950b36b4eaf62319729383e29222607dc49342138096eae7c5\"" Sep 4 20:28:29.974819 kubelet[2175]: E0904 20:28:29.974412 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 4 20:28:29.985913 containerd[1468]: time="2024-09-04T20:28:29.985834457Z" level=info msg="CreateContainer within sandbox \"120647a40e5682950b36b4eaf62319729383e29222607dc49342138096eae7c5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 4 20:28:29.996786 kubelet[2175]: W0904 20:28:29.996375 2175 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://64.23.130.28:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.23.130.28:6443: connect: connection refused Sep 4 20:28:29.996786 kubelet[2175]: E0904 20:28:29.996474 2175 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://64.23.130.28:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.23.130.28:6443: connect: connection refused Sep 4 20:28:30.000147 containerd[1468]: time="2024-09-04T20:28:30.000089625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3975.2.1-0-09c0a9ae8e,Uid:e98887c8461fd2b8b5feee5679a72c77,Namespace:kube-system,Attempt:0,} returns sandbox id \"9c7b2fb6c4ca6ff8b16974db0866923a3b630d7d50c21181321c6b1a262a13a0\"" Sep 4 20:28:30.003068 
kubelet[2175]: E0904 20:28:30.003021 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 4 20:28:30.009623 containerd[1468]: time="2024-09-04T20:28:30.009421719Z" level=info msg="CreateContainer within sandbox \"9c7b2fb6c4ca6ff8b16974db0866923a3b630d7d50c21181321c6b1a262a13a0\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 4 20:28:30.013554 containerd[1468]: time="2024-09-04T20:28:30.013495313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3975.2.1-0-09c0a9ae8e,Uid:3fa6c7c4e79f78e84cc02bd80e2a5a8d,Namespace:kube-system,Attempt:0,} returns sandbox id \"945bc1c27f269f80fce25e1d73e45a6bb36261a57b782da4d31346acbb83f72d\"" Sep 4 20:28:30.018810 kubelet[2175]: E0904 20:28:30.018483 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 4 20:28:30.023129 containerd[1468]: time="2024-09-04T20:28:30.023061238Z" level=info msg="CreateContainer within sandbox \"945bc1c27f269f80fce25e1d73e45a6bb36261a57b782da4d31346acbb83f72d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 4 20:28:30.023999 containerd[1468]: time="2024-09-04T20:28:30.023964490Z" level=info msg="CreateContainer within sandbox \"120647a40e5682950b36b4eaf62319729383e29222607dc49342138096eae7c5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6ca9ac7f22ce55c0a18023ca5a56e6bfb1ee60bda22ea6adfe3e8fa3d2b6a40a\"" Sep 4 20:28:30.025849 containerd[1468]: time="2024-09-04T20:28:30.025398414Z" level=info msg="StartContainer for \"6ca9ac7f22ce55c0a18023ca5a56e6bfb1ee60bda22ea6adfe3e8fa3d2b6a40a\"" Sep 4 20:28:30.033994 containerd[1468]: time="2024-09-04T20:28:30.033925976Z" level=info msg="CreateContainer within sandbox \"9c7b2fb6c4ca6ff8b16974db0866923a3b630d7d50c21181321c6b1a262a13a0\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4a4f25293b013d542060c70e5052eba8d46499348406ad90d41d1a930f0c1cce\"" Sep 4 20:28:30.034908 containerd[1468]: time="2024-09-04T20:28:30.034584250Z" level=info msg="StartContainer for \"4a4f25293b013d542060c70e5052eba8d46499348406ad90d41d1a930f0c1cce\"" Sep 4 20:28:30.042357 kubelet[2175]: I0904 20:28:30.042321 2175 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.2.1-0-09c0a9ae8e" Sep 4 20:28:30.043480 kubelet[2175]: E0904 20:28:30.043279 2175 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://64.23.130.28:6443/api/v1/nodes\": dial tcp 64.23.130.28:6443: connect: connection refused" node="ci-3975.2.1-0-09c0a9ae8e" Sep 4 20:28:30.056453 containerd[1468]: time="2024-09-04T20:28:30.056073535Z" level=info msg="CreateContainer within sandbox \"945bc1c27f269f80fce25e1d73e45a6bb36261a57b782da4d31346acbb83f72d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0fe8a73fb4f6230ea9740695b0e0bca0e604dd5fbc79f5ba640712a14debca11\"" Sep 4 20:28:30.058459 containerd[1468]: time="2024-09-04T20:28:30.057979643Z" level=info msg="StartContainer for \"0fe8a73fb4f6230ea9740695b0e0bca0e604dd5fbc79f5ba640712a14debca11\"" Sep 4 20:28:30.085258 systemd[1]: Started cri-containerd-6ca9ac7f22ce55c0a18023ca5a56e6bfb1ee60bda22ea6adfe3e8fa3d2b6a40a.scope - libcontainer container 
6ca9ac7f22ce55c0a18023ca5a56e6bfb1ee60bda22ea6adfe3e8fa3d2b6a40a. Sep 4 20:28:30.097042 systemd[1]: Started cri-containerd-4a4f25293b013d542060c70e5052eba8d46499348406ad90d41d1a930f0c1cce.scope - libcontainer container 4a4f25293b013d542060c70e5052eba8d46499348406ad90d41d1a930f0c1cce. Sep 4 20:28:30.111129 systemd[1]: Started cri-containerd-0fe8a73fb4f6230ea9740695b0e0bca0e604dd5fbc79f5ba640712a14debca11.scope - libcontainer container 0fe8a73fb4f6230ea9740695b0e0bca0e604dd5fbc79f5ba640712a14debca11. Sep 4 20:28:30.180630 containerd[1468]: time="2024-09-04T20:28:30.180443075Z" level=info msg="StartContainer for \"4a4f25293b013d542060c70e5052eba8d46499348406ad90d41d1a930f0c1cce\" returns successfully" Sep 4 20:28:30.215065 containerd[1468]: time="2024-09-04T20:28:30.214072109Z" level=info msg="StartContainer for \"6ca9ac7f22ce55c0a18023ca5a56e6bfb1ee60bda22ea6adfe3e8fa3d2b6a40a\" returns successfully" Sep 4 20:28:30.225644 containerd[1468]: time="2024-09-04T20:28:30.225491227Z" level=info msg="StartContainer for \"0fe8a73fb4f6230ea9740695b0e0bca0e604dd5fbc79f5ba640712a14debca11\" returns successfully" Sep 4 20:28:30.602811 kubelet[2175]: E0904 20:28:30.602157 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 4 20:28:30.617068 kubelet[2175]: E0904 20:28:30.613546 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 4 20:28:30.620940 kubelet[2175]: E0904 20:28:30.620900 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 4 20:28:31.621803 kubelet[2175]: E0904 20:28:31.620044 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 4 20:28:31.644977 kubelet[2175]: I0904 20:28:31.644936 2175 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.2.1-0-09c0a9ae8e" Sep 4 20:28:32.102760 kubelet[2175]: E0904 20:28:32.102685 2175 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3975.2.1-0-09c0a9ae8e\" not found" node="ci-3975.2.1-0-09c0a9ae8e" Sep 4 20:28:32.173076 kubelet[2175]: I0904 20:28:32.172845 2175 kubelet_node_status.go:76] "Successfully registered node" node="ci-3975.2.1-0-09c0a9ae8e" Sep 4 20:28:32.203434 kubelet[2175]: E0904 20:28:32.203382 2175 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3975.2.1-0-09c0a9ae8e\" not found" Sep 4 20:28:32.303739 kubelet[2175]: E0904 20:28:32.303679 2175 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3975.2.1-0-09c0a9ae8e\" not found" Sep 4 20:28:32.404833 kubelet[2175]: E0904 20:28:32.404275 2175 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3975.2.1-0-09c0a9ae8e\" not found" Sep 4 20:28:32.505197 kubelet[2175]: E0904 20:28:32.505134 2175 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3975.2.1-0-09c0a9ae8e\" not found" Sep 4 20:28:32.606009 kubelet[2175]: E0904 20:28:32.605942 2175 kubelet_node_status.go:462] "Error getting the current node from lister" err="node 
\"ci-3975.2.1-0-09c0a9ae8e\" not found" Sep 4 20:28:33.510278 kubelet[2175]: I0904 20:28:33.510208 2175 apiserver.go:52] "Watching apiserver" Sep 4 20:28:33.532520 kubelet[2175]: I0904 20:28:33.532483 2175 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Sep 4 20:28:34.195235 systemd[1]: Reloading requested from client PID 2447 ('systemctl') (unit session-7.scope)... Sep 4 20:28:34.195254 systemd[1]: Reloading... Sep 4 20:28:34.315797 zram_generator::config[2485]: No configuration found. Sep 4 20:28:34.498890 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 20:28:34.619149 systemd[1]: Reloading finished in 423 ms. Sep 4 20:28:34.672158 kubelet[2175]: E0904 20:28:34.672024 2175 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{ci-3975.2.1-0-09c0a9ae8e.17f22478ced30180 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3975.2.1-0-09c0a9ae8e,UID:ci-3975.2.1-0-09c0a9ae8e,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3975.2.1-0-09c0a9ae8e,},FirstTimestamp:2024-09-04 20:28:28.520931712 +0000 UTC m=+0.270291661,LastTimestamp:2024-09-04 20:28:28.520931712 +0000 UTC m=+0.270291661,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3975.2.1-0-09c0a9ae8e,}" Sep 4 20:28:34.672770 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 20:28:34.689571 systemd[1]: kubelet.service: Deactivated successfully. Sep 4 20:28:34.689878 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 20:28:34.697381 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 20:28:34.863071 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 20:28:34.866043 (kubelet)[2535]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 20:28:34.941184 kubelet[2535]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 20:28:34.943814 kubelet[2535]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 4 20:28:34.943814 kubelet[2535]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 4 20:28:34.943814 kubelet[2535]: I0904 20:28:34.942407 2535 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 20:28:34.948064 kubelet[2535]: I0904 20:28:34.948007 2535 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Sep 4 20:28:34.948064 kubelet[2535]: I0904 20:28:34.948036 2535 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 20:28:34.948296 kubelet[2535]: I0904 20:28:34.948261 2535 server.go:927] "Client rotation is on, will bootstrap in background" Sep 4 20:28:34.949902 kubelet[2535]: I0904 20:28:34.949870 2535 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 4 20:28:34.953873 kubelet[2535]: I0904 20:28:34.953215 2535 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 20:28:34.964737 kubelet[2535]: I0904 20:28:34.964476 2535 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 4 20:28:34.964737 kubelet[2535]: I0904 20:28:34.964803 2535 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 20:28:34.965440 kubelet[2535]: I0904 20:28:34.964844 2535 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3975.2.1-0-09c0a9ae8e","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Sep 4 20:28:34.965440 kubelet[2535]: I0904 20:28:34.965170 2535 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 20:28:34.965440 kubelet[2535]: I0904 20:28:34.965182 2535 container_manager_linux.go:301] "Creating device plugin manager" Sep 4 20:28:34.965440 kubelet[2535]: I0904 20:28:34.965232 2535 state_mem.go:36] "Initialized new in-memory state store" Sep 4 20:28:34.965440 kubelet[2535]: I0904 20:28:34.965346 2535 kubelet.go:400] "Attempting to sync node with API server" Sep 4 20:28:34.966877 kubelet[2535]: I0904 20:28:34.965358 2535 kubelet.go:301] "Adding static pod path" 
path="/etc/kubernetes/manifests" Sep 4 20:28:34.966877 kubelet[2535]: I0904 20:28:34.965383 2535 kubelet.go:312] "Adding apiserver pod source" Sep 4 20:28:34.966877 kubelet[2535]: I0904 20:28:34.965400 2535 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 20:28:34.971774 kubelet[2535]: I0904 20:28:34.970921 2535 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Sep 4 20:28:34.971977 kubelet[2535]: I0904 20:28:34.971957 2535 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 4 20:28:34.973849 kubelet[2535]: I0904 20:28:34.973228 2535 server.go:1264] "Started kubelet" Sep 4 20:28:34.973849 kubelet[2535]: I0904 20:28:34.973441 2535 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 20:28:34.975407 kubelet[2535]: I0904 20:28:34.974199 2535 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 20:28:34.977792 kubelet[2535]: I0904 20:28:34.975967 2535 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 20:28:34.977792 kubelet[2535]: I0904 20:28:34.974515 2535 server.go:455] "Adding debug handlers to kubelet server" Sep 4 20:28:34.977792 kubelet[2535]: I0904 20:28:34.976019 2535 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 20:28:34.994802 kubelet[2535]: I0904 20:28:34.994722 2535 volume_manager.go:291] "Starting Kubelet Volume Manager" Sep 4 20:28:34.997701 kubelet[2535]: I0904 20:28:34.997664 2535 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Sep 4 20:28:34.997890 kubelet[2535]: I0904 20:28:34.997860 2535 reconciler.go:26] "Reconciler: start to sync state" Sep 4 20:28:35.002396 kubelet[2535]: I0904 20:28:35.001116 2535 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 20:28:35.002717 kubelet[2535]: I0904 20:28:35.002687 2535 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 4 20:28:35.002824 kubelet[2535]: I0904 20:28:35.002727 2535 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 4 20:28:35.002824 kubelet[2535]: I0904 20:28:35.002748 2535 kubelet.go:2337] "Starting kubelet main sync loop" Sep 4 20:28:35.002941 kubelet[2535]: E0904 20:28:35.002845 2535 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 20:28:35.011686 kubelet[2535]: I0904 20:28:35.011605 2535 factory.go:221] Registration of the systemd container factory successfully Sep 4 20:28:35.012911 kubelet[2535]: I0904 20:28:35.012873 2535 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 20:28:35.017210 kubelet[2535]: E0904 20:28:35.017166 2535 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 20:28:35.018280 kubelet[2535]: I0904 20:28:35.018241 2535 factory.go:221] Registration of the containerd container factory successfully Sep 4 20:28:35.074533 kubelet[2535]: I0904 20:28:35.074504 2535 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 4 20:28:35.075175 kubelet[2535]: I0904 20:28:35.074700 2535 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 4 20:28:35.075175 kubelet[2535]: I0904 20:28:35.074722 2535 state_mem.go:36] "Initialized new in-memory state store" Sep 4 20:28:35.075175 kubelet[2535]: I0904 20:28:35.074940 2535 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 4 20:28:35.075175 kubelet[2535]: I0904 20:28:35.074951 2535 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 4 20:28:35.075175 kubelet[2535]: I0904 20:28:35.074970 2535 policy_none.go:49] "None policy: Start" Sep 4 20:28:35.076301 kubelet[2535]: I0904 20:28:35.075838 2535 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 4 20:28:35.076301 kubelet[2535]: I0904 20:28:35.075861 2535 state_mem.go:35] "Initializing new in-memory state store" Sep 4 20:28:35.076301 kubelet[2535]: I0904 20:28:35.075992 2535 state_mem.go:75] "Updated machine memory state" Sep 4 20:28:35.080835 kubelet[2535]: I0904 20:28:35.080809 2535 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 20:28:35.082349 kubelet[2535]: I0904 20:28:35.082291 2535 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 4 20:28:35.084443 kubelet[2535]: I0904 20:28:35.084350 2535 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 20:28:35.096588 kubelet[2535]: I0904 20:28:35.096172 2535 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.2.1-0-09c0a9ae8e" Sep 4 20:28:35.103146 kubelet[2535]: I0904 20:28:35.103053 2535 topology_manager.go:215] "Topology Admit Handler" podUID="e98887c8461fd2b8b5feee5679a72c77" podNamespace="kube-system" podName="kube-apiserver-ci-3975.2.1-0-09c0a9ae8e" Sep 4 20:28:35.103146 kubelet[2535]: I0904 20:28:35.103175 2535 topology_manager.go:215] "Topology Admit Handler" podUID="3fa6c7c4e79f78e84cc02bd80e2a5a8d" podNamespace="kube-system" podName="kube-controller-manager-ci-3975.2.1-0-09c0a9ae8e" Sep 4 20:28:35.103564 kubelet[2535]: I0904 20:28:35.103227 2535 topology_manager.go:215] "Topology Admit Handler" podUID="d7dfdf6af20f66f7f0fc293b0c7dd122" podNamespace="kube-system" podName="kube-scheduler-ci-3975.2.1-0-09c0a9ae8e" Sep 4 20:28:35.121889 kubelet[2535]: I0904 20:28:35.121218 2535 kubelet_node_status.go:112] "Node was previously registered" node="ci-3975.2.1-0-09c0a9ae8e" Sep 4 20:28:35.124174 kubelet[2535]: I0904 20:28:35.123727 2535 kubelet_node_status.go:76] "Successfully registered node" node="ci-3975.2.1-0-09c0a9ae8e" Sep 4 20:28:35.129784 kubelet[2535]: W0904 20:28:35.129015 2535 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 4 20:28:35.129983 kubelet[2535]: W0904 20:28:35.129745 2535 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 4 20:28:35.129983 kubelet[2535]: W0904 20:28:35.129947 2535 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in 
surprising behavior; a DNS label is recommended: [must not contain dots] Sep 4 20:28:35.199575 kubelet[2535]: I0904 20:28:35.199195 2535 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e98887c8461fd2b8b5feee5679a72c77-ca-certs\") pod \"kube-apiserver-ci-3975.2.1-0-09c0a9ae8e\" (UID: \"e98887c8461fd2b8b5feee5679a72c77\") " pod="kube-system/kube-apiserver-ci-3975.2.1-0-09c0a9ae8e" Sep 4 20:28:35.199575 kubelet[2535]: I0904 20:28:35.199241 2535 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e98887c8461fd2b8b5feee5679a72c77-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3975.2.1-0-09c0a9ae8e\" (UID: \"e98887c8461fd2b8b5feee5679a72c77\") " pod="kube-system/kube-apiserver-ci-3975.2.1-0-09c0a9ae8e" Sep 4 20:28:35.199575 kubelet[2535]: I0904 20:28:35.199271 2535 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3fa6c7c4e79f78e84cc02bd80e2a5a8d-k8s-certs\") pod \"kube-controller-manager-ci-3975.2.1-0-09c0a9ae8e\" (UID: \"3fa6c7c4e79f78e84cc02bd80e2a5a8d\") " pod="kube-system/kube-controller-manager-ci-3975.2.1-0-09c0a9ae8e" Sep 4 20:28:35.199575 kubelet[2535]: I0904 20:28:35.199297 2535 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3fa6c7c4e79f78e84cc02bd80e2a5a8d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3975.2.1-0-09c0a9ae8e\" (UID: \"3fa6c7c4e79f78e84cc02bd80e2a5a8d\") " pod="kube-system/kube-controller-manager-ci-3975.2.1-0-09c0a9ae8e" Sep 4 20:28:35.199575 kubelet[2535]: I0904 20:28:35.199323 2535 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e98887c8461fd2b8b5feee5679a72c77-k8s-certs\") pod \"kube-apiserver-ci-3975.2.1-0-09c0a9ae8e\" (UID: \"e98887c8461fd2b8b5feee5679a72c77\") " pod="kube-system/kube-apiserver-ci-3975.2.1-0-09c0a9ae8e" Sep 4 20:28:35.199916 kubelet[2535]: I0904 20:28:35.199384 2535 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3fa6c7c4e79f78e84cc02bd80e2a5a8d-ca-certs\") pod \"kube-controller-manager-ci-3975.2.1-0-09c0a9ae8e\" (UID: \"3fa6c7c4e79f78e84cc02bd80e2a5a8d\") " pod="kube-system/kube-controller-manager-ci-3975.2.1-0-09c0a9ae8e" Sep 4 20:28:35.199916 kubelet[2535]: I0904 20:28:35.199414 2535 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3fa6c7c4e79f78e84cc02bd80e2a5a8d-flexvolume-dir\") pod \"kube-controller-manager-ci-3975.2.1-0-09c0a9ae8e\" (UID: \"3fa6c7c4e79f78e84cc02bd80e2a5a8d\") " pod="kube-system/kube-controller-manager-ci-3975.2.1-0-09c0a9ae8e" Sep 4 20:28:35.199916 kubelet[2535]: I0904 20:28:35.199434 2535 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3fa6c7c4e79f78e84cc02bd80e2a5a8d-kubeconfig\") pod \"kube-controller-manager-ci-3975.2.1-0-09c0a9ae8e\" (UID: \"3fa6c7c4e79f78e84cc02bd80e2a5a8d\") " pod="kube-system/kube-controller-manager-ci-3975.2.1-0-09c0a9ae8e" Sep 4 20:28:35.199916 kubelet[2535]: 
I0904 20:28:35.199461 2535 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d7dfdf6af20f66f7f0fc293b0c7dd122-kubeconfig\") pod \"kube-scheduler-ci-3975.2.1-0-09c0a9ae8e\" (UID: \"d7dfdf6af20f66f7f0fc293b0c7dd122\") " pod="kube-system/kube-scheduler-ci-3975.2.1-0-09c0a9ae8e" Sep 4 20:28:35.431997 kubelet[2535]: E0904 20:28:35.431155 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 4 20:28:35.431997 kubelet[2535]: E0904 20:28:35.431797 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 4 20:28:35.432589 kubelet[2535]: E0904 20:28:35.432564 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 4 20:28:35.973090 kubelet[2535]: I0904 20:28:35.972745 2535 apiserver.go:52] "Watching apiserver" Sep 4 20:28:35.998921 kubelet[2535]: I0904 20:28:35.998834 2535 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Sep 4 20:28:36.046863 kubelet[2535]: E0904 20:28:36.046251 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 4 20:28:36.048509 kubelet[2535]: E0904 20:28:36.048462 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 4 20:28:36.066583 kubelet[2535]: W0904 20:28:36.065961 2535 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 4 20:28:36.066583 kubelet[2535]: E0904 20:28:36.066043 2535 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3975.2.1-0-09c0a9ae8e\" already exists" pod="kube-system/kube-apiserver-ci-3975.2.1-0-09c0a9ae8e" Sep 4 20:28:36.066583 kubelet[2535]: E0904 20:28:36.066489 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 4 20:28:36.095212 kubelet[2535]: I0904 20:28:36.095143 2535 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3975.2.1-0-09c0a9ae8e" podStartSLOduration=1.095121306 podStartE2EDuration="1.095121306s" podCreationTimestamp="2024-09-04 20:28:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 20:28:36.086190293 +0000 UTC m=+1.209228973" watchObservedRunningTime="2024-09-04 20:28:36.095121306 +0000 UTC m=+1.218159979" Sep 4 20:28:36.113512 kubelet[2535]: I0904 20:28:36.113215 2535 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3975.2.1-0-09c0a9ae8e" podStartSLOduration=1.113195386 podStartE2EDuration="1.113195386s" podCreationTimestamp="2024-09-04 20:28:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 20:28:36.099715392 +0000 UTC m=+1.222754073" watchObservedRunningTime="2024-09-04 20:28:36.113195386 +0000 UTC m=+1.236234068" Sep 4 20:28:36.131118 kubelet[2535]: I0904 20:28:36.130986 2535 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3975.2.1-0-09c0a9ae8e" podStartSLOduration=1.130964018 podStartE2EDuration="1.130964018s" podCreationTimestamp="2024-09-04 20:28:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 20:28:36.114699258 +0000 UTC m=+1.237737937" watchObservedRunningTime="2024-09-04 20:28:36.130964018 +0000 UTC m=+1.254002699" Sep 4 20:28:37.048416 kubelet[2535]: E0904 20:28:37.047931 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 4 20:28:37.122007 kubelet[2535]: E0904 20:28:37.121595 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 4 20:28:38.049437 kubelet[2535]: E0904 20:28:38.049070 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 4 20:28:40.792406 systemd-resolved[1323]: Clock change detected. Flushing caches. Sep 4 20:28:40.792601 systemd-timesyncd[1343]: Contacted time server 23.150.40.242:123 (2.flatcar.pool.ntp.org). Sep 4 20:28:40.792668 systemd-timesyncd[1343]: Initial clock synchronization to Wed 2024-09-04 20:28:40.792303 UTC. Sep 4 20:28:41.491722 sudo[1651]: pam_unix(sudo:session): session closed for user root Sep 4 20:28:41.498542 sshd[1648]: pam_unix(sshd:session): session closed for user core Sep 4 20:28:41.503573 systemd-logind[1446]: Session 7 logged out. Waiting for processes to exit. Sep 4 20:28:41.503908 systemd[1]: sshd@6-64.23.130.28:22-139.178.68.195:56970.service: Deactivated successfully. Sep 4 20:28:41.506517 systemd[1]: session-7.scope: Deactivated successfully. Sep 4 20:28:41.506826 systemd[1]: session-7.scope: Consumed 5.560s CPU time, 138.2M memory peak, 0B memory swap peak. Sep 4 20:28:41.509306 systemd-logind[1446]: Removed session 7. 
Sep 4 20:28:46.031870 kubelet[2535]: E0904 20:28:46.030458 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 4 20:28:46.716856 kubelet[2535]: E0904 20:28:46.716500 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 4 20:28:47.779889 kubelet[2535]: E0904 20:28:47.779830 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 4 20:28:47.812613 kubelet[2535]: E0904 20:28:47.812567 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 4 20:28:49.310370 update_engine[1447]: I0904 20:28:49.309526 1447 update_attempter.cc:509] Updating boot flags... Sep 4 20:28:49.350557 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2623) Sep 4 20:28:49.416287 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2624) Sep 4 20:28:49.476956 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2624) Sep 4 20:28:50.433643 kubelet[2535]: I0904 20:28:50.433522 2535 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 4 20:28:50.435056 containerd[1468]: time="2024-09-04T20:28:50.434968959Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 4 20:28:50.436776 kubelet[2535]: I0904 20:28:50.435324 2535 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 4 20:28:51.216394 kubelet[2535]: I0904 20:28:51.216328 2535 topology_manager.go:215] "Topology Admit Handler" podUID="cfbae228-b9c0-4b99-88ff-2bd73bc7852e" podNamespace="kube-system" podName="kube-proxy-m496l" Sep 4 20:28:51.228675 systemd[1]: Created slice kubepods-besteffort-podcfbae228_b9c0_4b99_88ff_2bd73bc7852e.slice - libcontainer container kubepods-besteffort-podcfbae228_b9c0_4b99_88ff_2bd73bc7852e.slice. 
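Note the slice created for the kube-proxy pod: with the systemd cgroup driver, per-pod slices follow the pattern kubepods-&lt;qos&gt;-pod&lt;uid&gt;.slice with the dashes in the pod UID replaced by underscores, which is how cfbae228-b9c0-4b99-88ff-2bd73bc7852e becomes kubepods-besteffort-podcfbae228_b9c0_4b99_88ff_2bd73bc7852e.slice. The sketch below just reproduces that visible pattern; it is not kubelet code, and it ignores the Guaranteed case where the QoS segment is laid out differently.

```python
# Sketch of the per-pod slice naming used with the systemd cgroup driver, matching
# the unit created above for kube-proxy: kubepods-<qos>-pod<uid>.slice, with '-' in
# the pod UID mapped to '_'.
def pod_slice_name(pod_uid: str, qos: str = "besteffort") -> str:
    return f"kubepods-{qos}-pod{pod_uid.replace('-', '_')}.slice"

print(pod_slice_name("cfbae228-b9c0-4b99-88ff-2bd73bc7852e"))
# kubepods-besteffort-podcfbae228_b9c0_4b99_88ff_2bd73bc7852e.slice
```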
Sep 4 20:28:51.350901 kubelet[2535]: I0904 20:28:51.350502 2535 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cfbae228-b9c0-4b99-88ff-2bd73bc7852e-xtables-lock\") pod \"kube-proxy-m496l\" (UID: \"cfbae228-b9c0-4b99-88ff-2bd73bc7852e\") " pod="kube-system/kube-proxy-m496l" Sep 4 20:28:51.350901 kubelet[2535]: I0904 20:28:51.350556 2535 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8v9ms\" (UniqueName: \"kubernetes.io/projected/cfbae228-b9c0-4b99-88ff-2bd73bc7852e-kube-api-access-8v9ms\") pod \"kube-proxy-m496l\" (UID: \"cfbae228-b9c0-4b99-88ff-2bd73bc7852e\") " pod="kube-system/kube-proxy-m496l" Sep 4 20:28:51.350901 kubelet[2535]: I0904 20:28:51.350593 2535 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cfbae228-b9c0-4b99-88ff-2bd73bc7852e-kube-proxy\") pod \"kube-proxy-m496l\" (UID: \"cfbae228-b9c0-4b99-88ff-2bd73bc7852e\") " pod="kube-system/kube-proxy-m496l" Sep 4 20:28:51.350901 kubelet[2535]: I0904 20:28:51.350610 2535 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cfbae228-b9c0-4b99-88ff-2bd73bc7852e-lib-modules\") pod \"kube-proxy-m496l\" (UID: \"cfbae228-b9c0-4b99-88ff-2bd73bc7852e\") " pod="kube-system/kube-proxy-m496l" Sep 4 20:28:51.513123 kubelet[2535]: I0904 20:28:51.513078 2535 topology_manager.go:215] "Topology Admit Handler" podUID="522d9d1a-0b6e-425f-a0d6-97866f35c888" podNamespace="tigera-operator" podName="tigera-operator-77f994b5bb-k2vbt" Sep 4 20:28:51.523173 systemd[1]: Created slice kubepods-besteffort-pod522d9d1a_0b6e_425f_a0d6_97866f35c888.slice - libcontainer container kubepods-besteffort-pod522d9d1a_0b6e_425f_a0d6_97866f35c888.slice. Sep 4 20:28:51.536797 kubelet[2535]: E0904 20:28:51.536763 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 4 20:28:51.539293 containerd[1468]: time="2024-09-04T20:28:51.538061999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-m496l,Uid:cfbae228-b9c0-4b99-88ff-2bd73bc7852e,Namespace:kube-system,Attempt:0,}" Sep 4 20:28:51.551632 kubelet[2535]: I0904 20:28:51.551514 2535 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/522d9d1a-0b6e-425f-a0d6-97866f35c888-var-lib-calico\") pod \"tigera-operator-77f994b5bb-k2vbt\" (UID: \"522d9d1a-0b6e-425f-a0d6-97866f35c888\") " pod="tigera-operator/tigera-operator-77f994b5bb-k2vbt" Sep 4 20:28:51.551632 kubelet[2535]: I0904 20:28:51.551562 2535 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zs8qc\" (UniqueName: \"kubernetes.io/projected/522d9d1a-0b6e-425f-a0d6-97866f35c888-kube-api-access-zs8qc\") pod \"tigera-operator-77f994b5bb-k2vbt\" (UID: \"522d9d1a-0b6e-425f-a0d6-97866f35c888\") " pod="tigera-operator/tigera-operator-77f994b5bb-k2vbt" Sep 4 20:28:51.584427 containerd[1468]: time="2024-09-04T20:28:51.583938803Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 20:28:51.584427 containerd[1468]: time="2024-09-04T20:28:51.584026147Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 20:28:51.584427 containerd[1468]: time="2024-09-04T20:28:51.584047890Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 20:28:51.584427 containerd[1468]: time="2024-09-04T20:28:51.584059953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 20:28:51.617508 systemd[1]: Started cri-containerd-71d4215e6ca3a6e243896a08714f95b6ed8b070aa209e8b184611f8b89f73d62.scope - libcontainer container 71d4215e6ca3a6e243896a08714f95b6ed8b070aa209e8b184611f8b89f73d62. Sep 4 20:28:51.668438 containerd[1468]: time="2024-09-04T20:28:51.667672677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-m496l,Uid:cfbae228-b9c0-4b99-88ff-2bd73bc7852e,Namespace:kube-system,Attempt:0,} returns sandbox id \"71d4215e6ca3a6e243896a08714f95b6ed8b070aa209e8b184611f8b89f73d62\"" Sep 4 20:28:51.669461 kubelet[2535]: E0904 20:28:51.669430 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 4 20:28:51.682827 containerd[1468]: time="2024-09-04T20:28:51.682761262Z" level=info msg="CreateContainer within sandbox \"71d4215e6ca3a6e243896a08714f95b6ed8b070aa209e8b184611f8b89f73d62\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 4 20:28:51.703145 containerd[1468]: time="2024-09-04T20:28:51.703074225Z" level=info msg="CreateContainer within sandbox \"71d4215e6ca3a6e243896a08714f95b6ed8b070aa209e8b184611f8b89f73d62\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"923854f6e7d85970437342e16c3ef8b62ba56e7429ce26b7f383aca478e1b09b\"" Sep 4 20:28:51.705368 containerd[1468]: time="2024-09-04T20:28:51.704069330Z" level=info msg="StartContainer for \"923854f6e7d85970437342e16c3ef8b62ba56e7429ce26b7f383aca478e1b09b\"" Sep 4 20:28:51.750502 systemd[1]: Started cri-containerd-923854f6e7d85970437342e16c3ef8b62ba56e7429ce26b7f383aca478e1b09b.scope - libcontainer container 923854f6e7d85970437342e16c3ef8b62ba56e7429ce26b7f383aca478e1b09b. Sep 4 20:28:51.793241 containerd[1468]: time="2024-09-04T20:28:51.792182786Z" level=info msg="StartContainer for \"923854f6e7d85970437342e16c3ef8b62ba56e7429ce26b7f383aca478e1b09b\" returns successfully" Sep 4 20:28:51.829605 containerd[1468]: time="2024-09-04T20:28:51.829098878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-77f994b5bb-k2vbt,Uid:522d9d1a-0b6e-425f-a0d6-97866f35c888,Namespace:tigera-operator,Attempt:0,}" Sep 4 20:28:51.865652 containerd[1468]: time="2024-09-04T20:28:51.865494332Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 20:28:51.866005 containerd[1468]: time="2024-09-04T20:28:51.865608965Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 20:28:51.866005 containerd[1468]: time="2024-09-04T20:28:51.865666948Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 20:28:51.866005 containerd[1468]: time="2024-09-04T20:28:51.865714273Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 20:28:51.915514 systemd[1]: Started cri-containerd-94c984393988a9094fc1434c2052dbc2d87d7367dab1fd08dd21d739bae58c9e.scope - libcontainer container 94c984393988a9094fc1434c2052dbc2d87d7367dab1fd08dd21d739bae58c9e. Sep 4 20:28:51.982633 containerd[1468]: time="2024-09-04T20:28:51.982144574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-77f994b5bb-k2vbt,Uid:522d9d1a-0b6e-425f-a0d6-97866f35c888,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"94c984393988a9094fc1434c2052dbc2d87d7367dab1fd08dd21d739bae58c9e\"" Sep 4 20:28:51.987062 containerd[1468]: time="2024-09-04T20:28:51.987011996Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\"" Sep 4 20:28:52.489947 systemd[1]: run-containerd-runc-k8s.io-71d4215e6ca3a6e243896a08714f95b6ed8b070aa209e8b184611f8b89f73d62-runc.w7cm7y.mount: Deactivated successfully. Sep 4 20:28:52.740790 kubelet[2535]: E0904 20:28:52.740667 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 4 20:28:52.754111 kubelet[2535]: I0904 20:28:52.754020 2535 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-m496l" podStartSLOduration=1.7539938510000002 podStartE2EDuration="1.753993851s" podCreationTimestamp="2024-09-04 20:28:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 20:28:52.753747415 +0000 UTC m=+17.224316225" watchObservedRunningTime="2024-09-04 20:28:52.753993851 +0000 UTC m=+17.224562662" Sep 4 20:28:53.303222 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount574360829.mount: Deactivated successfully. 
Sep 4 20:28:53.743169 kubelet[2535]: E0904 20:28:53.742595 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 4 20:28:53.920312 containerd[1468]: time="2024-09-04T20:28:53.918904892Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 20:28:53.920312 containerd[1468]: time="2024-09-04T20:28:53.919048403Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.3: active requests=0, bytes read=22136541" Sep 4 20:28:53.920312 containerd[1468]: time="2024-09-04T20:28:53.920202719Z" level=info msg="ImageCreate event name:\"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 20:28:53.922621 containerd[1468]: time="2024-09-04T20:28:53.922574362Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 20:28:53.923897 containerd[1468]: time="2024-09-04T20:28:53.923851936Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.3\" with image id \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\", repo tag \"quay.io/tigera/operator:v1.34.3\", repo digest \"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\", size \"22130728\" in 1.936788297s" Sep 4 20:28:53.924078 containerd[1468]: time="2024-09-04T20:28:53.924052175Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\" returns image reference \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\"" Sep 4 20:28:53.941187 containerd[1468]: time="2024-09-04T20:28:53.941131141Z" level=info msg="CreateContainer within sandbox \"94c984393988a9094fc1434c2052dbc2d87d7367dab1fd08dd21d739bae58c9e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 4 20:28:53.956663 containerd[1468]: time="2024-09-04T20:28:53.956572973Z" level=info msg="CreateContainer within sandbox \"94c984393988a9094fc1434c2052dbc2d87d7367dab1fd08dd21d739bae58c9e\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"cb0f60a2e3545e1872156569433ed084a8971f983bb390435fa58c5032313873\"" Sep 4 20:28:53.959349 containerd[1468]: time="2024-09-04T20:28:53.957508742Z" level=info msg="StartContainer for \"cb0f60a2e3545e1872156569433ed084a8971f983bb390435fa58c5032313873\"" Sep 4 20:28:54.015614 systemd[1]: Started cri-containerd-cb0f60a2e3545e1872156569433ed084a8971f983bb390435fa58c5032313873.scope - libcontainer container cb0f60a2e3545e1872156569433ed084a8971f983bb390435fa58c5032313873. 
Sep 4 20:28:54.054351 containerd[1468]: time="2024-09-04T20:28:54.054216061Z" level=info msg="StartContainer for \"cb0f60a2e3545e1872156569433ed084a8971f983bb390435fa58c5032313873\" returns successfully" Sep 4 20:28:54.806326 kubelet[2535]: I0904 20:28:54.806255 2535 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-77f994b5bb-k2vbt" podStartSLOduration=1.8604296150000001 podStartE2EDuration="3.806234143s" podCreationTimestamp="2024-09-04 20:28:51 +0000 UTC" firstStartedPulling="2024-09-04 20:28:51.986360949 +0000 UTC m=+16.456929737" lastFinishedPulling="2024-09-04 20:28:53.932165466 +0000 UTC m=+18.402734265" observedRunningTime="2024-09-04 20:28:54.805247122 +0000 UTC m=+19.275815982" watchObservedRunningTime="2024-09-04 20:28:54.806234143 +0000 UTC m=+19.276802942" Sep 4 20:28:57.167887 kubelet[2535]: I0904 20:28:57.167813 2535 topology_manager.go:215] "Topology Admit Handler" podUID="47913963-0021-43f4-a678-6cacd755f794" podNamespace="calico-system" podName="calico-typha-55bf7c7f47-nx79w" Sep 4 20:28:57.179940 systemd[1]: Created slice kubepods-besteffort-pod47913963_0021_43f4_a678_6cacd755f794.slice - libcontainer container kubepods-besteffort-pod47913963_0021_43f4_a678_6cacd755f794.slice. Sep 4 20:28:57.192572 kubelet[2535]: I0904 20:28:57.191650 2535 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47913963-0021-43f4-a678-6cacd755f794-tigera-ca-bundle\") pod \"calico-typha-55bf7c7f47-nx79w\" (UID: \"47913963-0021-43f4-a678-6cacd755f794\") " pod="calico-system/calico-typha-55bf7c7f47-nx79w" Sep 4 20:28:57.192572 kubelet[2535]: I0904 20:28:57.191722 2535 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/47913963-0021-43f4-a678-6cacd755f794-typha-certs\") pod \"calico-typha-55bf7c7f47-nx79w\" (UID: \"47913963-0021-43f4-a678-6cacd755f794\") " pod="calico-system/calico-typha-55bf7c7f47-nx79w" Sep 4 20:28:57.192572 kubelet[2535]: I0904 20:28:57.191754 2535 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67f22\" (UniqueName: \"kubernetes.io/projected/47913963-0021-43f4-a678-6cacd755f794-kube-api-access-67f22\") pod \"calico-typha-55bf7c7f47-nx79w\" (UID: \"47913963-0021-43f4-a678-6cacd755f794\") " pod="calico-system/calico-typha-55bf7c7f47-nx79w" Sep 4 20:28:57.343187 kubelet[2535]: I0904 20:28:57.343112 2535 topology_manager.go:215] "Topology Admit Handler" podUID="000e2ec9-7c58-4c39-a0c6-dc41ecb69b1a" podNamespace="calico-system" podName="calico-node-q88qq" Sep 4 20:28:57.355582 systemd[1]: Created slice kubepods-besteffort-pod000e2ec9_7c58_4c39_a0c6_dc41ecb69b1a.slice - libcontainer container kubepods-besteffort-pod000e2ec9_7c58_4c39_a0c6_dc41ecb69b1a.slice. 
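The podStartSLOduration figures in the pod_startup_latency_tracker entries are consistent with the tracker subtracting image-pull time from the end-to-end startup duration. For the tigera-operator pod above: lastFinishedPulling - firstStartedPulling = 18.402734265 - 16.456929737 = 1.945804528 s of pulling, and 3.806234143 - 1.945804528 = 1.860429615 s, which matches the logged podStartSLOduration. For pods whose pull timestamps are the zero value, such as kube-proxy-m496l earlier, nothing is subtracted and the SLO duration equals the E2E duration.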
Sep 4 20:28:57.393129 kubelet[2535]: I0904 20:28:57.393051 2535 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l57zr\" (UniqueName: \"kubernetes.io/projected/000e2ec9-7c58-4c39-a0c6-dc41ecb69b1a-kube-api-access-l57zr\") pod \"calico-node-q88qq\" (UID: \"000e2ec9-7c58-4c39-a0c6-dc41ecb69b1a\") " pod="calico-system/calico-node-q88qq" Sep 4 20:28:57.393129 kubelet[2535]: I0904 20:28:57.393116 2535 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/000e2ec9-7c58-4c39-a0c6-dc41ecb69b1a-cni-log-dir\") pod \"calico-node-q88qq\" (UID: \"000e2ec9-7c58-4c39-a0c6-dc41ecb69b1a\") " pod="calico-system/calico-node-q88qq" Sep 4 20:28:57.393129 kubelet[2535]: I0904 20:28:57.393136 2535 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/000e2ec9-7c58-4c39-a0c6-dc41ecb69b1a-tigera-ca-bundle\") pod \"calico-node-q88qq\" (UID: \"000e2ec9-7c58-4c39-a0c6-dc41ecb69b1a\") " pod="calico-system/calico-node-q88qq" Sep 4 20:28:57.393129 kubelet[2535]: I0904 20:28:57.393155 2535 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/000e2ec9-7c58-4c39-a0c6-dc41ecb69b1a-node-certs\") pod \"calico-node-q88qq\" (UID: \"000e2ec9-7c58-4c39-a0c6-dc41ecb69b1a\") " pod="calico-system/calico-node-q88qq" Sep 4 20:28:57.393525 kubelet[2535]: I0904 20:28:57.393172 2535 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/000e2ec9-7c58-4c39-a0c6-dc41ecb69b1a-cni-bin-dir\") pod \"calico-node-q88qq\" (UID: \"000e2ec9-7c58-4c39-a0c6-dc41ecb69b1a\") " pod="calico-system/calico-node-q88qq" Sep 4 20:28:57.393525 kubelet[2535]: I0904 20:28:57.393191 2535 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/000e2ec9-7c58-4c39-a0c6-dc41ecb69b1a-policysync\") pod \"calico-node-q88qq\" (UID: \"000e2ec9-7c58-4c39-a0c6-dc41ecb69b1a\") " pod="calico-system/calico-node-q88qq" Sep 4 20:28:57.393525 kubelet[2535]: I0904 20:28:57.393206 2535 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/000e2ec9-7c58-4c39-a0c6-dc41ecb69b1a-var-lib-calico\") pod \"calico-node-q88qq\" (UID: \"000e2ec9-7c58-4c39-a0c6-dc41ecb69b1a\") " pod="calico-system/calico-node-q88qq" Sep 4 20:28:57.393525 kubelet[2535]: I0904 20:28:57.393229 2535 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/000e2ec9-7c58-4c39-a0c6-dc41ecb69b1a-cni-net-dir\") pod \"calico-node-q88qq\" (UID: \"000e2ec9-7c58-4c39-a0c6-dc41ecb69b1a\") " pod="calico-system/calico-node-q88qq" Sep 4 20:28:57.393525 kubelet[2535]: I0904 20:28:57.393245 2535 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/000e2ec9-7c58-4c39-a0c6-dc41ecb69b1a-lib-modules\") pod \"calico-node-q88qq\" (UID: \"000e2ec9-7c58-4c39-a0c6-dc41ecb69b1a\") " pod="calico-system/calico-node-q88qq" Sep 4 20:28:57.393745 kubelet[2535]: I0904 20:28:57.393258 2535 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/000e2ec9-7c58-4c39-a0c6-dc41ecb69b1a-xtables-lock\") pod \"calico-node-q88qq\" (UID: \"000e2ec9-7c58-4c39-a0c6-dc41ecb69b1a\") " pod="calico-system/calico-node-q88qq" Sep 4 20:28:57.393745 kubelet[2535]: I0904 20:28:57.393274 2535 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/000e2ec9-7c58-4c39-a0c6-dc41ecb69b1a-var-run-calico\") pod \"calico-node-q88qq\" (UID: \"000e2ec9-7c58-4c39-a0c6-dc41ecb69b1a\") " pod="calico-system/calico-node-q88qq" Sep 4 20:28:57.393745 kubelet[2535]: I0904 20:28:57.393304 2535 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/000e2ec9-7c58-4c39-a0c6-dc41ecb69b1a-flexvol-driver-host\") pod \"calico-node-q88qq\" (UID: \"000e2ec9-7c58-4c39-a0c6-dc41ecb69b1a\") " pod="calico-system/calico-node-q88qq" Sep 4 20:28:57.483974 kubelet[2535]: E0904 20:28:57.483810 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 4 20:28:57.486119 containerd[1468]: time="2024-09-04T20:28:57.485533215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-55bf7c7f47-nx79w,Uid:47913963-0021-43f4-a678-6cacd755f794,Namespace:calico-system,Attempt:0,}" Sep 4 20:28:57.505451 kubelet[2535]: E0904 20:28:57.504065 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:28:57.505451 kubelet[2535]: W0904 20:28:57.504111 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:28:57.505451 kubelet[2535]: E0904 20:28:57.504167 2535 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 20:28:57.505451 kubelet[2535]: E0904 20:28:57.505237 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:28:57.505451 kubelet[2535]: W0904 20:28:57.505272 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:28:57.507462 kubelet[2535]: E0904 20:28:57.505851 2535 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 20:28:57.508174 kubelet[2535]: E0904 20:28:57.508096 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:28:57.508174 kubelet[2535]: W0904 20:28:57.508128 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:28:57.509748 kubelet[2535]: E0904 20:28:57.509526 2535 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 20:28:57.511829 kubelet[2535]: E0904 20:28:57.511614 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:28:57.511829 kubelet[2535]: W0904 20:28:57.511649 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:28:57.512302 kubelet[2535]: E0904 20:28:57.512061 2535 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 20:28:57.512511 kubelet[2535]: E0904 20:28:57.512491 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:28:57.512699 kubelet[2535]: W0904 20:28:57.512597 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:28:57.512699 kubelet[2535]: E0904 20:28:57.512670 2535 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 20:28:57.513664 kubelet[2535]: E0904 20:28:57.513348 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:28:57.513664 kubelet[2535]: W0904 20:28:57.513377 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:28:57.513664 kubelet[2535]: E0904 20:28:57.513521 2535 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 20:28:57.515331 kubelet[2535]: E0904 20:28:57.515080 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:28:57.515331 kubelet[2535]: W0904 20:28:57.515111 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:28:57.515331 kubelet[2535]: E0904 20:28:57.515172 2535 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 20:28:57.516427 kubelet[2535]: E0904 20:28:57.516400 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:28:57.519279 kubelet[2535]: W0904 20:28:57.516557 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:28:57.519670 kubelet[2535]: E0904 20:28:57.519498 2535 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 20:28:57.520251 kubelet[2535]: E0904 20:28:57.520040 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:28:57.520251 kubelet[2535]: W0904 20:28:57.520067 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:28:57.520251 kubelet[2535]: E0904 20:28:57.520143 2535 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 20:28:57.520685 kubelet[2535]: E0904 20:28:57.520580 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:28:57.520685 kubelet[2535]: W0904 20:28:57.520604 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:28:57.520803 kubelet[2535]: E0904 20:28:57.520733 2535 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 20:28:57.521244 kubelet[2535]: E0904 20:28:57.521133 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:28:57.521244 kubelet[2535]: W0904 20:28:57.521152 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:28:57.521383 kubelet[2535]: E0904 20:28:57.521304 2535 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 20:28:57.521888 kubelet[2535]: E0904 20:28:57.521736 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:28:57.521888 kubelet[2535]: W0904 20:28:57.521767 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:28:57.522004 kubelet[2535]: E0904 20:28:57.521900 2535 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 20:28:57.522466 kubelet[2535]: E0904 20:28:57.522433 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:28:57.523567 kubelet[2535]: W0904 20:28:57.523415 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:28:57.523883 kubelet[2535]: E0904 20:28:57.523865 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:28:57.524160 kubelet[2535]: W0904 20:28:57.523980 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:28:57.525050 kubelet[2535]: E0904 20:28:57.524751 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:28:57.525050 kubelet[2535]: W0904 20:28:57.524769 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:28:57.525050 kubelet[2535]: I0904 20:28:57.524825 2535 topology_manager.go:215] "Topology Admit Handler" podUID="c943a141-800d-4fd7-b526-a302d60b317a" podNamespace="calico-system" podName="csi-node-driver-bsn66" Sep 4 20:28:57.525214 kubelet[2535]: E0904 20:28:57.525111 2535 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bsn66" podUID="c943a141-800d-4fd7-b526-a302d60b317a" Sep 4 20:28:57.525710 kubelet[2535]: E0904 20:28:57.525421 2535 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 20:28:57.525710 kubelet[2535]: E0904 20:28:57.525474 2535 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 20:28:57.525710 kubelet[2535]: E0904 20:28:57.525489 2535 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 20:28:57.526490 kubelet[2535]: E0904 20:28:57.525893 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:28:57.526490 kubelet[2535]: W0904 20:28:57.525917 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:28:57.527550 kubelet[2535]: E0904 20:28:57.527005 2535 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 20:28:57.528036 kubelet[2535]: E0904 20:28:57.527848 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:28:57.528036 kubelet[2535]: W0904 20:28:57.527871 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:28:57.528501 kubelet[2535]: E0904 20:28:57.528278 2535 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 20:28:57.529189 kubelet[2535]: E0904 20:28:57.528981 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:28:57.529189 kubelet[2535]: W0904 20:28:57.529003 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:28:57.531967 kubelet[2535]: E0904 20:28:57.531026 2535 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 20:28:57.533530 kubelet[2535]: E0904 20:28:57.533463 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:28:57.537836 kubelet[2535]: W0904 20:28:57.537619 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:28:57.538059 kubelet[2535]: E0904 20:28:57.538034 2535 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 20:28:57.539287 kubelet[2535]: E0904 20:28:57.538258 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:28:57.539287 kubelet[2535]: W0904 20:28:57.539021 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:28:57.543684 kubelet[2535]: E0904 20:28:57.542123 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:28:57.543684 kubelet[2535]: W0904 20:28:57.542150 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:28:57.543684 kubelet[2535]: E0904 20:28:57.542965 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:28:57.543684 kubelet[2535]: W0904 20:28:57.543255 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:28:57.547346 kubelet[2535]: E0904 20:28:57.545482 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:28:57.547346 kubelet[2535]: W0904 20:28:57.545509 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:28:57.547346 kubelet[2535]: E0904 20:28:57.546388 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:28:57.548561 kubelet[2535]: W0904 20:28:57.548322 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:28:57.551258 kubelet[2535]: E0904 20:28:57.549641 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:28:57.551258 kubelet[2535]: W0904 20:28:57.549901 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:28:57.551878 kubelet[2535]: E0904 20:28:57.551754 2535 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 20:28:57.555927 kubelet[2535]: E0904 20:28:57.553951 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:28:57.555927 kubelet[2535]: W0904 20:28:57.553987 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:28:57.556738 kubelet[2535]: E0904 20:28:57.556697 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:28:57.556936 kubelet[2535]: W0904 20:28:57.556910 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:28:57.557137 kubelet[2535]: E0904 20:28:57.557035 2535 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 20:28:57.566980 containerd[1468]: time="2024-09-04T20:28:57.564181113Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 20:28:57.569263 kubelet[2535]: E0904 20:28:57.567895 2535 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 20:28:57.569485 containerd[1468]: time="2024-09-04T20:28:57.566138692Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 20:28:57.569485 containerd[1468]: time="2024-09-04T20:28:57.568886838Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 20:28:57.569485 containerd[1468]: time="2024-09-04T20:28:57.568954047Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 20:28:57.571022 kubelet[2535]: E0904 20:28:57.569719 2535 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 20:28:57.571611 kubelet[2535]: E0904 20:28:57.571571 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:28:57.571611 kubelet[2535]: W0904 20:28:57.571599 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:28:57.571774 kubelet[2535]: E0904 20:28:57.571625 2535 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 20:28:57.571774 kubelet[2535]: E0904 20:28:57.571664 2535 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 20:28:57.571774 kubelet[2535]: E0904 20:28:57.571676 2535 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 20:28:57.571774 kubelet[2535]: E0904 20:28:57.571687 2535 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 20:28:57.571774 kubelet[2535]: E0904 20:28:57.571698 2535 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 20:28:57.580019 kubelet[2535]: E0904 20:28:57.578405 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:28:57.580314 kubelet[2535]: W0904 20:28:57.580273 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:28:57.580947 kubelet[2535]: E0904 20:28:57.580910 2535 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 20:28:57.589289 kubelet[2535]: E0904 20:28:57.587933 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:28:57.589289 kubelet[2535]: W0904 20:28:57.587968 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:28:57.589289 kubelet[2535]: E0904 20:28:57.588008 2535 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 20:28:57.610397 kubelet[2535]: E0904 20:28:57.608144 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:28:57.610397 kubelet[2535]: W0904 20:28:57.608179 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:28:57.610397 kubelet[2535]: E0904 20:28:57.608209 2535 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 20:28:57.615494 kubelet[2535]: E0904 20:28:57.613294 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:28:57.615494 kubelet[2535]: W0904 20:28:57.613322 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:28:57.615494 kubelet[2535]: E0904 20:28:57.613351 2535 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 20:28:57.615494 kubelet[2535]: E0904 20:28:57.613806 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:28:57.615494 kubelet[2535]: W0904 20:28:57.613819 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:28:57.615494 kubelet[2535]: E0904 20:28:57.613835 2535 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 20:28:57.615494 kubelet[2535]: E0904 20:28:57.614395 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:28:57.615494 kubelet[2535]: W0904 20:28:57.614409 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:28:57.615494 kubelet[2535]: E0904 20:28:57.614422 2535 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 20:28:57.615494 kubelet[2535]: E0904 20:28:57.614816 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:28:57.616020 kubelet[2535]: W0904 20:28:57.614827 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:28:57.616020 kubelet[2535]: E0904 20:28:57.614839 2535 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 20:28:57.616020 kubelet[2535]: E0904 20:28:57.615382 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:28:57.616020 kubelet[2535]: W0904 20:28:57.615800 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:28:57.616020 kubelet[2535]: E0904 20:28:57.615954 2535 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 20:28:57.620842 kubelet[2535]: E0904 20:28:57.616857 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:28:57.620842 kubelet[2535]: W0904 20:28:57.616876 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:28:57.620842 kubelet[2535]: E0904 20:28:57.616889 2535 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 20:28:57.620842 kubelet[2535]: E0904 20:28:57.617371 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:28:57.620842 kubelet[2535]: W0904 20:28:57.617383 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:28:57.620842 kubelet[2535]: E0904 20:28:57.617402 2535 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 20:28:57.620842 kubelet[2535]: E0904 20:28:57.617582 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:28:57.620842 kubelet[2535]: W0904 20:28:57.617589 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:28:57.620842 kubelet[2535]: E0904 20:28:57.617598 2535 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 20:28:57.620842 kubelet[2535]: E0904 20:28:57.617805 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:28:57.621627 kubelet[2535]: W0904 20:28:57.617815 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:28:57.621627 kubelet[2535]: E0904 20:28:57.617831 2535 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 20:28:57.621627 kubelet[2535]: E0904 20:28:57.618028 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:28:57.621627 kubelet[2535]: W0904 20:28:57.618036 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:28:57.621627 kubelet[2535]: E0904 20:28:57.618056 2535 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 20:28:57.621627 kubelet[2535]: E0904 20:28:57.618215 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:28:57.621627 kubelet[2535]: W0904 20:28:57.618293 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:28:57.621627 kubelet[2535]: E0904 20:28:57.618303 2535 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 20:28:57.621627 kubelet[2535]: E0904 20:28:57.618490 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:28:57.621627 kubelet[2535]: W0904 20:28:57.618499 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:28:57.622054 kubelet[2535]: E0904 20:28:57.618508 2535 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 20:28:57.622054 kubelet[2535]: E0904 20:28:57.618665 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:28:57.622054 kubelet[2535]: W0904 20:28:57.618673 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:28:57.622054 kubelet[2535]: E0904 20:28:57.618681 2535 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 20:28:57.622054 kubelet[2535]: E0904 20:28:57.618864 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:28:57.622054 kubelet[2535]: W0904 20:28:57.618871 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:28:57.622054 kubelet[2535]: E0904 20:28:57.618879 2535 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 20:28:57.622054 kubelet[2535]: E0904 20:28:57.619028 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:28:57.622054 kubelet[2535]: W0904 20:28:57.619035 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:28:57.622054 kubelet[2535]: E0904 20:28:57.619046 2535 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 20:28:57.622499 kubelet[2535]: E0904 20:28:57.619299 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:28:57.622499 kubelet[2535]: W0904 20:28:57.619314 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:28:57.622499 kubelet[2535]: E0904 20:28:57.619332 2535 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 20:28:57.622499 kubelet[2535]: E0904 20:28:57.619669 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:28:57.622499 kubelet[2535]: W0904 20:28:57.619679 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:28:57.622499 kubelet[2535]: E0904 20:28:57.619691 2535 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 20:28:57.622499 kubelet[2535]: E0904 20:28:57.619921 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:28:57.622499 kubelet[2535]: W0904 20:28:57.619929 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:28:57.622499 kubelet[2535]: E0904 20:28:57.619957 2535 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 20:28:57.622499 kubelet[2535]: E0904 20:28:57.620121 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:28:57.622888 kubelet[2535]: W0904 20:28:57.620128 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:28:57.622888 kubelet[2535]: E0904 20:28:57.620137 2535 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 20:28:57.622888 kubelet[2535]: E0904 20:28:57.620336 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:28:57.622888 kubelet[2535]: W0904 20:28:57.620344 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:28:57.622888 kubelet[2535]: E0904 20:28:57.620353 2535 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 20:28:57.622888 kubelet[2535]: E0904 20:28:57.620570 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:28:57.622888 kubelet[2535]: W0904 20:28:57.620578 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:28:57.622888 kubelet[2535]: E0904 20:28:57.620587 2535 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Sep 4 20:28:57.628994 kubelet[2535]: I0904 20:28:57.621380 2535 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c943a141-800d-4fd7-b526-a302d60b317a-socket-dir\") pod \"csi-node-driver-bsn66\" (UID: \"c943a141-800d-4fd7-b526-a302d60b317a\") " pod="calico-system/csi-node-driver-bsn66"
Sep 4 20:28:57.627516 systemd[1]: Started cri-containerd-5f8ef57d68b267d83fcb3ae4ac6991c48baa0df82e4499ab5ee36e813c00e7fd.scope - libcontainer container 5f8ef57d68b267d83fcb3ae4ac6991c48baa0df82e4499ab5ee36e813c00e7fd.
Sep 4 20:28:57.629526 kubelet[2535]: I0904 20:28:57.621858 2535 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c943a141-800d-4fd7-b526-a302d60b317a-registration-dir\") pod \"csi-node-driver-bsn66\" (UID: \"c943a141-800d-4fd7-b526-a302d60b317a\") " pod="calico-system/csi-node-driver-bsn66"
Sep 4 20:28:57.629526 kubelet[2535]: I0904 20:28:57.622173 2535 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c943a141-800d-4fd7-b526-a302d60b317a-kubelet-dir\") pod \"csi-node-driver-bsn66\" (UID: \"c943a141-800d-4fd7-b526-a302d60b317a\") " pod="calico-system/csi-node-driver-bsn66"
Sep 4 20:28:57.629729 kubelet[2535]: I0904 20:28:57.624957 2535 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c943a141-800d-4fd7-b526-a302d60b317a-varrun\") pod \"csi-node-driver-bsn66\" (UID: \"c943a141-800d-4fd7-b526-a302d60b317a\") " pod="calico-system/csi-node-driver-bsn66"
Sep 4 20:28:57.629729 kubelet[2535]: I0904 20:28:57.628178 2535 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztc2t\" (UniqueName: \"kubernetes.io/projected/c943a141-800d-4fd7-b526-a302d60b317a-kube-api-access-ztc2t\") pod \"csi-node-driver-bsn66\" (UID: \"c943a141-800d-4fd7-b526-a302d60b317a\") " pod="calico-system/csi-node-driver-bsn66"
Sep 4 20:28:57.674057 kubelet[2535]: E0904 20:28:57.674002 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 4 20:28:57.678999 containerd[1468]: time="2024-09-04T20:28:57.678941828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-q88qq,Uid:000e2ec9-7c58-4c39-a0c6-dc41ecb69b1a,Namespace:calico-system,Attempt:0,}"
Sep 4 20:28:57.779330 containerd[1468]: time="2024-09-04T20:28:57.778852148Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 20:28:57.779330 containerd[1468]: time="2024-09-04T20:28:57.778943196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 20:28:57.779330 containerd[1468]: time="2024-09-04T20:28:57.778984262Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 20:28:57.779330 containerd[1468]: time="2024-09-04T20:28:57.779006133Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 20:28:57.809552 systemd[1]: Started cri-containerd-71f60ac97b5c6a245b9f0ac4c429219a25b7bedb187d703119bfd6aab1d5b280.scope - libcontainer container 71f60ac97b5c6a245b9f0ac4c429219a25b7bedb187d703119bfd6aab1d5b280.
Sep 4 20:28:57.833586 containerd[1468]: time="2024-09-04T20:28:57.833412637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-55bf7c7f47-nx79w,Uid:47913963-0021-43f4-a678-6cacd755f794,Namespace:calico-system,Attempt:0,} returns sandbox id \"5f8ef57d68b267d83fcb3ae4ac6991c48baa0df82e4499ab5ee36e813c00e7fd\""
Sep 4 20:28:57.835353 kubelet[2535]: E0904 20:28:57.835194 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 4 20:28:57.839607 containerd[1468]: time="2024-09-04T20:28:57.839552962Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\""
Sep 4 20:28:57.901024 containerd[1468]: time="2024-09-04T20:28:57.900963287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-q88qq,Uid:000e2ec9-7c58-4c39-a0c6-dc41ecb69b1a,Namespace:calico-system,Attempt:0,} returns sandbox id \"71f60ac97b5c6a245b9f0ac4c429219a25b7bedb187d703119bfd6aab1d5b280\""
Sep 4 20:28:57.903123 kubelet[2535]: E0904 20:28:57.903065 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 4 20:28:59.661325 kubelet[2535]: E0904 20:28:59.657206 2535 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bsn66" podUID="c943a141-800d-4fd7-b526-a302d60b317a"
Sep 4 20:29:00.326528 containerd[1468]: time="2024-09-04T20:29:00.326457103Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 20:29:00.327814 containerd[1468]: time="2024-09-04T20:29:00.327739237Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.1: active requests=0, bytes read=29471335"
Sep 4 20:29:00.329284 containerd[1468]: time="2024-09-04T20:29:00.328804044Z" level=info msg="ImageCreate event name:\"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 20:29:00.346092 containerd[1468]: time="2024-09-04T20:29:00.345946738Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 20:29:00.347242 containerd[1468]: time="2024-09-04T20:29:00.347167820Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.1\" with image id \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\", size \"30963728\" in 2.50756226s"
Sep 4 20:29:00.347242 containerd[1468]: time="2024-09-04T20:29:00.347213536Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\" returns image reference \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\""
Sep 4 20:29:00.353832 containerd[1468]: time="2024-09-04T20:29:00.351998543Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\""
Sep 4 20:29:00.389911 containerd[1468]: time="2024-09-04T20:29:00.389864822Z" level=info msg="CreateContainer within sandbox \"5f8ef57d68b267d83fcb3ae4ac6991c48baa0df82e4499ab5ee36e813c00e7fd\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Sep 4 20:29:00.421447 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2926016336.mount: Deactivated successfully.
Sep 4 20:29:00.425780 containerd[1468]: time="2024-09-04T20:29:00.425731777Z" level=info msg="CreateContainer within sandbox \"5f8ef57d68b267d83fcb3ae4ac6991c48baa0df82e4499ab5ee36e813c00e7fd\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"49275e88ad974c05446506f9c06e069b6212030a14f6c1946a0e7e02dd2039da\""
Sep 4 20:29:00.426868 containerd[1468]: time="2024-09-04T20:29:00.426821165Z" level=info msg="StartContainer for \"49275e88ad974c05446506f9c06e069b6212030a14f6c1946a0e7e02dd2039da\""
Sep 4 20:29:00.485779 systemd[1]: Started cri-containerd-49275e88ad974c05446506f9c06e069b6212030a14f6c1946a0e7e02dd2039da.scope - libcontainer container 49275e88ad974c05446506f9c06e069b6212030a14f6c1946a0e7e02dd2039da.
Sep 4 20:29:00.552084 containerd[1468]: time="2024-09-04T20:29:00.551944109Z" level=info msg="StartContainer for \"49275e88ad974c05446506f9c06e069b6212030a14f6c1946a0e7e02dd2039da\" returns successfully"
Sep 4 20:29:00.785938 kubelet[2535]: E0904 20:29:00.785797 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 4 20:29:01.656833 kubelet[2535]: E0904 20:29:01.656409 2535 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bsn66" podUID="c943a141-800d-4fd7-b526-a302d60b317a"
Sep 4 20:29:01.789388 kubelet[2535]: I0904 20:29:01.789326 2535 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 4 20:29:01.792843 kubelet[2535]: E0904 20:29:01.792687 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Error: unexpected end of JSON input" Sep 4 20:29:01.871993 kubelet[2535]: E0904 20:29:01.871740 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:29:01.871993 kubelet[2535]: W0904 20:29:01.871770 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:29:01.871993 kubelet[2535]: E0904 20:29:01.871801 2535 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 20:29:01.873147 kubelet[2535]: E0904 20:29:01.872753 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:29:01.873147 kubelet[2535]: W0904 20:29:01.872783 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:29:01.873147 kubelet[2535]: E0904 20:29:01.872807 2535 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 20:29:01.873765 kubelet[2535]: E0904 20:29:01.873546 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:29:01.873765 kubelet[2535]: W0904 20:29:01.873568 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:29:01.873765 kubelet[2535]: E0904 20:29:01.873599 2535 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 20:29:01.874594 kubelet[2535]: E0904 20:29:01.874185 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:29:01.874594 kubelet[2535]: W0904 20:29:01.874207 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:29:01.875051 kubelet[2535]: E0904 20:29:01.874269 2535 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 20:29:01.875217 kubelet[2535]: E0904 20:29:01.875200 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:29:01.875342 kubelet[2535]: W0904 20:29:01.875323 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:29:01.875543 kubelet[2535]: E0904 20:29:01.875426 2535 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 20:29:01.876279 kubelet[2535]: E0904 20:29:01.876130 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:29:01.876279 kubelet[2535]: W0904 20:29:01.876150 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:29:01.876279 kubelet[2535]: E0904 20:29:01.876174 2535 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 20:29:01.877325 kubelet[2535]: E0904 20:29:01.877040 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:29:01.877325 kubelet[2535]: W0904 20:29:01.877061 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:29:01.877325 kubelet[2535]: E0904 20:29:01.877079 2535 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 20:29:01.877834 kubelet[2535]: E0904 20:29:01.877700 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:29:01.877834 kubelet[2535]: W0904 20:29:01.877721 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:29:01.877834 kubelet[2535]: E0904 20:29:01.877741 2535 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 20:29:01.878664 kubelet[2535]: E0904 20:29:01.878449 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:29:01.878664 kubelet[2535]: W0904 20:29:01.878475 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:29:01.878664 kubelet[2535]: E0904 20:29:01.878498 2535 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 20:29:01.879047 kubelet[2535]: E0904 20:29:01.878974 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:29:01.879047 kubelet[2535]: W0904 20:29:01.878996 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:29:01.879331 kubelet[2535]: E0904 20:29:01.879014 2535 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 20:29:01.879630 kubelet[2535]: E0904 20:29:01.879591 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:29:01.879630 kubelet[2535]: W0904 20:29:01.879616 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:29:01.879734 kubelet[2535]: E0904 20:29:01.879635 2535 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 20:29:01.880029 kubelet[2535]: E0904 20:29:01.879998 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:29:01.880096 kubelet[2535]: W0904 20:29:01.880067 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:29:01.880096 kubelet[2535]: E0904 20:29:01.880086 2535 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 20:29:01.880983 kubelet[2535]: E0904 20:29:01.880956 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:29:01.880983 kubelet[2535]: W0904 20:29:01.880979 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:29:01.881140 kubelet[2535]: E0904 20:29:01.880998 2535 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 20:29:01.882031 kubelet[2535]: E0904 20:29:01.882007 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:29:01.882031 kubelet[2535]: W0904 20:29:01.882030 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:29:01.882187 kubelet[2535]: E0904 20:29:01.882056 2535 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 20:29:01.882921 kubelet[2535]: E0904 20:29:01.882480 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:29:01.882921 kubelet[2535]: W0904 20:29:01.882500 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:29:01.883188 kubelet[2535]: E0904 20:29:01.883063 2535 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 20:29:01.883588 kubelet[2535]: E0904 20:29:01.883557 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:29:01.883588 kubelet[2535]: W0904 20:29:01.883586 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:29:01.883995 kubelet[2535]: E0904 20:29:01.883776 2535 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 20:29:01.884428 kubelet[2535]: E0904 20:29:01.884240 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:29:01.884428 kubelet[2535]: W0904 20:29:01.884266 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:29:01.884428 kubelet[2535]: E0904 20:29:01.884401 2535 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 20:29:01.884994 kubelet[2535]: E0904 20:29:01.884789 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:29:01.884994 kubelet[2535]: W0904 20:29:01.884812 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:29:01.885807 kubelet[2535]: E0904 20:29:01.885748 2535 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 20:29:01.886423 kubelet[2535]: E0904 20:29:01.886127 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:29:01.886423 kubelet[2535]: W0904 20:29:01.886158 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:29:01.886698 kubelet[2535]: E0904 20:29:01.886597 2535 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 20:29:01.886871 kubelet[2535]: E0904 20:29:01.886794 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:29:01.886871 kubelet[2535]: W0904 20:29:01.886812 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:29:01.887661 kubelet[2535]: E0904 20:29:01.886993 2535 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 20:29:01.887661 kubelet[2535]: E0904 20:29:01.887572 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:29:01.888544 kubelet[2535]: W0904 20:29:01.887589 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:29:01.888788 kubelet[2535]: E0904 20:29:01.888731 2535 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 20:29:01.889249 kubelet[2535]: E0904 20:29:01.889012 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:29:01.889249 kubelet[2535]: W0904 20:29:01.889026 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:29:01.889485 kubelet[2535]: E0904 20:29:01.889392 2535 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 20:29:01.890713 kubelet[2535]: E0904 20:29:01.890552 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:29:01.890713 kubelet[2535]: W0904 20:29:01.890598 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:29:01.890713 kubelet[2535]: E0904 20:29:01.890646 2535 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 20:29:01.891961 kubelet[2535]: E0904 20:29:01.891491 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:29:01.891961 kubelet[2535]: W0904 20:29:01.891515 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:29:01.891961 kubelet[2535]: E0904 20:29:01.891672 2535 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 20:29:01.892907 kubelet[2535]: E0904 20:29:01.892764 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:29:01.892907 kubelet[2535]: W0904 20:29:01.892785 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:29:01.893058 kubelet[2535]: E0904 20:29:01.893011 2535 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 20:29:01.896519 kubelet[2535]: E0904 20:29:01.895773 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:29:01.896519 kubelet[2535]: W0904 20:29:01.895812 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:29:01.896519 kubelet[2535]: E0904 20:29:01.895879 2535 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 20:29:01.897971 kubelet[2535]: E0904 20:29:01.897775 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:29:01.897971 kubelet[2535]: W0904 20:29:01.897821 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:29:01.898169 kubelet[2535]: E0904 20:29:01.897982 2535 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 20:29:01.901128 kubelet[2535]: E0904 20:29:01.900722 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:29:01.901128 kubelet[2535]: W0904 20:29:01.900780 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:29:01.902637 kubelet[2535]: E0904 20:29:01.901982 2535 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 20:29:01.902637 kubelet[2535]: E0904 20:29:01.902458 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:29:01.902637 kubelet[2535]: W0904 20:29:01.902505 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:29:01.902916 kubelet[2535]: E0904 20:29:01.902674 2535 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 20:29:01.904286 kubelet[2535]: E0904 20:29:01.903822 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 20:29:01.904286 kubelet[2535]: W0904 20:29:01.903856 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 20:29:01.904286 kubelet[2535]: E0904 20:29:01.903885 2535 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 20:29:01.974574 containerd[1468]: time="2024-09-04T20:29:01.972999493Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 20:29:01.974574 containerd[1468]: time="2024-09-04T20:29:01.973971806Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1: active requests=0, bytes read=5141007" Sep 4 20:29:01.974574 containerd[1468]: time="2024-09-04T20:29:01.974346350Z" level=info msg="ImageCreate event name:\"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 20:29:01.979734 containerd[1468]: time="2024-09-04T20:29:01.977659709Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 20:29:01.982263 containerd[1468]: time="2024-09-04T20:29:01.979111682Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" with image id \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\", size \"6633368\" in 1.6270591s" Sep 4 20:29:01.982263 containerd[1468]: time="2024-09-04T20:29:01.981496765Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" returns image reference \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\"" Sep 4 20:29:01.991134 containerd[1468]: time="2024-09-04T20:29:01.991061640Z" level=info msg="CreateContainer within sandbox \"71f60ac97b5c6a245b9f0ac4c429219a25b7bedb187d703119bfd6aab1d5b280\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 4 20:29:02.060904 containerd[1468]: time="2024-09-04T20:29:02.060805653Z" level=info msg="CreateContainer within sandbox \"71f60ac97b5c6a245b9f0ac4c429219a25b7bedb187d703119bfd6aab1d5b280\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a92de0ac598c073227847e2a8f4b429c06b73eebbdb5f8184ca47ad4b635a10c\"" Sep 4 20:29:02.061959 containerd[1468]: time="2024-09-04T20:29:02.061846663Z" level=info msg="StartContainer for \"a92de0ac598c073227847e2a8f4b429c06b73eebbdb5f8184ca47ad4b635a10c\"" Sep 4 20:29:02.120976 systemd[1]: Started cri-containerd-a92de0ac598c073227847e2a8f4b429c06b73eebbdb5f8184ca47ad4b635a10c.scope - libcontainer container a92de0ac598c073227847e2a8f4b429c06b73eebbdb5f8184ca47ad4b635a10c. Sep 4 20:29:02.178195 containerd[1468]: time="2024-09-04T20:29:02.178024068Z" level=info msg="StartContainer for \"a92de0ac598c073227847e2a8f4b429c06b73eebbdb5f8184ca47ad4b635a10c\" returns successfully" Sep 4 20:29:02.201560 systemd[1]: cri-containerd-a92de0ac598c073227847e2a8f4b429c06b73eebbdb5f8184ca47ad4b635a10c.scope: Deactivated successfully. 
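The repeated driver-call failures above come from the kubelet periodically probing the FlexVolume plugin directory before Calico's flexvol-driver init container (created and started in the entries just above) has installed the nodeagent~uds/uds binary: each probe execs the driver with the single argument init and tries to unmarshal its stdout as JSON, so a missing executable produces both the "executable file not found in $PATH" warning and the "unexpected end of JSON input" error. A minimal sketch of the handshake the kubelet expects, assuming the usual FlexVolume convention of a JSON status object on stdout (an illustration, not Calico's actual uds driver):

    // flexdriver.go - minimal FlexVolume-style driver stub: answers "init"
    // with the JSON status object the kubelet's driver-call path unmarshals.
    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    type driverStatus struct {
        Status       string          `json:"status"` // "Success", "Failure" or "Not supported"
        Message      string          `json:"message,omitempty"`
        Capabilities map[string]bool `json:"capabilities,omitempty"` // e.g. {"attach": false}
    }

    func main() {
        if len(os.Args) < 2 {
            os.Exit(1)
        }
        switch os.Args[1] {
        case "init":
            // Non-empty JSON on stdout is what avoids the
            // "unexpected end of JSON input" unmarshal error seen above.
            out, _ := json.Marshal(driverStatus{
                Status:       "Success",
                Capabilities: map[string]bool{"attach": false},
            })
            fmt.Println(string(out))
        default:
            out, _ := json.Marshal(driverStatus{Status: "Not supported"})
            fmt.Println(string(out))
        }
    }

Once the real driver binary is in place under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/, the periodic probe finds it and these messages stop.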
Sep 4 20:29:02.262113 containerd[1468]: time="2024-09-04T20:29:02.262018142Z" level=info msg="shim disconnected" id=a92de0ac598c073227847e2a8f4b429c06b73eebbdb5f8184ca47ad4b635a10c namespace=k8s.io Sep 4 20:29:02.262817 containerd[1468]: time="2024-09-04T20:29:02.262520037Z" level=warning msg="cleaning up after shim disconnected" id=a92de0ac598c073227847e2a8f4b429c06b73eebbdb5f8184ca47ad4b635a10c namespace=k8s.io Sep 4 20:29:02.262817 containerd[1468]: time="2024-09-04T20:29:02.262553340Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 20:29:02.374396 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a92de0ac598c073227847e2a8f4b429c06b73eebbdb5f8184ca47ad4b635a10c-rootfs.mount: Deactivated successfully. Sep 4 20:29:02.795146 kubelet[2535]: E0904 20:29:02.795092 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 4 20:29:02.797026 containerd[1468]: time="2024-09-04T20:29:02.796973413Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\"" Sep 4 20:29:02.828764 kubelet[2535]: I0904 20:29:02.828681 2535 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-55bf7c7f47-nx79w" podStartSLOduration=3.315787058 podStartE2EDuration="5.828656765s" podCreationTimestamp="2024-09-04 20:28:57 +0000 UTC" firstStartedPulling="2024-09-04 20:28:57.83833899 +0000 UTC m=+22.308907782" lastFinishedPulling="2024-09-04 20:29:00.351208688 +0000 UTC m=+24.821777489" observedRunningTime="2024-09-04 20:29:00.814642314 +0000 UTC m=+25.285211124" watchObservedRunningTime="2024-09-04 20:29:02.828656765 +0000 UTC m=+27.299225602" Sep 4 20:29:03.656977 kubelet[2535]: E0904 20:29:03.656566 2535 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bsn66" podUID="c943a141-800d-4fd7-b526-a302d60b317a" Sep 4 20:29:05.658289 kubelet[2535]: E0904 20:29:05.657210 2535 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bsn66" podUID="c943a141-800d-4fd7-b526-a302d60b317a" Sep 4 20:29:07.655860 kubelet[2535]: E0904 20:29:07.655780 2535 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bsn66" podUID="c943a141-800d-4fd7-b526-a302d60b317a" Sep 4 20:29:07.925981 containerd[1468]: time="2024-09-04T20:29:07.925818483Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 20:29:07.927830 containerd[1468]: time="2024-09-04T20:29:07.927194603Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.1: active requests=0, bytes read=93083736" Sep 4 20:29:07.927830 containerd[1468]: time="2024-09-04T20:29:07.927369666Z" level=info msg="ImageCreate event name:\"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" 
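The recurring "network is not ready ... cni plugin not initialized" events for csi-node-driver-bsn66 share a root cause with the sandbox errors later in this log: no CNI network configuration has been installed yet, so the runtime keeps reporting NetworkReady=false while the calico/cni image is still being pulled. A rough sketch of the kind of check behind that condition, assuming the conventional /etc/cni/net.d confdir (an assumption; this is not the containerd code path):

    // cnicheck.go - report whether any CNI network config is present,
    // which is roughly the condition behind NetworkReady=false above.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        confDir := "/etc/cni/net.d" // usual default confdir (assumption)
        var matches []string
        for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
            m, _ := filepath.Glob(filepath.Join(confDir, pat))
            matches = append(matches, m...)
        }
        if len(matches) == 0 {
            fmt.Println("no CNI config found: NetworkReady would stay false")
            os.Exit(1)
        }
        fmt.Println("CNI config present:", matches)
    }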
Sep 4 20:29:07.930591 containerd[1468]: time="2024-09-04T20:29:07.930530547Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 20:29:07.932116 containerd[1468]: time="2024-09-04T20:29:07.932052939Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.1\" with image id \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\", size \"94576137\" in 5.135030943s" Sep 4 20:29:07.932116 containerd[1468]: time="2024-09-04T20:29:07.932111344Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\" returns image reference \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\"" Sep 4 20:29:07.936698 containerd[1468]: time="2024-09-04T20:29:07.936641939Z" level=info msg="CreateContainer within sandbox \"71f60ac97b5c6a245b9f0ac4c429219a25b7bedb187d703119bfd6aab1d5b280\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 4 20:29:07.955460 containerd[1468]: time="2024-09-04T20:29:07.955371649Z" level=info msg="CreateContainer within sandbox \"71f60ac97b5c6a245b9f0ac4c429219a25b7bedb187d703119bfd6aab1d5b280\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"621a1c4a6a392d4aba5d2b45c8f55f6f5d7cf4eb3f1abb46bbab3c713077da32\"" Sep 4 20:29:07.958271 containerd[1468]: time="2024-09-04T20:29:07.957630962Z" level=info msg="StartContainer for \"621a1c4a6a392d4aba5d2b45c8f55f6f5d7cf4eb3f1abb46bbab3c713077da32\"" Sep 4 20:29:08.063294 systemd[1]: run-containerd-runc-k8s.io-621a1c4a6a392d4aba5d2b45c8f55f6f5d7cf4eb3f1abb46bbab3c713077da32-runc.1eVl8m.mount: Deactivated successfully. Sep 4 20:29:08.076447 systemd[1]: Started cri-containerd-621a1c4a6a392d4aba5d2b45c8f55f6f5d7cf4eb3f1abb46bbab3c713077da32.scope - libcontainer container 621a1c4a6a392d4aba5d2b45c8f55f6f5d7cf4eb3f1abb46bbab3c713077da32. Sep 4 20:29:08.120059 containerd[1468]: time="2024-09-04T20:29:08.120007531Z" level=info msg="StartContainer for \"621a1c4a6a392d4aba5d2b45c8f55f6f5d7cf4eb3f1abb46bbab3c713077da32\" returns successfully" Sep 4 20:29:08.720052 systemd[1]: cri-containerd-621a1c4a6a392d4aba5d2b45c8f55f6f5d7cf4eb3f1abb46bbab3c713077da32.scope: Deactivated successfully. Sep 4 20:29:08.775302 kubelet[2535]: I0904 20:29:08.771190 2535 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Sep 4 20:29:08.772549 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-621a1c4a6a392d4aba5d2b45c8f55f6f5d7cf4eb3f1abb46bbab3c713077da32-rootfs.mount: Deactivated successfully. 
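The install-cni container started above runs to completion (hence the scope deactivation and rootfs unmount that follow); its job is to copy the CNI plugin binaries and drop a network configuration into the confdir so the runtime can initialize pod networking. The sketch below prints an approximate Calico conflist just to show the shape of that file; the name, cniVersion and plugin fields are assumptions based on Calico defaults, not taken from this log:

    // conflist.go - print an approximate Calico CNI conflist, the kind of
    // file install-cni writes so the runtime can flip NetworkReady to true.
    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        conflist := map[string]interface{}{
            "name":       "k8s-pod-network",
            "cniVersion": "0.3.1",
            "plugins": []map[string]interface{}{
                {
                    "type":           "calico",
                    "datastore_type": "kubernetes",
                    "ipam":           map[string]string{"type": "calico-ipam"},
                    "policy":         map[string]string{"type": "k8s"},
                    "kubernetes":     map[string]string{"kubeconfig": "/etc/cni/net.d/calico-kubeconfig"},
                },
                {
                    "type":         "portmap",
                    "snat":         true,
                    "capabilities": map[string]bool{"portMappings": true},
                },
            },
        }
        out, _ := json.MarshalIndent(conflist, "", "  ")
        fmt.Println(string(out))
    }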
Sep 4 20:29:08.780050 containerd[1468]: time="2024-09-04T20:29:08.779735073Z" level=info msg="shim disconnected" id=621a1c4a6a392d4aba5d2b45c8f55f6f5d7cf4eb3f1abb46bbab3c713077da32 namespace=k8s.io Sep 4 20:29:08.780050 containerd[1468]: time="2024-09-04T20:29:08.779810408Z" level=warning msg="cleaning up after shim disconnected" id=621a1c4a6a392d4aba5d2b45c8f55f6f5d7cf4eb3f1abb46bbab3c713077da32 namespace=k8s.io Sep 4 20:29:08.780050 containerd[1468]: time="2024-09-04T20:29:08.779820225Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 20:29:08.819798 kubelet[2535]: I0904 20:29:08.819743 2535 topology_manager.go:215] "Topology Admit Handler" podUID="b99452b1-3cd1-4b03-b372-8f4d1888f969" podNamespace="kube-system" podName="coredns-7db6d8ff4d-lw7kj" Sep 4 20:29:08.823557 kubelet[2535]: E0904 20:29:08.822262 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 4 20:29:08.826659 kubelet[2535]: I0904 20:29:08.826600 2535 topology_manager.go:215] "Topology Admit Handler" podUID="d9d91078-829f-4383-9939-2801dd36abf2" podNamespace="calico-system" podName="calico-kube-controllers-68ddb4b88b-9zbcx" Sep 4 20:29:08.830918 containerd[1468]: time="2024-09-04T20:29:08.829981524Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\"" Sep 4 20:29:08.845777 kubelet[2535]: I0904 20:29:08.843727 2535 topology_manager.go:215] "Topology Admit Handler" podUID="cf38c19c-6a60-4a36-8a3a-1b512f9dcdfb" podNamespace="kube-system" podName="coredns-7db6d8ff4d-kfqv7" Sep 4 20:29:08.866605 systemd[1]: Created slice kubepods-burstable-podb99452b1_3cd1_4b03_b372_8f4d1888f969.slice - libcontainer container kubepods-burstable-podb99452b1_3cd1_4b03_b372_8f4d1888f969.slice. Sep 4 20:29:08.891801 systemd[1]: Created slice kubepods-besteffort-podd9d91078_829f_4383_9939_2801dd36abf2.slice - libcontainer container kubepods-besteffort-podd9d91078_829f_4383_9939_2801dd36abf2.slice. Sep 4 20:29:08.908531 systemd[1]: Created slice kubepods-burstable-podcf38c19c_6a60_4a36_8a3a_1b512f9dcdfb.slice - libcontainer container kubepods-burstable-podcf38c19c_6a60_4a36_8a3a_1b512f9dcdfb.slice. 
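The systemd slice names created here follow the kubelet's systemd cgroup-driver convention visible in the entries themselves: the pod's QoS class selects the parent (kubepods-burstable or kubepods-besteffort) and the pod UID is embedded with its dashes replaced by underscores, which is how podUID b99452b1-3cd1-4b03-b372-8f4d1888f969 becomes kubepods-burstable-podb99452b1_3cd1_4b03_b372_8f4d1888f969.slice. A small sketch of that mapping, derived from the naming shown above rather than from a kubelet API:

    // podslice.go - derive the systemd slice name for a pod the way the
    // entries above show it: QoS parent + "pod" + UID with "-" -> "_".
    package main

    import (
        "fmt"
        "strings"
    )

    func podSliceName(qos, uid string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
    }

    func main() {
        fmt.Println(podSliceName("burstable", "b99452b1-3cd1-4b03-b372-8f4d1888f969"))
        fmt.Println(podSliceName("besteffort", "d9d91078-829f-4383-9939-2801dd36abf2"))
        // kubepods-burstable-podb99452b1_3cd1_4b03_b372_8f4d1888f969.slice
        // kubepods-besteffort-podd9d91078_829f_4383_9939_2801dd36abf2.slice
    }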
Sep 4 20:29:08.944269 kubelet[2535]: I0904 20:29:08.943874 2535 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cf38c19c-6a60-4a36-8a3a-1b512f9dcdfb-config-volume\") pod \"coredns-7db6d8ff4d-kfqv7\" (UID: \"cf38c19c-6a60-4a36-8a3a-1b512f9dcdfb\") " pod="kube-system/coredns-7db6d8ff4d-kfqv7" Sep 4 20:29:08.944269 kubelet[2535]: I0904 20:29:08.943921 2535 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d9d91078-829f-4383-9939-2801dd36abf2-tigera-ca-bundle\") pod \"calico-kube-controllers-68ddb4b88b-9zbcx\" (UID: \"d9d91078-829f-4383-9939-2801dd36abf2\") " pod="calico-system/calico-kube-controllers-68ddb4b88b-9zbcx" Sep 4 20:29:08.944269 kubelet[2535]: I0904 20:29:08.943960 2535 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkh54\" (UniqueName: \"kubernetes.io/projected/b99452b1-3cd1-4b03-b372-8f4d1888f969-kube-api-access-vkh54\") pod \"coredns-7db6d8ff4d-lw7kj\" (UID: \"b99452b1-3cd1-4b03-b372-8f4d1888f969\") " pod="kube-system/coredns-7db6d8ff4d-lw7kj" Sep 4 20:29:08.944269 kubelet[2535]: I0904 20:29:08.943976 2535 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b99452b1-3cd1-4b03-b372-8f4d1888f969-config-volume\") pod \"coredns-7db6d8ff4d-lw7kj\" (UID: \"b99452b1-3cd1-4b03-b372-8f4d1888f969\") " pod="kube-system/coredns-7db6d8ff4d-lw7kj" Sep 4 20:29:08.944269 kubelet[2535]: I0904 20:29:08.943995 2535 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzv6j\" (UniqueName: \"kubernetes.io/projected/d9d91078-829f-4383-9939-2801dd36abf2-kube-api-access-zzv6j\") pod \"calico-kube-controllers-68ddb4b88b-9zbcx\" (UID: \"d9d91078-829f-4383-9939-2801dd36abf2\") " pod="calico-system/calico-kube-controllers-68ddb4b88b-9zbcx" Sep 4 20:29:08.944803 kubelet[2535]: I0904 20:29:08.944012 2535 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkg82\" (UniqueName: \"kubernetes.io/projected/cf38c19c-6a60-4a36-8a3a-1b512f9dcdfb-kube-api-access-fkg82\") pod \"coredns-7db6d8ff4d-kfqv7\" (UID: \"cf38c19c-6a60-4a36-8a3a-1b512f9dcdfb\") " pod="kube-system/coredns-7db6d8ff4d-kfqv7" Sep 4 20:29:09.183396 kubelet[2535]: E0904 20:29:09.182640 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 4 20:29:09.184684 containerd[1468]: time="2024-09-04T20:29:09.183972762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lw7kj,Uid:b99452b1-3cd1-4b03-b372-8f4d1888f969,Namespace:kube-system,Attempt:0,}" Sep 4 20:29:09.204118 containerd[1468]: time="2024-09-04T20:29:09.204061063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68ddb4b88b-9zbcx,Uid:d9d91078-829f-4383-9939-2801dd36abf2,Namespace:calico-system,Attempt:0,}" Sep 4 20:29:09.217254 kubelet[2535]: E0904 20:29:09.214784 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 4 20:29:09.227275 containerd[1468]: 
time="2024-09-04T20:29:09.225950441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-kfqv7,Uid:cf38c19c-6a60-4a36-8a3a-1b512f9dcdfb,Namespace:kube-system,Attempt:0,}" Sep 4 20:29:09.506471 containerd[1468]: time="2024-09-04T20:29:09.506381067Z" level=error msg="Failed to destroy network for sandbox \"85bc2484df33423ca10209e10cfda548e62b4c2cf1825fff7e60e36b7c284d3f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 20:29:09.506835 containerd[1468]: time="2024-09-04T20:29:09.506799478Z" level=error msg="Failed to destroy network for sandbox \"0e880ca8912c3282bfb832b03f271edce6beab6749ee5279266d364e30b0db7c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 20:29:09.530539 containerd[1468]: time="2024-09-04T20:29:09.530440239Z" level=error msg="encountered an error cleaning up failed sandbox \"85bc2484df33423ca10209e10cfda548e62b4c2cf1825fff7e60e36b7c284d3f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 20:29:09.530871 containerd[1468]: time="2024-09-04T20:29:09.530828573Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-kfqv7,Uid:cf38c19c-6a60-4a36-8a3a-1b512f9dcdfb,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"85bc2484df33423ca10209e10cfda548e62b4c2cf1825fff7e60e36b7c284d3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 20:29:09.531092 containerd[1468]: time="2024-09-04T20:29:09.530940192Z" level=error msg="encountered an error cleaning up failed sandbox \"0e880ca8912c3282bfb832b03f271edce6beab6749ee5279266d364e30b0db7c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 20:29:09.531273 containerd[1468]: time="2024-09-04T20:29:09.531240075Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lw7kj,Uid:b99452b1-3cd1-4b03-b372-8f4d1888f969,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0e880ca8912c3282bfb832b03f271edce6beab6749ee5279266d364e30b0db7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 20:29:09.531471 containerd[1468]: time="2024-09-04T20:29:09.530557752Z" level=error msg="Failed to destroy network for sandbox \"d0f5dac683fe78e0ce0bca7ac04d047dc29a89cb2b5a06a87820f915c48d5f43\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 20:29:09.532269 kubelet[2535]: E0904 20:29:09.531838 2535 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"85bc2484df33423ca10209e10cfda548e62b4c2cf1825fff7e60e36b7c284d3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 20:29:09.532269 kubelet[2535]: E0904 20:29:09.531968 2535 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85bc2484df33423ca10209e10cfda548e62b4c2cf1825fff7e60e36b7c284d3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-kfqv7" Sep 4 20:29:09.532269 kubelet[2535]: E0904 20:29:09.532007 2535 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85bc2484df33423ca10209e10cfda548e62b4c2cf1825fff7e60e36b7c284d3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-kfqv7" Sep 4 20:29:09.533640 kubelet[2535]: E0904 20:29:09.532077 2535 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-kfqv7_kube-system(cf38c19c-6a60-4a36-8a3a-1b512f9dcdfb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-kfqv7_kube-system(cf38c19c-6a60-4a36-8a3a-1b512f9dcdfb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"85bc2484df33423ca10209e10cfda548e62b4c2cf1825fff7e60e36b7c284d3f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-kfqv7" podUID="cf38c19c-6a60-4a36-8a3a-1b512f9dcdfb" Sep 4 20:29:09.533640 kubelet[2535]: E0904 20:29:09.531844 2535 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e880ca8912c3282bfb832b03f271edce6beab6749ee5279266d364e30b0db7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 20:29:09.533640 kubelet[2535]: E0904 20:29:09.532460 2535 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e880ca8912c3282bfb832b03f271edce6beab6749ee5279266d364e30b0db7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-lw7kj" Sep 4 20:29:09.533860 containerd[1468]: time="2024-09-04T20:29:09.532711926Z" level=error msg="encountered an error cleaning up failed sandbox \"d0f5dac683fe78e0ce0bca7ac04d047dc29a89cb2b5a06a87820f915c48d5f43\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 20:29:09.533860 containerd[1468]: time="2024-09-04T20:29:09.532790522Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-68ddb4b88b-9zbcx,Uid:d9d91078-829f-4383-9939-2801dd36abf2,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d0f5dac683fe78e0ce0bca7ac04d047dc29a89cb2b5a06a87820f915c48d5f43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 20:29:09.534011 kubelet[2535]: E0904 20:29:09.532500 2535 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e880ca8912c3282bfb832b03f271edce6beab6749ee5279266d364e30b0db7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-lw7kj" Sep 4 20:29:09.534011 kubelet[2535]: E0904 20:29:09.532703 2535 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-lw7kj_kube-system(b99452b1-3cd1-4b03-b372-8f4d1888f969)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-lw7kj_kube-system(b99452b1-3cd1-4b03-b372-8f4d1888f969)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0e880ca8912c3282bfb832b03f271edce6beab6749ee5279266d364e30b0db7c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-lw7kj" podUID="b99452b1-3cd1-4b03-b372-8f4d1888f969" Sep 4 20:29:09.534011 kubelet[2535]: E0904 20:29:09.533325 2535 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0f5dac683fe78e0ce0bca7ac04d047dc29a89cb2b5a06a87820f915c48d5f43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 20:29:09.534178 kubelet[2535]: E0904 20:29:09.533542 2535 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0f5dac683fe78e0ce0bca7ac04d047dc29a89cb2b5a06a87820f915c48d5f43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68ddb4b88b-9zbcx" Sep 4 20:29:09.536573 kubelet[2535]: E0904 20:29:09.533578 2535 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0f5dac683fe78e0ce0bca7ac04d047dc29a89cb2b5a06a87820f915c48d5f43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68ddb4b88b-9zbcx" Sep 4 20:29:09.536573 kubelet[2535]: E0904 20:29:09.536491 2535 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-68ddb4b88b-9zbcx_calico-system(d9d91078-829f-4383-9939-2801dd36abf2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-68ddb4b88b-9zbcx_calico-system(d9d91078-829f-4383-9939-2801dd36abf2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d0f5dac683fe78e0ce0bca7ac04d047dc29a89cb2b5a06a87820f915c48d5f43\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-68ddb4b88b-9zbcx" podUID="d9d91078-829f-4383-9939-2801dd36abf2" Sep 4 20:29:09.664034 systemd[1]: Created slice kubepods-besteffort-podc943a141_800d_4fd7_b526_a302d60b317a.slice - libcontainer container kubepods-besteffort-podc943a141_800d_4fd7_b526_a302d60b317a.slice. Sep 4 20:29:09.667495 containerd[1468]: time="2024-09-04T20:29:09.667405427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bsn66,Uid:c943a141-800d-4fd7-b526-a302d60b317a,Namespace:calico-system,Attempt:0,}" Sep 4 20:29:09.745202 containerd[1468]: time="2024-09-04T20:29:09.745092178Z" level=error msg="Failed to destroy network for sandbox \"5e9fc370fba866880d30ab3cbc0dd85fc160f9693431c6ea8da9398464435ae4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 20:29:09.745614 containerd[1468]: time="2024-09-04T20:29:09.745558585Z" level=error msg="encountered an error cleaning up failed sandbox \"5e9fc370fba866880d30ab3cbc0dd85fc160f9693431c6ea8da9398464435ae4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 20:29:09.745728 containerd[1468]: time="2024-09-04T20:29:09.745635751Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bsn66,Uid:c943a141-800d-4fd7-b526-a302d60b317a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5e9fc370fba866880d30ab3cbc0dd85fc160f9693431c6ea8da9398464435ae4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 20:29:09.746046 kubelet[2535]: E0904 20:29:09.745995 2535 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e9fc370fba866880d30ab3cbc0dd85fc160f9693431c6ea8da9398464435ae4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 20:29:09.746370 kubelet[2535]: E0904 20:29:09.746187 2535 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e9fc370fba866880d30ab3cbc0dd85fc160f9693431c6ea8da9398464435ae4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bsn66" Sep 4 20:29:09.746850 kubelet[2535]: E0904 20:29:09.746473 2535 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"5e9fc370fba866880d30ab3cbc0dd85fc160f9693431c6ea8da9398464435ae4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bsn66" Sep 4 20:29:09.746850 kubelet[2535]: E0904 20:29:09.746564 2535 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-bsn66_calico-system(c943a141-800d-4fd7-b526-a302d60b317a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-bsn66_calico-system(c943a141-800d-4fd7-b526-a302d60b317a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5e9fc370fba866880d30ab3cbc0dd85fc160f9693431c6ea8da9398464435ae4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bsn66" podUID="c943a141-800d-4fd7-b526-a302d60b317a" Sep 4 20:29:09.827165 kubelet[2535]: I0904 20:29:09.827035 2535 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0f5dac683fe78e0ce0bca7ac04d047dc29a89cb2b5a06a87820f915c48d5f43" Sep 4 20:29:09.832377 kubelet[2535]: I0904 20:29:09.831778 2535 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85bc2484df33423ca10209e10cfda548e62b4c2cf1825fff7e60e36b7c284d3f" Sep 4 20:29:09.838092 containerd[1468]: time="2024-09-04T20:29:09.835460908Z" level=info msg="StopPodSandbox for \"85bc2484df33423ca10209e10cfda548e62b4c2cf1825fff7e60e36b7c284d3f\"" Sep 4 20:29:09.838092 containerd[1468]: time="2024-09-04T20:29:09.835900887Z" level=info msg="Ensure that sandbox 85bc2484df33423ca10209e10cfda548e62b4c2cf1825fff7e60e36b7c284d3f in task-service has been cleanup successfully" Sep 4 20:29:09.838500 containerd[1468]: time="2024-09-04T20:29:09.838420031Z" level=info msg="StopPodSandbox for \"d0f5dac683fe78e0ce0bca7ac04d047dc29a89cb2b5a06a87820f915c48d5f43\"" Sep 4 20:29:09.839390 containerd[1468]: time="2024-09-04T20:29:09.839349464Z" level=info msg="Ensure that sandbox d0f5dac683fe78e0ce0bca7ac04d047dc29a89cb2b5a06a87820f915c48d5f43 in task-service has been cleanup successfully" Sep 4 20:29:09.842013 kubelet[2535]: I0904 20:29:09.841416 2535 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e880ca8912c3282bfb832b03f271edce6beab6749ee5279266d364e30b0db7c" Sep 4 20:29:09.842756 containerd[1468]: time="2024-09-04T20:29:09.842710628Z" level=info msg="StopPodSandbox for \"0e880ca8912c3282bfb832b03f271edce6beab6749ee5279266d364e30b0db7c\"" Sep 4 20:29:09.843113 containerd[1468]: time="2024-09-04T20:29:09.843075475Z" level=info msg="Ensure that sandbox 0e880ca8912c3282bfb832b03f271edce6beab6749ee5279266d364e30b0db7c in task-service has been cleanup successfully" Sep 4 20:29:09.847438 kubelet[2535]: I0904 20:29:09.847392 2535 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e9fc370fba866880d30ab3cbc0dd85fc160f9693431c6ea8da9398464435ae4" Sep 4 20:29:09.848076 containerd[1468]: time="2024-09-04T20:29:09.848019590Z" level=info msg="StopPodSandbox for \"5e9fc370fba866880d30ab3cbc0dd85fc160f9693431c6ea8da9398464435ae4\"" Sep 4 20:29:09.848368 containerd[1468]: time="2024-09-04T20:29:09.848333925Z" level=info msg="Ensure that sandbox 5e9fc370fba866880d30ab3cbc0dd85fc160f9693431c6ea8da9398464435ae4 in task-service has been 
cleanup successfully" Sep 4 20:29:09.924725 containerd[1468]: time="2024-09-04T20:29:09.924625043Z" level=error msg="StopPodSandbox for \"85bc2484df33423ca10209e10cfda548e62b4c2cf1825fff7e60e36b7c284d3f\" failed" error="failed to destroy network for sandbox \"85bc2484df33423ca10209e10cfda548e62b4c2cf1825fff7e60e36b7c284d3f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 20:29:09.968103 kubelet[2535]: E0904 20:29:09.967814 2535 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"85bc2484df33423ca10209e10cfda548e62b4c2cf1825fff7e60e36b7c284d3f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="85bc2484df33423ca10209e10cfda548e62b4c2cf1825fff7e60e36b7c284d3f" Sep 4 20:29:09.968103 kubelet[2535]: E0904 20:29:09.967912 2535 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"85bc2484df33423ca10209e10cfda548e62b4c2cf1825fff7e60e36b7c284d3f"} Sep 4 20:29:09.968103 kubelet[2535]: E0904 20:29:09.967998 2535 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cf38c19c-6a60-4a36-8a3a-1b512f9dcdfb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"85bc2484df33423ca10209e10cfda548e62b4c2cf1825fff7e60e36b7c284d3f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 20:29:09.968103 kubelet[2535]: E0904 20:29:09.968033 2535 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cf38c19c-6a60-4a36-8a3a-1b512f9dcdfb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"85bc2484df33423ca10209e10cfda548e62b4c2cf1825fff7e60e36b7c284d3f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-kfqv7" podUID="cf38c19c-6a60-4a36-8a3a-1b512f9dcdfb" Sep 4 20:29:09.980563 containerd[1468]: time="2024-09-04T20:29:09.980374072Z" level=error msg="StopPodSandbox for \"0e880ca8912c3282bfb832b03f271edce6beab6749ee5279266d364e30b0db7c\" failed" error="failed to destroy network for sandbox \"0e880ca8912c3282bfb832b03f271edce6beab6749ee5279266d364e30b0db7c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 20:29:09.980959 kubelet[2535]: E0904 20:29:09.980731 2535 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0e880ca8912c3282bfb832b03f271edce6beab6749ee5279266d364e30b0db7c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0e880ca8912c3282bfb832b03f271edce6beab6749ee5279266d364e30b0db7c" Sep 4 20:29:09.980959 kubelet[2535]: E0904 
20:29:09.980807 2535 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0e880ca8912c3282bfb832b03f271edce6beab6749ee5279266d364e30b0db7c"} Sep 4 20:29:09.980959 kubelet[2535]: E0904 20:29:09.980858 2535 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b99452b1-3cd1-4b03-b372-8f4d1888f969\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0e880ca8912c3282bfb832b03f271edce6beab6749ee5279266d364e30b0db7c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 20:29:09.980959 kubelet[2535]: E0904 20:29:09.980891 2535 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b99452b1-3cd1-4b03-b372-8f4d1888f969\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0e880ca8912c3282bfb832b03f271edce6beab6749ee5279266d364e30b0db7c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-lw7kj" podUID="b99452b1-3cd1-4b03-b372-8f4d1888f969" Sep 4 20:29:09.984980 containerd[1468]: time="2024-09-04T20:29:09.984772674Z" level=error msg="StopPodSandbox for \"5e9fc370fba866880d30ab3cbc0dd85fc160f9693431c6ea8da9398464435ae4\" failed" error="failed to destroy network for sandbox \"5e9fc370fba866880d30ab3cbc0dd85fc160f9693431c6ea8da9398464435ae4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 20:29:09.985137 containerd[1468]: time="2024-09-04T20:29:09.985060276Z" level=error msg="StopPodSandbox for \"d0f5dac683fe78e0ce0bca7ac04d047dc29a89cb2b5a06a87820f915c48d5f43\" failed" error="failed to destroy network for sandbox \"d0f5dac683fe78e0ce0bca7ac04d047dc29a89cb2b5a06a87820f915c48d5f43\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 20:29:09.985584 kubelet[2535]: E0904 20:29:09.985390 2535 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d0f5dac683fe78e0ce0bca7ac04d047dc29a89cb2b5a06a87820f915c48d5f43\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d0f5dac683fe78e0ce0bca7ac04d047dc29a89cb2b5a06a87820f915c48d5f43" Sep 4 20:29:09.985584 kubelet[2535]: E0904 20:29:09.985479 2535 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d0f5dac683fe78e0ce0bca7ac04d047dc29a89cb2b5a06a87820f915c48d5f43"} Sep 4 20:29:09.985584 kubelet[2535]: E0904 20:29:09.985519 2535 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d9d91078-829f-4383-9939-2801dd36abf2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d0f5dac683fe78e0ce0bca7ac04d047dc29a89cb2b5a06a87820f915c48d5f43\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 20:29:09.985584 kubelet[2535]: E0904 20:29:09.985542 2535 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d9d91078-829f-4383-9939-2801dd36abf2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d0f5dac683fe78e0ce0bca7ac04d047dc29a89cb2b5a06a87820f915c48d5f43\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-68ddb4b88b-9zbcx" podUID="d9d91078-829f-4383-9939-2801dd36abf2" Sep 4 20:29:09.985870 kubelet[2535]: E0904 20:29:09.985415 2535 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5e9fc370fba866880d30ab3cbc0dd85fc160f9693431c6ea8da9398464435ae4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5e9fc370fba866880d30ab3cbc0dd85fc160f9693431c6ea8da9398464435ae4" Sep 4 20:29:09.985870 kubelet[2535]: E0904 20:29:09.985609 2535 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5e9fc370fba866880d30ab3cbc0dd85fc160f9693431c6ea8da9398464435ae4"} Sep 4 20:29:09.985870 kubelet[2535]: E0904 20:29:09.985656 2535 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c943a141-800d-4fd7-b526-a302d60b317a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5e9fc370fba866880d30ab3cbc0dd85fc160f9693431c6ea8da9398464435ae4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 20:29:09.985870 kubelet[2535]: E0904 20:29:09.985690 2535 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c943a141-800d-4fd7-b526-a302d60b317a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5e9fc370fba866880d30ab3cbc0dd85fc160f9693431c6ea8da9398464435ae4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bsn66" podUID="c943a141-800d-4fd7-b526-a302d60b317a" Sep 4 20:29:10.065275 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d0f5dac683fe78e0ce0bca7ac04d047dc29a89cb2b5a06a87820f915c48d5f43-shm.mount: Deactivated successfully. Sep 4 20:29:10.065442 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-85bc2484df33423ca10209e10cfda548e62b4c2cf1825fff7e60e36b7c284d3f-shm.mount: Deactivated successfully. Sep 4 20:29:10.065532 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0e880ca8912c3282bfb832b03f271edce6beab6749ee5279266d364e30b0db7c-shm.mount: Deactivated successfully. 
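The KillPodSandbox failures above all reduce to one condition: the Calico CNI plugin's delete path cannot stat /var/lib/calico/nodename, the file calico-node writes once it is running with /var/lib/calico/ mounted. A minimal Go sketch of that kind of pre-flight check follows; it only illustrates the failure mode reported in these messages (the hint text is copied from them) and is not Calico's actual implementation.

package main

import (
	"errors"
	"fmt"
	"os"
)

// nodenameFile is the path the errors above refer to; calico-node creates it
// after it starts and mounts /var/lib/calico/ for the CNI plugin.
const nodenameFile = "/var/lib/calico/nodename"

// checkNodename reproduces the failure mode in the log: a missing file is
// surfaced with the same hint the plugin prints. Illustrative sketch only.
func checkNodename() error {
	_, err := os.Stat(nodenameFile)
	if errors.Is(err, os.ErrNotExist) {
		return fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	return err
}

func main() {
	if err := checkNodename(); err != nil {
		fmt.Println("delete would fail:", err)
		return
	}
	fmt.Println("nodename present; sandbox teardown can proceed")
}

Once calico-node is actually started further down (at 20:29:17), the file exists and the later StopPodSandbox calls for these same sandboxes succeed, which is what the TearDown messages from 20:29:20 onward show.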
Sep 4 20:29:14.639463 kubelet[2535]: I0904 20:29:14.639385 2535 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 20:29:14.640557 kubelet[2535]: E0904 20:29:14.640510 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 4 20:29:14.870802 kubelet[2535]: E0904 20:29:14.870532 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 4 20:29:16.650096 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2110784312.mount: Deactivated successfully. Sep 4 20:29:16.698432 containerd[1468]: time="2024-09-04T20:29:16.698284050Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 20:29:16.700414 containerd[1468]: time="2024-09-04T20:29:16.700318012Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.1: active requests=0, bytes read=117873564" Sep 4 20:29:16.701979 containerd[1468]: time="2024-09-04T20:29:16.700945248Z" level=info msg="ImageCreate event name:\"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 20:29:16.703746 containerd[1468]: time="2024-09-04T20:29:16.703690549Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 20:29:16.705010 containerd[1468]: time="2024-09-04T20:29:16.704705071Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.1\" with image id \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\", size \"117873426\" in 7.873575977s" Sep 4 20:29:16.705167 containerd[1468]: time="2024-09-04T20:29:16.705150253Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\" returns image reference \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\"" Sep 4 20:29:16.747804 containerd[1468]: time="2024-09-04T20:29:16.747651915Z" level=info msg="CreateContainer within sandbox \"71f60ac97b5c6a245b9f0ac4c429219a25b7bedb187d703119bfd6aab1d5b280\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 4 20:29:16.808105 containerd[1468]: time="2024-09-04T20:29:16.808010461Z" level=info msg="CreateContainer within sandbox \"71f60ac97b5c6a245b9f0ac4c429219a25b7bedb187d703119bfd6aab1d5b280\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"3f66d9ce48164a234189fbcc8b6aa87beed6223230f8e381b05ab5a48f42454e\"" Sep 4 20:29:16.811282 containerd[1468]: time="2024-09-04T20:29:16.810108461Z" level=info msg="StartContainer for \"3f66d9ce48164a234189fbcc8b6aa87beed6223230f8e381b05ab5a48f42454e\"" Sep 4 20:29:16.976641 systemd[1]: Started cri-containerd-3f66d9ce48164a234189fbcc8b6aa87beed6223230f8e381b05ab5a48f42454e.scope - libcontainer container 3f66d9ce48164a234189fbcc8b6aa87beed6223230f8e381b05ab5a48f42454e. 
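The pull above reports both the byte count ("bytes read=117873564") and the elapsed time ("in 7.873575977s"), so the effective transfer rate falls straight out of the log. The short Go snippet below does only that arithmetic; the constants are copied from the messages above and nothing else is assumed.

package main

import (
	"fmt"
	"time"
)

func main() {
	// Values copied from the containerd messages above.
	const bytesRead = 117873564
	pullTime, err := time.ParseDuration("7.873575977s")
	if err != nil {
		panic(err)
	}

	mib := float64(bytesRead) / (1 << 20)
	fmt.Printf("pulled %.1f MiB in %s (~%.1f MiB/s)\n", mib, pullTime, mib/pullTime.Seconds())
	// pulled 112.4 MiB in 7.873575977s (~14.3 MiB/s)
}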
Sep 4 20:29:17.079779 containerd[1468]: time="2024-09-04T20:29:17.079632700Z" level=info msg="StartContainer for \"3f66d9ce48164a234189fbcc8b6aa87beed6223230f8e381b05ab5a48f42454e\" returns successfully" Sep 4 20:29:17.211437 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 4 20:29:17.213538 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 4 20:29:17.319753 systemd[1]: Started sshd@7-64.23.130.28:22-139.178.68.195:43856.service - OpenSSH per-connection server daemon (139.178.68.195:43856). Sep 4 20:29:17.549517 sshd[3617]: Accepted publickey for core from 139.178.68.195 port 43856 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY Sep 4 20:29:17.554615 sshd[3617]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 20:29:17.567128 systemd-logind[1446]: New session 8 of user core. Sep 4 20:29:17.575063 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 4 20:29:17.875324 sshd[3617]: pam_unix(sshd:session): session closed for user core Sep 4 20:29:17.883326 systemd[1]: sshd@7-64.23.130.28:22-139.178.68.195:43856.service: Deactivated successfully. Sep 4 20:29:17.888456 systemd[1]: session-8.scope: Deactivated successfully. Sep 4 20:29:17.895755 systemd-logind[1446]: Session 8 logged out. Waiting for processes to exit. Sep 4 20:29:17.899791 systemd-logind[1446]: Removed session 8. Sep 4 20:29:17.915546 kubelet[2535]: E0904 20:29:17.915502 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 4 20:29:17.979967 kubelet[2535]: I0904 20:29:17.954371 2535 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-q88qq" podStartSLOduration=2.137857042 podStartE2EDuration="20.939533693s" podCreationTimestamp="2024-09-04 20:28:57 +0000 UTC" firstStartedPulling="2024-09-04 20:28:57.904538878 +0000 UTC m=+22.375107686" lastFinishedPulling="2024-09-04 20:29:16.706215547 +0000 UTC m=+41.176784337" observedRunningTime="2024-09-04 20:29:17.938495698 +0000 UTC m=+42.409064554" watchObservedRunningTime="2024-09-04 20:29:17.939533693 +0000 UTC m=+42.410102503" Sep 4 20:29:18.916215 kubelet[2535]: I0904 20:29:18.916163 2535 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 20:29:18.917465 kubelet[2535]: E0904 20:29:18.917432 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 4 20:29:19.526279 kernel: bpftool[3768]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 4 20:29:19.833092 systemd-networkd[1369]: vxlan.calico: Link UP Sep 4 20:29:19.833102 systemd-networkd[1369]: vxlan.calico: Gained carrier Sep 4 20:29:20.659442 containerd[1468]: time="2024-09-04T20:29:20.658818676Z" level=info msg="StopPodSandbox for \"0e880ca8912c3282bfb832b03f271edce6beab6749ee5279266d364e30b0db7c\"" Sep 4 20:29:20.922518 kubelet[2535]: I0904 20:29:20.921442 2535 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 20:29:20.951544 containerd[1468]: 2024-09-04 20:29:20.733 [INFO][3852] k8s.go 608: Cleaning up netns ContainerID="0e880ca8912c3282bfb832b03f271edce6beab6749ee5279266d364e30b0db7c" Sep 4 20:29:20.951544 containerd[1468]: 2024-09-04 20:29:20.734 [INFO][3852] dataplane_linux.go 530: Deleting workload's device 
in netns. ContainerID="0e880ca8912c3282bfb832b03f271edce6beab6749ee5279266d364e30b0db7c" iface="eth0" netns="/var/run/netns/cni-a5410352-701a-f741-0ffd-359ae829b3d2" Sep 4 20:29:20.951544 containerd[1468]: 2024-09-04 20:29:20.734 [INFO][3852] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="0e880ca8912c3282bfb832b03f271edce6beab6749ee5279266d364e30b0db7c" iface="eth0" netns="/var/run/netns/cni-a5410352-701a-f741-0ffd-359ae829b3d2" Sep 4 20:29:20.951544 containerd[1468]: 2024-09-04 20:29:20.735 [INFO][3852] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="0e880ca8912c3282bfb832b03f271edce6beab6749ee5279266d364e30b0db7c" iface="eth0" netns="/var/run/netns/cni-a5410352-701a-f741-0ffd-359ae829b3d2" Sep 4 20:29:20.951544 containerd[1468]: 2024-09-04 20:29:20.735 [INFO][3852] k8s.go 615: Releasing IP address(es) ContainerID="0e880ca8912c3282bfb832b03f271edce6beab6749ee5279266d364e30b0db7c" Sep 4 20:29:20.951544 containerd[1468]: 2024-09-04 20:29:20.735 [INFO][3852] utils.go 188: Calico CNI releasing IP address ContainerID="0e880ca8912c3282bfb832b03f271edce6beab6749ee5279266d364e30b0db7c" Sep 4 20:29:20.951544 containerd[1468]: 2024-09-04 20:29:20.900 [INFO][3858] ipam_plugin.go 417: Releasing address using handleID ContainerID="0e880ca8912c3282bfb832b03f271edce6beab6749ee5279266d364e30b0db7c" HandleID="k8s-pod-network.0e880ca8912c3282bfb832b03f271edce6beab6749ee5279266d364e30b0db7c" Workload="ci--3975.2.1--0--09c0a9ae8e-k8s-coredns--7db6d8ff4d--lw7kj-eth0" Sep 4 20:29:20.951544 containerd[1468]: 2024-09-04 20:29:20.903 [INFO][3858] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 20:29:20.951544 containerd[1468]: 2024-09-04 20:29:20.903 [INFO][3858] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 20:29:20.951544 containerd[1468]: 2024-09-04 20:29:20.919 [WARNING][3858] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="0e880ca8912c3282bfb832b03f271edce6beab6749ee5279266d364e30b0db7c" HandleID="k8s-pod-network.0e880ca8912c3282bfb832b03f271edce6beab6749ee5279266d364e30b0db7c" Workload="ci--3975.2.1--0--09c0a9ae8e-k8s-coredns--7db6d8ff4d--lw7kj-eth0" Sep 4 20:29:20.951544 containerd[1468]: 2024-09-04 20:29:20.920 [INFO][3858] ipam_plugin.go 445: Releasing address using workloadID ContainerID="0e880ca8912c3282bfb832b03f271edce6beab6749ee5279266d364e30b0db7c" HandleID="k8s-pod-network.0e880ca8912c3282bfb832b03f271edce6beab6749ee5279266d364e30b0db7c" Workload="ci--3975.2.1--0--09c0a9ae8e-k8s-coredns--7db6d8ff4d--lw7kj-eth0" Sep 4 20:29:20.951544 containerd[1468]: 2024-09-04 20:29:20.929 [INFO][3858] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 20:29:20.951544 containerd[1468]: 2024-09-04 20:29:20.935 [INFO][3852] k8s.go 621: Teardown processing complete. ContainerID="0e880ca8912c3282bfb832b03f271edce6beab6749ee5279266d364e30b0db7c" Sep 4 20:29:20.963527 containerd[1468]: time="2024-09-04T20:29:20.959099397Z" level=info msg="TearDown network for sandbox \"0e880ca8912c3282bfb832b03f271edce6beab6749ee5279266d364e30b0db7c\" successfully" Sep 4 20:29:20.963527 containerd[1468]: time="2024-09-04T20:29:20.959163056Z" level=info msg="StopPodSandbox for \"0e880ca8912c3282bfb832b03f271edce6beab6749ee5279266d364e30b0db7c\" returns successfully" Sep 4 20:29:20.962123 systemd[1]: run-netns-cni\x2da5410352\x2d701a\x2df741\x2d0ffd\x2d359ae829b3d2.mount: Deactivated successfully. 
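The "Observed pod startup duration" line above for calico-node-q88qq prints every timestamp needed to re-derive its own numbers: creation at 20:28:57, image pulling from 20:28:57.904538878 to 20:29:16.706215547, and the pod observed running at 20:29:17.939533693. The Go snippet below redoes that arithmetic; reading podStartSLOduration as "end-to-end time minus image-pull time" is an interpretation of that line, not something the log states, and the result differs from the printed 2.137857042 only by a few tens of nanoseconds, presumably because the tracker subtracts using the monotonic offsets (the m=+... values) rather than the wall-clock strings.

package main

import (
	"fmt"
	"time"
)

func main() {
	mustParse := func(s string) time.Time {
		t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
		if err != nil {
			panic(err)
		}
		return t
	}

	// Timestamps copied from the pod_startup_latency_tracker line above.
	created := mustParse("2024-09-04 20:28:57 +0000 UTC")
	firstPull := mustParse("2024-09-04 20:28:57.904538878 +0000 UTC")
	lastPull := mustParse("2024-09-04 20:29:16.706215547 +0000 UTC")
	observed := mustParse("2024-09-04 20:29:17.939533693 +0000 UTC")

	e2e := observed.Sub(created)       // 20.939533693s, matches podStartE2EDuration
	pulling := lastPull.Sub(firstPull) // 18.801676669s spent pulling the image
	fmt.Println("e2e:", e2e, "pulling:", pulling, "e2e minus pulling:", e2e-pulling)
}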
Sep 4 20:29:20.988193 kubelet[2535]: E0904 20:29:20.987489 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 4 20:29:21.041159 kubelet[2535]: E0904 20:29:21.040211 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 4 20:29:21.043557 containerd[1468]: time="2024-09-04T20:29:21.043039877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lw7kj,Uid:b99452b1-3cd1-4b03-b372-8f4d1888f969,Namespace:kube-system,Attempt:1,}" Sep 4 20:29:21.446731 systemd-networkd[1369]: cali7c03af6ab19: Link UP Sep 4 20:29:21.448802 systemd-networkd[1369]: cali7c03af6ab19: Gained carrier Sep 4 20:29:21.498378 containerd[1468]: 2024-09-04 20:29:21.185 [INFO][3865] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.2.1--0--09c0a9ae8e-k8s-coredns--7db6d8ff4d--lw7kj-eth0 coredns-7db6d8ff4d- kube-system b99452b1-3cd1-4b03-b372-8f4d1888f969 750 0 2024-09-04 20:28:51 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3975.2.1-0-09c0a9ae8e coredns-7db6d8ff4d-lw7kj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7c03af6ab19 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="3e8cdeee66af76e9d46d4eef7ad508e0040679b87c8f2204246edac50ed26f7c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lw7kj" WorkloadEndpoint="ci--3975.2.1--0--09c0a9ae8e-k8s-coredns--7db6d8ff4d--lw7kj-" Sep 4 20:29:21.498378 containerd[1468]: 2024-09-04 20:29:21.185 [INFO][3865] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3e8cdeee66af76e9d46d4eef7ad508e0040679b87c8f2204246edac50ed26f7c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lw7kj" WorkloadEndpoint="ci--3975.2.1--0--09c0a9ae8e-k8s-coredns--7db6d8ff4d--lw7kj-eth0" Sep 4 20:29:21.498378 containerd[1468]: 2024-09-04 20:29:21.244 [INFO][3898] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3e8cdeee66af76e9d46d4eef7ad508e0040679b87c8f2204246edac50ed26f7c" HandleID="k8s-pod-network.3e8cdeee66af76e9d46d4eef7ad508e0040679b87c8f2204246edac50ed26f7c" Workload="ci--3975.2.1--0--09c0a9ae8e-k8s-coredns--7db6d8ff4d--lw7kj-eth0" Sep 4 20:29:21.498378 containerd[1468]: 2024-09-04 20:29:21.280 [INFO][3898] ipam_plugin.go 270: Auto assigning IP ContainerID="3e8cdeee66af76e9d46d4eef7ad508e0040679b87c8f2204246edac50ed26f7c" HandleID="k8s-pod-network.3e8cdeee66af76e9d46d4eef7ad508e0040679b87c8f2204246edac50ed26f7c" Workload="ci--3975.2.1--0--09c0a9ae8e-k8s-coredns--7db6d8ff4d--lw7kj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031ae60), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3975.2.1-0-09c0a9ae8e", "pod":"coredns-7db6d8ff4d-lw7kj", "timestamp":"2024-09-04 20:29:21.244884799 +0000 UTC"}, Hostname:"ci-3975.2.1-0-09c0a9ae8e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 20:29:21.498378 containerd[1468]: 2024-09-04 20:29:21.280 [INFO][3898] ipam_plugin.go 358: About to acquire host-wide IPAM lock. 
Sep 4 20:29:21.498378 containerd[1468]: 2024-09-04 20:29:21.281 [INFO][3898] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 20:29:21.498378 containerd[1468]: 2024-09-04 20:29:21.281 [INFO][3898] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.2.1-0-09c0a9ae8e' Sep 4 20:29:21.498378 containerd[1468]: 2024-09-04 20:29:21.291 [INFO][3898] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3e8cdeee66af76e9d46d4eef7ad508e0040679b87c8f2204246edac50ed26f7c" host="ci-3975.2.1-0-09c0a9ae8e" Sep 4 20:29:21.498378 containerd[1468]: 2024-09-04 20:29:21.317 [INFO][3898] ipam.go 372: Looking up existing affinities for host host="ci-3975.2.1-0-09c0a9ae8e" Sep 4 20:29:21.498378 containerd[1468]: 2024-09-04 20:29:21.353 [INFO][3898] ipam.go 489: Trying affinity for 192.168.26.192/26 host="ci-3975.2.1-0-09c0a9ae8e" Sep 4 20:29:21.498378 containerd[1468]: 2024-09-04 20:29:21.363 [INFO][3898] ipam.go 155: Attempting to load block cidr=192.168.26.192/26 host="ci-3975.2.1-0-09c0a9ae8e" Sep 4 20:29:21.498378 containerd[1468]: 2024-09-04 20:29:21.377 [INFO][3898] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.26.192/26 host="ci-3975.2.1-0-09c0a9ae8e" Sep 4 20:29:21.498378 containerd[1468]: 2024-09-04 20:29:21.377 [INFO][3898] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.26.192/26 handle="k8s-pod-network.3e8cdeee66af76e9d46d4eef7ad508e0040679b87c8f2204246edac50ed26f7c" host="ci-3975.2.1-0-09c0a9ae8e" Sep 4 20:29:21.498378 containerd[1468]: 2024-09-04 20:29:21.391 [INFO][3898] ipam.go 1685: Creating new handle: k8s-pod-network.3e8cdeee66af76e9d46d4eef7ad508e0040679b87c8f2204246edac50ed26f7c Sep 4 20:29:21.498378 containerd[1468]: 2024-09-04 20:29:21.407 [INFO][3898] ipam.go 1203: Writing block in order to claim IPs block=192.168.26.192/26 handle="k8s-pod-network.3e8cdeee66af76e9d46d4eef7ad508e0040679b87c8f2204246edac50ed26f7c" host="ci-3975.2.1-0-09c0a9ae8e" Sep 4 20:29:21.498378 containerd[1468]: 2024-09-04 20:29:21.427 [INFO][3898] ipam.go 1216: Successfully claimed IPs: [192.168.26.193/26] block=192.168.26.192/26 handle="k8s-pod-network.3e8cdeee66af76e9d46d4eef7ad508e0040679b87c8f2204246edac50ed26f7c" host="ci-3975.2.1-0-09c0a9ae8e" Sep 4 20:29:21.498378 containerd[1468]: 2024-09-04 20:29:21.427 [INFO][3898] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.26.193/26] handle="k8s-pod-network.3e8cdeee66af76e9d46d4eef7ad508e0040679b87c8f2204246edac50ed26f7c" host="ci-3975.2.1-0-09c0a9ae8e" Sep 4 20:29:21.498378 containerd[1468]: 2024-09-04 20:29:21.427 [INFO][3898] ipam_plugin.go 379: Released host-wide IPAM lock. 
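The IPAM steps above (affinity for 192.168.26.192/26, then "Successfully claimed IPs: [192.168.26.193/26]") are easier to follow with the block geometry in front of you: a /26 holds 2^(32-26) = 64 addresses, 192.168.26.192 through 192.168.26.255, and the address handed out here is the one immediately after the block's base. The snippet below checks exactly that with the standard net/netip package; the prefix and address are taken from the trace.

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Block and address from the Calico IPAM trace above.
	block := netip.MustParsePrefix("192.168.26.192/26")
	claimed := netip.MustParseAddr("192.168.26.193")

	size := 1 << (32 - block.Bits()) // 64 addresses in a /26
	fmt.Printf("block %s: %d addresses, base %s, next %s\n",
		block, size, block.Masked().Addr(), block.Masked().Addr().Next())
	fmt.Println("claimed address inside block:", block.Contains(claimed)) // true
}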
Sep 4 20:29:21.498378 containerd[1468]: 2024-09-04 20:29:21.427 [INFO][3898] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.26.193/26] IPv6=[] ContainerID="3e8cdeee66af76e9d46d4eef7ad508e0040679b87c8f2204246edac50ed26f7c" HandleID="k8s-pod-network.3e8cdeee66af76e9d46d4eef7ad508e0040679b87c8f2204246edac50ed26f7c" Workload="ci--3975.2.1--0--09c0a9ae8e-k8s-coredns--7db6d8ff4d--lw7kj-eth0" Sep 4 20:29:21.502151 containerd[1468]: 2024-09-04 20:29:21.434 [INFO][3865] k8s.go 386: Populated endpoint ContainerID="3e8cdeee66af76e9d46d4eef7ad508e0040679b87c8f2204246edac50ed26f7c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lw7kj" WorkloadEndpoint="ci--3975.2.1--0--09c0a9ae8e-k8s-coredns--7db6d8ff4d--lw7kj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--0--09c0a9ae8e-k8s-coredns--7db6d8ff4d--lw7kj-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"b99452b1-3cd1-4b03-b372-8f4d1888f969", ResourceVersion:"750", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 20, 28, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-0-09c0a9ae8e", ContainerID:"", Pod:"coredns-7db6d8ff4d-lw7kj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.26.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7c03af6ab19", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 20:29:21.502151 containerd[1468]: 2024-09-04 20:29:21.435 [INFO][3865] k8s.go 387: Calico CNI using IPs: [192.168.26.193/32] ContainerID="3e8cdeee66af76e9d46d4eef7ad508e0040679b87c8f2204246edac50ed26f7c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lw7kj" WorkloadEndpoint="ci--3975.2.1--0--09c0a9ae8e-k8s-coredns--7db6d8ff4d--lw7kj-eth0" Sep 4 20:29:21.502151 containerd[1468]: 2024-09-04 20:29:21.435 [INFO][3865] dataplane_linux.go 68: Setting the host side veth name to cali7c03af6ab19 ContainerID="3e8cdeee66af76e9d46d4eef7ad508e0040679b87c8f2204246edac50ed26f7c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lw7kj" WorkloadEndpoint="ci--3975.2.1--0--09c0a9ae8e-k8s-coredns--7db6d8ff4d--lw7kj-eth0" Sep 4 20:29:21.502151 containerd[1468]: 2024-09-04 20:29:21.445 [INFO][3865] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="3e8cdeee66af76e9d46d4eef7ad508e0040679b87c8f2204246edac50ed26f7c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lw7kj" 
WorkloadEndpoint="ci--3975.2.1--0--09c0a9ae8e-k8s-coredns--7db6d8ff4d--lw7kj-eth0" Sep 4 20:29:21.502151 containerd[1468]: 2024-09-04 20:29:21.448 [INFO][3865] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3e8cdeee66af76e9d46d4eef7ad508e0040679b87c8f2204246edac50ed26f7c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lw7kj" WorkloadEndpoint="ci--3975.2.1--0--09c0a9ae8e-k8s-coredns--7db6d8ff4d--lw7kj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--0--09c0a9ae8e-k8s-coredns--7db6d8ff4d--lw7kj-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"b99452b1-3cd1-4b03-b372-8f4d1888f969", ResourceVersion:"750", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 20, 28, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-0-09c0a9ae8e", ContainerID:"3e8cdeee66af76e9d46d4eef7ad508e0040679b87c8f2204246edac50ed26f7c", Pod:"coredns-7db6d8ff4d-lw7kj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.26.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7c03af6ab19", MAC:"5a:7e:b4:1e:97:13", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 20:29:21.502151 containerd[1468]: 2024-09-04 20:29:21.492 [INFO][3865] k8s.go 500: Wrote updated endpoint to datastore ContainerID="3e8cdeee66af76e9d46d4eef7ad508e0040679b87c8f2204246edac50ed26f7c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lw7kj" WorkloadEndpoint="ci--3975.2.1--0--09c0a9ae8e-k8s-coredns--7db6d8ff4d--lw7kj-eth0" Sep 4 20:29:21.582074 containerd[1468]: time="2024-09-04T20:29:21.581079488Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 20:29:21.582074 containerd[1468]: time="2024-09-04T20:29:21.581186452Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 20:29:21.582074 containerd[1468]: time="2024-09-04T20:29:21.581215434Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 20:29:21.582074 containerd[1468]: time="2024-09-04T20:29:21.581410259Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 20:29:21.629762 systemd[1]: Started cri-containerd-3e8cdeee66af76e9d46d4eef7ad508e0040679b87c8f2204246edac50ed26f7c.scope - libcontainer container 3e8cdeee66af76e9d46d4eef7ad508e0040679b87c8f2204246edac50ed26f7c. Sep 4 20:29:21.781590 containerd[1468]: time="2024-09-04T20:29:21.781461184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lw7kj,Uid:b99452b1-3cd1-4b03-b372-8f4d1888f969,Namespace:kube-system,Attempt:1,} returns sandbox id \"3e8cdeee66af76e9d46d4eef7ad508e0040679b87c8f2204246edac50ed26f7c\"" Sep 4 20:29:21.785253 kubelet[2535]: E0904 20:29:21.784860 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 4 20:29:21.816171 containerd[1468]: time="2024-09-04T20:29:21.815967023Z" level=info msg="CreateContainer within sandbox \"3e8cdeee66af76e9d46d4eef7ad508e0040679b87c8f2204246edac50ed26f7c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 20:29:21.851746 containerd[1468]: time="2024-09-04T20:29:21.851667994Z" level=info msg="CreateContainer within sandbox \"3e8cdeee66af76e9d46d4eef7ad508e0040679b87c8f2204246edac50ed26f7c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b60de57300e47f04181fe509ed6878ec4fad6e6e922b8e121361df5b93ee5458\"" Sep 4 20:29:21.853910 containerd[1468]: time="2024-09-04T20:29:21.852677241Z" level=info msg="StartContainer for \"b60de57300e47f04181fe509ed6878ec4fad6e6e922b8e121361df5b93ee5458\"" Sep 4 20:29:21.885542 systemd-networkd[1369]: vxlan.calico: Gained IPv6LL Sep 4 20:29:21.917385 systemd[1]: Started cri-containerd-b60de57300e47f04181fe509ed6878ec4fad6e6e922b8e121361df5b93ee5458.scope - libcontainer container b60de57300e47f04181fe509ed6878ec4fad6e6e922b8e121361df5b93ee5458. 
Sep 4 20:29:21.986691 containerd[1468]: time="2024-09-04T20:29:21.986489410Z" level=info msg="StartContainer for \"b60de57300e47f04181fe509ed6878ec4fad6e6e922b8e121361df5b93ee5458\" returns successfully" Sep 4 20:29:22.056392 kubelet[2535]: E0904 20:29:22.055130 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 4 20:29:22.057099 kubelet[2535]: E0904 20:29:22.056205 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 4 20:29:22.657425 containerd[1468]: time="2024-09-04T20:29:22.657346315Z" level=info msg="StopPodSandbox for \"85bc2484df33423ca10209e10cfda548e62b4c2cf1825fff7e60e36b7c284d3f\"" Sep 4 20:29:22.726909 kubelet[2535]: I0904 20:29:22.726799 2535 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-lw7kj" podStartSLOduration=31.726774843 podStartE2EDuration="31.726774843s" podCreationTimestamp="2024-09-04 20:28:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 20:29:22.08379207 +0000 UTC m=+46.554360879" watchObservedRunningTime="2024-09-04 20:29:22.726774843 +0000 UTC m=+47.197343648" Sep 4 20:29:22.781508 containerd[1468]: 2024-09-04 20:29:22.728 [INFO][4034] k8s.go 608: Cleaning up netns ContainerID="85bc2484df33423ca10209e10cfda548e62b4c2cf1825fff7e60e36b7c284d3f" Sep 4 20:29:22.781508 containerd[1468]: 2024-09-04 20:29:22.729 [INFO][4034] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="85bc2484df33423ca10209e10cfda548e62b4c2cf1825fff7e60e36b7c284d3f" iface="eth0" netns="/var/run/netns/cni-0de332fa-9f3f-376e-ec34-a3df31a142ed" Sep 4 20:29:22.781508 containerd[1468]: 2024-09-04 20:29:22.730 [INFO][4034] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="85bc2484df33423ca10209e10cfda548e62b4c2cf1825fff7e60e36b7c284d3f" iface="eth0" netns="/var/run/netns/cni-0de332fa-9f3f-376e-ec34-a3df31a142ed" Sep 4 20:29:22.781508 containerd[1468]: 2024-09-04 20:29:22.730 [INFO][4034] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="85bc2484df33423ca10209e10cfda548e62b4c2cf1825fff7e60e36b7c284d3f" iface="eth0" netns="/var/run/netns/cni-0de332fa-9f3f-376e-ec34-a3df31a142ed" Sep 4 20:29:22.781508 containerd[1468]: 2024-09-04 20:29:22.730 [INFO][4034] k8s.go 615: Releasing IP address(es) ContainerID="85bc2484df33423ca10209e10cfda548e62b4c2cf1825fff7e60e36b7c284d3f" Sep 4 20:29:22.781508 containerd[1468]: 2024-09-04 20:29:22.730 [INFO][4034] utils.go 188: Calico CNI releasing IP address ContainerID="85bc2484df33423ca10209e10cfda548e62b4c2cf1825fff7e60e36b7c284d3f" Sep 4 20:29:22.781508 containerd[1468]: 2024-09-04 20:29:22.764 [INFO][4040] ipam_plugin.go 417: Releasing address using handleID ContainerID="85bc2484df33423ca10209e10cfda548e62b4c2cf1825fff7e60e36b7c284d3f" HandleID="k8s-pod-network.85bc2484df33423ca10209e10cfda548e62b4c2cf1825fff7e60e36b7c284d3f" Workload="ci--3975.2.1--0--09c0a9ae8e-k8s-coredns--7db6d8ff4d--kfqv7-eth0" Sep 4 20:29:22.781508 containerd[1468]: 2024-09-04 20:29:22.764 [INFO][4040] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 20:29:22.781508 containerd[1468]: 2024-09-04 20:29:22.764 [INFO][4040] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 20:29:22.781508 containerd[1468]: 2024-09-04 20:29:22.773 [WARNING][4040] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="85bc2484df33423ca10209e10cfda548e62b4c2cf1825fff7e60e36b7c284d3f" HandleID="k8s-pod-network.85bc2484df33423ca10209e10cfda548e62b4c2cf1825fff7e60e36b7c284d3f" Workload="ci--3975.2.1--0--09c0a9ae8e-k8s-coredns--7db6d8ff4d--kfqv7-eth0" Sep 4 20:29:22.781508 containerd[1468]: 2024-09-04 20:29:22.773 [INFO][4040] ipam_plugin.go 445: Releasing address using workloadID ContainerID="85bc2484df33423ca10209e10cfda548e62b4c2cf1825fff7e60e36b7c284d3f" HandleID="k8s-pod-network.85bc2484df33423ca10209e10cfda548e62b4c2cf1825fff7e60e36b7c284d3f" Workload="ci--3975.2.1--0--09c0a9ae8e-k8s-coredns--7db6d8ff4d--kfqv7-eth0" Sep 4 20:29:22.781508 containerd[1468]: 2024-09-04 20:29:22.776 [INFO][4040] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 20:29:22.781508 containerd[1468]: 2024-09-04 20:29:22.779 [INFO][4034] k8s.go 621: Teardown processing complete. ContainerID="85bc2484df33423ca10209e10cfda548e62b4c2cf1825fff7e60e36b7c284d3f" Sep 4 20:29:22.788044 containerd[1468]: time="2024-09-04T20:29:22.781606917Z" level=info msg="TearDown network for sandbox \"85bc2484df33423ca10209e10cfda548e62b4c2cf1825fff7e60e36b7c284d3f\" successfully" Sep 4 20:29:22.788044 containerd[1468]: time="2024-09-04T20:29:22.781637729Z" level=info msg="StopPodSandbox for \"85bc2484df33423ca10209e10cfda548e62b4c2cf1825fff7e60e36b7c284d3f\" returns successfully" Sep 4 20:29:22.788044 containerd[1468]: time="2024-09-04T20:29:22.786037480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-kfqv7,Uid:cf38c19c-6a60-4a36-8a3a-1b512f9dcdfb,Namespace:kube-system,Attempt:1,}" Sep 4 20:29:22.787933 systemd[1]: run-netns-cni\x2d0de332fa\x2d9f3f\x2d376e\x2dec34\x2da3df31a142ed.mount: Deactivated successfully. Sep 4 20:29:22.791708 kubelet[2535]: E0904 20:29:22.784596 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 4 20:29:22.903741 systemd[1]: Started sshd@8-64.23.130.28:22-139.178.68.195:43872.service - OpenSSH per-connection server daemon (139.178.68.195:43872). Sep 4 20:29:23.022587 sshd[4061]: Accepted publickey for core from 139.178.68.195 port 43872 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY Sep 4 20:29:23.028937 sshd[4061]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 20:29:23.039174 systemd-networkd[1369]: caliba970677edf: Link UP Sep 4 20:29:23.039571 systemd-networkd[1369]: caliba970677edf: Gained carrier Sep 4 20:29:23.063346 systemd-logind[1446]: New session 9 of user core. Sep 4 20:29:23.067657 systemd[1]: Started session-9.scope - Session 9 of User core. 
Sep 4 20:29:23.075164 kubelet[2535]: E0904 20:29:23.075127 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 4 20:29:23.084428 containerd[1468]: 2024-09-04 20:29:22.884 [INFO][4047] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.2.1--0--09c0a9ae8e-k8s-coredns--7db6d8ff4d--kfqv7-eth0 coredns-7db6d8ff4d- kube-system cf38c19c-6a60-4a36-8a3a-1b512f9dcdfb 779 0 2024-09-04 20:28:51 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3975.2.1-0-09c0a9ae8e coredns-7db6d8ff4d-kfqv7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliba970677edf [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="e6c728440ffadbbcc13ad89881572b784932596651904e06309a67282984ee3c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kfqv7" WorkloadEndpoint="ci--3975.2.1--0--09c0a9ae8e-k8s-coredns--7db6d8ff4d--kfqv7-" Sep 4 20:29:23.084428 containerd[1468]: 2024-09-04 20:29:22.884 [INFO][4047] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e6c728440ffadbbcc13ad89881572b784932596651904e06309a67282984ee3c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kfqv7" WorkloadEndpoint="ci--3975.2.1--0--09c0a9ae8e-k8s-coredns--7db6d8ff4d--kfqv7-eth0" Sep 4 20:29:23.084428 containerd[1468]: 2024-09-04 20:29:22.946 [INFO][4062] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e6c728440ffadbbcc13ad89881572b784932596651904e06309a67282984ee3c" HandleID="k8s-pod-network.e6c728440ffadbbcc13ad89881572b784932596651904e06309a67282984ee3c" Workload="ci--3975.2.1--0--09c0a9ae8e-k8s-coredns--7db6d8ff4d--kfqv7-eth0" Sep 4 20:29:23.084428 containerd[1468]: 2024-09-04 20:29:22.962 [INFO][4062] ipam_plugin.go 270: Auto assigning IP ContainerID="e6c728440ffadbbcc13ad89881572b784932596651904e06309a67282984ee3c" HandleID="k8s-pod-network.e6c728440ffadbbcc13ad89881572b784932596651904e06309a67282984ee3c" Workload="ci--3975.2.1--0--09c0a9ae8e-k8s-coredns--7db6d8ff4d--kfqv7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003183f0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3975.2.1-0-09c0a9ae8e", "pod":"coredns-7db6d8ff4d-kfqv7", "timestamp":"2024-09-04 20:29:22.946655714 +0000 UTC"}, Hostname:"ci-3975.2.1-0-09c0a9ae8e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 20:29:23.084428 containerd[1468]: 2024-09-04 20:29:22.962 [INFO][4062] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 20:29:23.084428 containerd[1468]: 2024-09-04 20:29:22.962 [INFO][4062] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 20:29:23.084428 containerd[1468]: 2024-09-04 20:29:22.962 [INFO][4062] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.2.1-0-09c0a9ae8e' Sep 4 20:29:23.084428 containerd[1468]: 2024-09-04 20:29:22.965 [INFO][4062] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e6c728440ffadbbcc13ad89881572b784932596651904e06309a67282984ee3c" host="ci-3975.2.1-0-09c0a9ae8e" Sep 4 20:29:23.084428 containerd[1468]: 2024-09-04 20:29:22.973 [INFO][4062] ipam.go 372: Looking up existing affinities for host host="ci-3975.2.1-0-09c0a9ae8e" Sep 4 20:29:23.084428 containerd[1468]: 2024-09-04 20:29:22.984 [INFO][4062] ipam.go 489: Trying affinity for 192.168.26.192/26 host="ci-3975.2.1-0-09c0a9ae8e" Sep 4 20:29:23.084428 containerd[1468]: 2024-09-04 20:29:22.988 [INFO][4062] ipam.go 155: Attempting to load block cidr=192.168.26.192/26 host="ci-3975.2.1-0-09c0a9ae8e" Sep 4 20:29:23.084428 containerd[1468]: 2024-09-04 20:29:22.996 [INFO][4062] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.26.192/26 host="ci-3975.2.1-0-09c0a9ae8e" Sep 4 20:29:23.084428 containerd[1468]: 2024-09-04 20:29:22.996 [INFO][4062] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.26.192/26 handle="k8s-pod-network.e6c728440ffadbbcc13ad89881572b784932596651904e06309a67282984ee3c" host="ci-3975.2.1-0-09c0a9ae8e" Sep 4 20:29:23.084428 containerd[1468]: 2024-09-04 20:29:23.002 [INFO][4062] ipam.go 1685: Creating new handle: k8s-pod-network.e6c728440ffadbbcc13ad89881572b784932596651904e06309a67282984ee3c Sep 4 20:29:23.084428 containerd[1468]: 2024-09-04 20:29:23.010 [INFO][4062] ipam.go 1203: Writing block in order to claim IPs block=192.168.26.192/26 handle="k8s-pod-network.e6c728440ffadbbcc13ad89881572b784932596651904e06309a67282984ee3c" host="ci-3975.2.1-0-09c0a9ae8e" Sep 4 20:29:23.084428 containerd[1468]: 2024-09-04 20:29:23.018 [INFO][4062] ipam.go 1216: Successfully claimed IPs: [192.168.26.194/26] block=192.168.26.192/26 handle="k8s-pod-network.e6c728440ffadbbcc13ad89881572b784932596651904e06309a67282984ee3c" host="ci-3975.2.1-0-09c0a9ae8e" Sep 4 20:29:23.084428 containerd[1468]: 2024-09-04 20:29:23.018 [INFO][4062] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.26.194/26] handle="k8s-pod-network.e6c728440ffadbbcc13ad89881572b784932596651904e06309a67282984ee3c" host="ci-3975.2.1-0-09c0a9ae8e" Sep 4 20:29:23.084428 containerd[1468]: 2024-09-04 20:29:23.019 [INFO][4062] ipam_plugin.go 379: Released host-wide IPAM lock. 
Sep 4 20:29:23.084428 containerd[1468]: 2024-09-04 20:29:23.019 [INFO][4062] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.26.194/26] IPv6=[] ContainerID="e6c728440ffadbbcc13ad89881572b784932596651904e06309a67282984ee3c" HandleID="k8s-pod-network.e6c728440ffadbbcc13ad89881572b784932596651904e06309a67282984ee3c" Workload="ci--3975.2.1--0--09c0a9ae8e-k8s-coredns--7db6d8ff4d--kfqv7-eth0" Sep 4 20:29:23.085097 containerd[1468]: 2024-09-04 20:29:23.023 [INFO][4047] k8s.go 386: Populated endpoint ContainerID="e6c728440ffadbbcc13ad89881572b784932596651904e06309a67282984ee3c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kfqv7" WorkloadEndpoint="ci--3975.2.1--0--09c0a9ae8e-k8s-coredns--7db6d8ff4d--kfqv7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--0--09c0a9ae8e-k8s-coredns--7db6d8ff4d--kfqv7-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"cf38c19c-6a60-4a36-8a3a-1b512f9dcdfb", ResourceVersion:"779", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 20, 28, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-0-09c0a9ae8e", ContainerID:"", Pod:"coredns-7db6d8ff4d-kfqv7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.26.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliba970677edf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 20:29:23.085097 containerd[1468]: 2024-09-04 20:29:23.024 [INFO][4047] k8s.go 387: Calico CNI using IPs: [192.168.26.194/32] ContainerID="e6c728440ffadbbcc13ad89881572b784932596651904e06309a67282984ee3c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kfqv7" WorkloadEndpoint="ci--3975.2.1--0--09c0a9ae8e-k8s-coredns--7db6d8ff4d--kfqv7-eth0" Sep 4 20:29:23.085097 containerd[1468]: 2024-09-04 20:29:23.024 [INFO][4047] dataplane_linux.go 68: Setting the host side veth name to caliba970677edf ContainerID="e6c728440ffadbbcc13ad89881572b784932596651904e06309a67282984ee3c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kfqv7" WorkloadEndpoint="ci--3975.2.1--0--09c0a9ae8e-k8s-coredns--7db6d8ff4d--kfqv7-eth0" Sep 4 20:29:23.085097 containerd[1468]: 2024-09-04 20:29:23.038 [INFO][4047] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="e6c728440ffadbbcc13ad89881572b784932596651904e06309a67282984ee3c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kfqv7" 
WorkloadEndpoint="ci--3975.2.1--0--09c0a9ae8e-k8s-coredns--7db6d8ff4d--kfqv7-eth0" Sep 4 20:29:23.085097 containerd[1468]: 2024-09-04 20:29:23.042 [INFO][4047] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e6c728440ffadbbcc13ad89881572b784932596651904e06309a67282984ee3c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kfqv7" WorkloadEndpoint="ci--3975.2.1--0--09c0a9ae8e-k8s-coredns--7db6d8ff4d--kfqv7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--0--09c0a9ae8e-k8s-coredns--7db6d8ff4d--kfqv7-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"cf38c19c-6a60-4a36-8a3a-1b512f9dcdfb", ResourceVersion:"779", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 20, 28, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-0-09c0a9ae8e", ContainerID:"e6c728440ffadbbcc13ad89881572b784932596651904e06309a67282984ee3c", Pod:"coredns-7db6d8ff4d-kfqv7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.26.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliba970677edf", MAC:"ea:b8:28:6f:de:5d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 20:29:23.085097 containerd[1468]: 2024-09-04 20:29:23.069 [INFO][4047] k8s.go 500: Wrote updated endpoint to datastore ContainerID="e6c728440ffadbbcc13ad89881572b784932596651904e06309a67282984ee3c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kfqv7" WorkloadEndpoint="ci--3975.2.1--0--09c0a9ae8e-k8s-coredns--7db6d8ff4d--kfqv7-eth0" Sep 4 20:29:23.161789 containerd[1468]: time="2024-09-04T20:29:23.161571031Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 20:29:23.161789 containerd[1468]: time="2024-09-04T20:29:23.161685613Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 20:29:23.161789 containerd[1468]: time="2024-09-04T20:29:23.161727884Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 20:29:23.161789 containerd[1468]: time="2024-09-04T20:29:23.161751565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 20:29:23.219260 systemd[1]: Started cri-containerd-e6c728440ffadbbcc13ad89881572b784932596651904e06309a67282984ee3c.scope - libcontainer container e6c728440ffadbbcc13ad89881572b784932596651904e06309a67282984ee3c. Sep 4 20:29:23.294409 systemd-networkd[1369]: cali7c03af6ab19: Gained IPv6LL Sep 4 20:29:23.355608 containerd[1468]: time="2024-09-04T20:29:23.355443894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-kfqv7,Uid:cf38c19c-6a60-4a36-8a3a-1b512f9dcdfb,Namespace:kube-system,Attempt:1,} returns sandbox id \"e6c728440ffadbbcc13ad89881572b784932596651904e06309a67282984ee3c\"" Sep 4 20:29:23.356753 kubelet[2535]: E0904 20:29:23.356702 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 4 20:29:23.364299 containerd[1468]: time="2024-09-04T20:29:23.362918099Z" level=info msg="CreateContainer within sandbox \"e6c728440ffadbbcc13ad89881572b784932596651904e06309a67282984ee3c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 20:29:23.380315 containerd[1468]: time="2024-09-04T20:29:23.378511691Z" level=info msg="CreateContainer within sandbox \"e6c728440ffadbbcc13ad89881572b784932596651904e06309a67282984ee3c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b5dd2bb63ef35a019b919cf86c4d770ed1ee7a38f92d393e87099b32afe08900\"" Sep 4 20:29:23.383780 containerd[1468]: time="2024-09-04T20:29:23.382175788Z" level=info msg="StartContainer for \"b5dd2bb63ef35a019b919cf86c4d770ed1ee7a38f92d393e87099b32afe08900\"" Sep 4 20:29:23.405938 sshd[4061]: pam_unix(sshd:session): session closed for user core Sep 4 20:29:23.414921 systemd[1]: sshd@8-64.23.130.28:22-139.178.68.195:43872.service: Deactivated successfully. Sep 4 20:29:23.419793 systemd[1]: session-9.scope: Deactivated successfully. Sep 4 20:29:23.422369 systemd-logind[1446]: Session 9 logged out. Waiting for processes to exit. Sep 4 20:29:23.424733 systemd-logind[1446]: Removed session 9. Sep 4 20:29:23.447518 systemd[1]: Started cri-containerd-b5dd2bb63ef35a019b919cf86c4d770ed1ee7a38f92d393e87099b32afe08900.scope - libcontainer container b5dd2bb63ef35a019b919cf86c4d770ed1ee7a38f92d393e87099b32afe08900. 
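One small decoding aid for the WorkloadEndpoint dumps above: the port list is printed with Go hex literals, so Port:0x35 is the dns/dns-tcp port 53 and Port:0x23c1 is the metrics port 9153, matching the plain-text port summary earlier in the same trace.

package main

import "fmt"

func main() {
	// Hex port values as printed in the WorkloadEndpointPort dumps above.
	fmt.Println(0x35, 0x23c1) // 53 9153
}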
Sep 4 20:29:23.489814 containerd[1468]: time="2024-09-04T20:29:23.489728815Z" level=info msg="StartContainer for \"b5dd2bb63ef35a019b919cf86c4d770ed1ee7a38f92d393e87099b32afe08900\" returns successfully" Sep 4 20:29:24.071567 kubelet[2535]: E0904 20:29:24.071519 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 4 20:29:24.072390 kubelet[2535]: E0904 20:29:24.071871 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 4 20:29:24.089404 kubelet[2535]: I0904 20:29:24.088876 2535 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-kfqv7" podStartSLOduration=33.088852586 podStartE2EDuration="33.088852586s" podCreationTimestamp="2024-09-04 20:28:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 20:29:24.086575107 +0000 UTC m=+48.557143916" watchObservedRunningTime="2024-09-04 20:29:24.088852586 +0000 UTC m=+48.559421411" Sep 4 20:29:24.253532 systemd-networkd[1369]: caliba970677edf: Gained IPv6LL Sep 4 20:29:24.659042 containerd[1468]: time="2024-09-04T20:29:24.658987688Z" level=info msg="StopPodSandbox for \"d0f5dac683fe78e0ce0bca7ac04d047dc29a89cb2b5a06a87820f915c48d5f43\"" Sep 4 20:29:24.765023 containerd[1468]: 2024-09-04 20:29:24.712 [INFO][4193] k8s.go 608: Cleaning up netns ContainerID="d0f5dac683fe78e0ce0bca7ac04d047dc29a89cb2b5a06a87820f915c48d5f43" Sep 4 20:29:24.765023 containerd[1468]: 2024-09-04 20:29:24.712 [INFO][4193] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="d0f5dac683fe78e0ce0bca7ac04d047dc29a89cb2b5a06a87820f915c48d5f43" iface="eth0" netns="/var/run/netns/cni-25bbd51a-d16e-3d71-f32c-aad737a615a3" Sep 4 20:29:24.765023 containerd[1468]: 2024-09-04 20:29:24.713 [INFO][4193] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="d0f5dac683fe78e0ce0bca7ac04d047dc29a89cb2b5a06a87820f915c48d5f43" iface="eth0" netns="/var/run/netns/cni-25bbd51a-d16e-3d71-f32c-aad737a615a3" Sep 4 20:29:24.765023 containerd[1468]: 2024-09-04 20:29:24.713 [INFO][4193] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="d0f5dac683fe78e0ce0bca7ac04d047dc29a89cb2b5a06a87820f915c48d5f43" iface="eth0" netns="/var/run/netns/cni-25bbd51a-d16e-3d71-f32c-aad737a615a3" Sep 4 20:29:24.765023 containerd[1468]: 2024-09-04 20:29:24.713 [INFO][4193] k8s.go 615: Releasing IP address(es) ContainerID="d0f5dac683fe78e0ce0bca7ac04d047dc29a89cb2b5a06a87820f915c48d5f43" Sep 4 20:29:24.765023 containerd[1468]: 2024-09-04 20:29:24.713 [INFO][4193] utils.go 188: Calico CNI releasing IP address ContainerID="d0f5dac683fe78e0ce0bca7ac04d047dc29a89cb2b5a06a87820f915c48d5f43" Sep 4 20:29:24.765023 containerd[1468]: 2024-09-04 20:29:24.748 [INFO][4199] ipam_plugin.go 417: Releasing address using handleID ContainerID="d0f5dac683fe78e0ce0bca7ac04d047dc29a89cb2b5a06a87820f915c48d5f43" HandleID="k8s-pod-network.d0f5dac683fe78e0ce0bca7ac04d047dc29a89cb2b5a06a87820f915c48d5f43" Workload="ci--3975.2.1--0--09c0a9ae8e-k8s-calico--kube--controllers--68ddb4b88b--9zbcx-eth0" Sep 4 20:29:24.765023 containerd[1468]: 2024-09-04 20:29:24.748 [INFO][4199] ipam_plugin.go 358: About to acquire host-wide IPAM lock. 
Sep 4 20:29:24.765023 containerd[1468]: 2024-09-04 20:29:24.748 [INFO][4199] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 20:29:24.765023 containerd[1468]: 2024-09-04 20:29:24.757 [WARNING][4199] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="d0f5dac683fe78e0ce0bca7ac04d047dc29a89cb2b5a06a87820f915c48d5f43" HandleID="k8s-pod-network.d0f5dac683fe78e0ce0bca7ac04d047dc29a89cb2b5a06a87820f915c48d5f43" Workload="ci--3975.2.1--0--09c0a9ae8e-k8s-calico--kube--controllers--68ddb4b88b--9zbcx-eth0" Sep 4 20:29:24.765023 containerd[1468]: 2024-09-04 20:29:24.758 [INFO][4199] ipam_plugin.go 445: Releasing address using workloadID ContainerID="d0f5dac683fe78e0ce0bca7ac04d047dc29a89cb2b5a06a87820f915c48d5f43" HandleID="k8s-pod-network.d0f5dac683fe78e0ce0bca7ac04d047dc29a89cb2b5a06a87820f915c48d5f43" Workload="ci--3975.2.1--0--09c0a9ae8e-k8s-calico--kube--controllers--68ddb4b88b--9zbcx-eth0" Sep 4 20:29:24.765023 containerd[1468]: 2024-09-04 20:29:24.761 [INFO][4199] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 20:29:24.765023 containerd[1468]: 2024-09-04 20:29:24.763 [INFO][4193] k8s.go 621: Teardown processing complete. ContainerID="d0f5dac683fe78e0ce0bca7ac04d047dc29a89cb2b5a06a87820f915c48d5f43" Sep 4 20:29:24.769465 containerd[1468]: time="2024-09-04T20:29:24.769368490Z" level=info msg="TearDown network for sandbox \"d0f5dac683fe78e0ce0bca7ac04d047dc29a89cb2b5a06a87820f915c48d5f43\" successfully" Sep 4 20:29:24.769465 containerd[1468]: time="2024-09-04T20:29:24.769455674Z" level=info msg="StopPodSandbox for \"d0f5dac683fe78e0ce0bca7ac04d047dc29a89cb2b5a06a87820f915c48d5f43\" returns successfully" Sep 4 20:29:24.771801 containerd[1468]: time="2024-09-04T20:29:24.771764572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68ddb4b88b-9zbcx,Uid:d9d91078-829f-4383-9939-2801dd36abf2,Namespace:calico-system,Attempt:1,}" Sep 4 20:29:24.772861 systemd[1]: run-netns-cni\x2d25bbd51a\x2dd16e\x2d3d71\x2df32c\x2daad737a615a3.mount: Deactivated successfully. 
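This is the third teardown in this log where the IPAM plugin warns "Asked to release address but it doesn't exist. Ignoring" and the StopPodSandbox still returns successfully: the release is treated as idempotent, so a missing allocation is not an error during cleanup. Below is a schematic Go sketch of that pattern; it only mirrors the logged behaviour, it is not Calico's implementation, and the type and function names are made up for illustration.

package main

import (
	"fmt"
	"sync"
)

// ipamStore is a stand-in for the datastore behind the "host-wide IPAM lock"
// messages above; the names here are illustrative, not Calico's.
type ipamStore struct {
	mu          sync.Mutex
	allocations map[string]string // handle -> address
}

// releaseByHandle treats a missing allocation as already released, which is
// the behaviour the WARNING lines above describe.
func (s *ipamStore) releaseByHandle(handle string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	if _, ok := s.allocations[handle]; !ok {
		fmt.Printf("WARNING: asked to release %s but it doesn't exist; ignoring\n", handle)
		return // cleanup still counts as successful
	}
	delete(s.allocations, handle)
	fmt.Println("released", handle)
}

func main() {
	s := &ipamStore{allocations: map[string]string{}}
	// Releasing a handle that was never allocated is not an error.
	s.releaseByHandle("k8s-pod-network.d0f5dac683fe78e0ce0bca7ac04d047dc29a89cb2b5a06a87820f915c48d5f43")
}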
Sep 4 20:29:24.952835 systemd-networkd[1369]: cali6020c14821f: Link UP Sep 4 20:29:24.953062 systemd-networkd[1369]: cali6020c14821f: Gained carrier Sep 4 20:29:24.978645 containerd[1468]: 2024-09-04 20:29:24.845 [INFO][4205] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.2.1--0--09c0a9ae8e-k8s-calico--kube--controllers--68ddb4b88b--9zbcx-eth0 calico-kube-controllers-68ddb4b88b- calico-system d9d91078-829f-4383-9939-2801dd36abf2 821 0 2024-09-04 20:28:57 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:68ddb4b88b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-3975.2.1-0-09c0a9ae8e calico-kube-controllers-68ddb4b88b-9zbcx eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali6020c14821f [] []}} ContainerID="9ccb3730a6ac186acd1bfb24b3d830b87d91818e883910085c6f5ce2baaea504" Namespace="calico-system" Pod="calico-kube-controllers-68ddb4b88b-9zbcx" WorkloadEndpoint="ci--3975.2.1--0--09c0a9ae8e-k8s-calico--kube--controllers--68ddb4b88b--9zbcx-" Sep 4 20:29:24.978645 containerd[1468]: 2024-09-04 20:29:24.845 [INFO][4205] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9ccb3730a6ac186acd1bfb24b3d830b87d91818e883910085c6f5ce2baaea504" Namespace="calico-system" Pod="calico-kube-controllers-68ddb4b88b-9zbcx" WorkloadEndpoint="ci--3975.2.1--0--09c0a9ae8e-k8s-calico--kube--controllers--68ddb4b88b--9zbcx-eth0" Sep 4 20:29:24.978645 containerd[1468]: 2024-09-04 20:29:24.881 [INFO][4216] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9ccb3730a6ac186acd1bfb24b3d830b87d91818e883910085c6f5ce2baaea504" HandleID="k8s-pod-network.9ccb3730a6ac186acd1bfb24b3d830b87d91818e883910085c6f5ce2baaea504" Workload="ci--3975.2.1--0--09c0a9ae8e-k8s-calico--kube--controllers--68ddb4b88b--9zbcx-eth0" Sep 4 20:29:24.978645 containerd[1468]: 2024-09-04 20:29:24.892 [INFO][4216] ipam_plugin.go 270: Auto assigning IP ContainerID="9ccb3730a6ac186acd1bfb24b3d830b87d91818e883910085c6f5ce2baaea504" HandleID="k8s-pod-network.9ccb3730a6ac186acd1bfb24b3d830b87d91818e883910085c6f5ce2baaea504" Workload="ci--3975.2.1--0--09c0a9ae8e-k8s-calico--kube--controllers--68ddb4b88b--9zbcx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002916a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3975.2.1-0-09c0a9ae8e", "pod":"calico-kube-controllers-68ddb4b88b-9zbcx", "timestamp":"2024-09-04 20:29:24.881205624 +0000 UTC"}, Hostname:"ci-3975.2.1-0-09c0a9ae8e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 20:29:24.978645 containerd[1468]: 2024-09-04 20:29:24.892 [INFO][4216] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 20:29:24.978645 containerd[1468]: 2024-09-04 20:29:24.892 [INFO][4216] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 20:29:24.978645 containerd[1468]: 2024-09-04 20:29:24.892 [INFO][4216] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.2.1-0-09c0a9ae8e' Sep 4 20:29:24.978645 containerd[1468]: 2024-09-04 20:29:24.895 [INFO][4216] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9ccb3730a6ac186acd1bfb24b3d830b87d91818e883910085c6f5ce2baaea504" host="ci-3975.2.1-0-09c0a9ae8e" Sep 4 20:29:24.978645 containerd[1468]: 2024-09-04 20:29:24.907 [INFO][4216] ipam.go 372: Looking up existing affinities for host host="ci-3975.2.1-0-09c0a9ae8e" Sep 4 20:29:24.978645 containerd[1468]: 2024-09-04 20:29:24.918 [INFO][4216] ipam.go 489: Trying affinity for 192.168.26.192/26 host="ci-3975.2.1-0-09c0a9ae8e" Sep 4 20:29:24.978645 containerd[1468]: 2024-09-04 20:29:24.922 [INFO][4216] ipam.go 155: Attempting to load block cidr=192.168.26.192/26 host="ci-3975.2.1-0-09c0a9ae8e" Sep 4 20:29:24.978645 containerd[1468]: 2024-09-04 20:29:24.927 [INFO][4216] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.26.192/26 host="ci-3975.2.1-0-09c0a9ae8e" Sep 4 20:29:24.978645 containerd[1468]: 2024-09-04 20:29:24.927 [INFO][4216] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.26.192/26 handle="k8s-pod-network.9ccb3730a6ac186acd1bfb24b3d830b87d91818e883910085c6f5ce2baaea504" host="ci-3975.2.1-0-09c0a9ae8e" Sep 4 20:29:24.978645 containerd[1468]: 2024-09-04 20:29:24.931 [INFO][4216] ipam.go 1685: Creating new handle: k8s-pod-network.9ccb3730a6ac186acd1bfb24b3d830b87d91818e883910085c6f5ce2baaea504 Sep 4 20:29:24.978645 containerd[1468]: 2024-09-04 20:29:24.938 [INFO][4216] ipam.go 1203: Writing block in order to claim IPs block=192.168.26.192/26 handle="k8s-pod-network.9ccb3730a6ac186acd1bfb24b3d830b87d91818e883910085c6f5ce2baaea504" host="ci-3975.2.1-0-09c0a9ae8e" Sep 4 20:29:24.978645 containerd[1468]: 2024-09-04 20:29:24.945 [INFO][4216] ipam.go 1216: Successfully claimed IPs: [192.168.26.195/26] block=192.168.26.192/26 handle="k8s-pod-network.9ccb3730a6ac186acd1bfb24b3d830b87d91818e883910085c6f5ce2baaea504" host="ci-3975.2.1-0-09c0a9ae8e" Sep 4 20:29:24.978645 containerd[1468]: 2024-09-04 20:29:24.945 [INFO][4216] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.26.195/26] handle="k8s-pod-network.9ccb3730a6ac186acd1bfb24b3d830b87d91818e883910085c6f5ce2baaea504" host="ci-3975.2.1-0-09c0a9ae8e" Sep 4 20:29:24.978645 containerd[1468]: 2024-09-04 20:29:24.946 [INFO][4216] ipam_plugin.go 379: Released host-wide IPAM lock. 
Sep 4 20:29:24.978645 containerd[1468]: 2024-09-04 20:29:24.946 [INFO][4216] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.26.195/26] IPv6=[] ContainerID="9ccb3730a6ac186acd1bfb24b3d830b87d91818e883910085c6f5ce2baaea504" HandleID="k8s-pod-network.9ccb3730a6ac186acd1bfb24b3d830b87d91818e883910085c6f5ce2baaea504" Workload="ci--3975.2.1--0--09c0a9ae8e-k8s-calico--kube--controllers--68ddb4b88b--9zbcx-eth0" Sep 4 20:29:24.980892 containerd[1468]: 2024-09-04 20:29:24.948 [INFO][4205] k8s.go 386: Populated endpoint ContainerID="9ccb3730a6ac186acd1bfb24b3d830b87d91818e883910085c6f5ce2baaea504" Namespace="calico-system" Pod="calico-kube-controllers-68ddb4b88b-9zbcx" WorkloadEndpoint="ci--3975.2.1--0--09c0a9ae8e-k8s-calico--kube--controllers--68ddb4b88b--9zbcx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--0--09c0a9ae8e-k8s-calico--kube--controllers--68ddb4b88b--9zbcx-eth0", GenerateName:"calico-kube-controllers-68ddb4b88b-", Namespace:"calico-system", SelfLink:"", UID:"d9d91078-829f-4383-9939-2801dd36abf2", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 20, 28, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68ddb4b88b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-0-09c0a9ae8e", ContainerID:"", Pod:"calico-kube-controllers-68ddb4b88b-9zbcx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.26.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6020c14821f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 20:29:24.980892 containerd[1468]: 2024-09-04 20:29:24.949 [INFO][4205] k8s.go 387: Calico CNI using IPs: [192.168.26.195/32] ContainerID="9ccb3730a6ac186acd1bfb24b3d830b87d91818e883910085c6f5ce2baaea504" Namespace="calico-system" Pod="calico-kube-controllers-68ddb4b88b-9zbcx" WorkloadEndpoint="ci--3975.2.1--0--09c0a9ae8e-k8s-calico--kube--controllers--68ddb4b88b--9zbcx-eth0" Sep 4 20:29:24.980892 containerd[1468]: 2024-09-04 20:29:24.949 [INFO][4205] dataplane_linux.go 68: Setting the host side veth name to cali6020c14821f ContainerID="9ccb3730a6ac186acd1bfb24b3d830b87d91818e883910085c6f5ce2baaea504" Namespace="calico-system" Pod="calico-kube-controllers-68ddb4b88b-9zbcx" WorkloadEndpoint="ci--3975.2.1--0--09c0a9ae8e-k8s-calico--kube--controllers--68ddb4b88b--9zbcx-eth0" Sep 4 20:29:24.980892 containerd[1468]: 2024-09-04 20:29:24.952 [INFO][4205] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="9ccb3730a6ac186acd1bfb24b3d830b87d91818e883910085c6f5ce2baaea504" Namespace="calico-system" Pod="calico-kube-controllers-68ddb4b88b-9zbcx" WorkloadEndpoint="ci--3975.2.1--0--09c0a9ae8e-k8s-calico--kube--controllers--68ddb4b88b--9zbcx-eth0" Sep 4 20:29:24.980892 containerd[1468]: 2024-09-04 20:29:24.953 [INFO][4205] 
k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9ccb3730a6ac186acd1bfb24b3d830b87d91818e883910085c6f5ce2baaea504" Namespace="calico-system" Pod="calico-kube-controllers-68ddb4b88b-9zbcx" WorkloadEndpoint="ci--3975.2.1--0--09c0a9ae8e-k8s-calico--kube--controllers--68ddb4b88b--9zbcx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--0--09c0a9ae8e-k8s-calico--kube--controllers--68ddb4b88b--9zbcx-eth0", GenerateName:"calico-kube-controllers-68ddb4b88b-", Namespace:"calico-system", SelfLink:"", UID:"d9d91078-829f-4383-9939-2801dd36abf2", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 20, 28, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68ddb4b88b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-0-09c0a9ae8e", ContainerID:"9ccb3730a6ac186acd1bfb24b3d830b87d91818e883910085c6f5ce2baaea504", Pod:"calico-kube-controllers-68ddb4b88b-9zbcx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.26.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6020c14821f", MAC:"1a:17:fa:84:45:8c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 20:29:24.980892 containerd[1468]: 2024-09-04 20:29:24.965 [INFO][4205] k8s.go 500: Wrote updated endpoint to datastore ContainerID="9ccb3730a6ac186acd1bfb24b3d830b87d91818e883910085c6f5ce2baaea504" Namespace="calico-system" Pod="calico-kube-controllers-68ddb4b88b-9zbcx" WorkloadEndpoint="ci--3975.2.1--0--09c0a9ae8e-k8s-calico--kube--controllers--68ddb4b88b--9zbcx-eth0" Sep 4 20:29:25.011350 containerd[1468]: time="2024-09-04T20:29:25.011027880Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 20:29:25.011350 containerd[1468]: time="2024-09-04T20:29:25.011100686Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 20:29:25.011350 containerd[1468]: time="2024-09-04T20:29:25.011142852Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 20:29:25.011350 containerd[1468]: time="2024-09-04T20:29:25.011163256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 20:29:25.043569 systemd[1]: Started cri-containerd-9ccb3730a6ac186acd1bfb24b3d830b87d91818e883910085c6f5ce2baaea504.scope - libcontainer container 9ccb3730a6ac186acd1bfb24b3d830b87d91818e883910085c6f5ce2baaea504. 
Sep 4 20:29:25.075765 kubelet[2535]: E0904 20:29:25.075679 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 4 20:29:25.114179 containerd[1468]: time="2024-09-04T20:29:25.114117917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68ddb4b88b-9zbcx,Uid:d9d91078-829f-4383-9939-2801dd36abf2,Namespace:calico-system,Attempt:1,} returns sandbox id \"9ccb3730a6ac186acd1bfb24b3d830b87d91818e883910085c6f5ce2baaea504\"" Sep 4 20:29:25.120050 containerd[1468]: time="2024-09-04T20:29:25.120005553Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\"" Sep 4 20:29:25.658696 containerd[1468]: time="2024-09-04T20:29:25.658652419Z" level=info msg="StopPodSandbox for \"5e9fc370fba866880d30ab3cbc0dd85fc160f9693431c6ea8da9398464435ae4\"" Sep 4 20:29:25.807167 containerd[1468]: 2024-09-04 20:29:25.731 [INFO][4293] k8s.go 608: Cleaning up netns ContainerID="5e9fc370fba866880d30ab3cbc0dd85fc160f9693431c6ea8da9398464435ae4" Sep 4 20:29:25.807167 containerd[1468]: 2024-09-04 20:29:25.731 [INFO][4293] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="5e9fc370fba866880d30ab3cbc0dd85fc160f9693431c6ea8da9398464435ae4" iface="eth0" netns="/var/run/netns/cni-96a5a3a9-879e-9d43-1dca-132bf2c2d2ce" Sep 4 20:29:25.807167 containerd[1468]: 2024-09-04 20:29:25.732 [INFO][4293] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="5e9fc370fba866880d30ab3cbc0dd85fc160f9693431c6ea8da9398464435ae4" iface="eth0" netns="/var/run/netns/cni-96a5a3a9-879e-9d43-1dca-132bf2c2d2ce" Sep 4 20:29:25.807167 containerd[1468]: 2024-09-04 20:29:25.732 [INFO][4293] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="5e9fc370fba866880d30ab3cbc0dd85fc160f9693431c6ea8da9398464435ae4" iface="eth0" netns="/var/run/netns/cni-96a5a3a9-879e-9d43-1dca-132bf2c2d2ce" Sep 4 20:29:25.807167 containerd[1468]: 2024-09-04 20:29:25.732 [INFO][4293] k8s.go 615: Releasing IP address(es) ContainerID="5e9fc370fba866880d30ab3cbc0dd85fc160f9693431c6ea8da9398464435ae4" Sep 4 20:29:25.807167 containerd[1468]: 2024-09-04 20:29:25.732 [INFO][4293] utils.go 188: Calico CNI releasing IP address ContainerID="5e9fc370fba866880d30ab3cbc0dd85fc160f9693431c6ea8da9398464435ae4" Sep 4 20:29:25.807167 containerd[1468]: 2024-09-04 20:29:25.782 [INFO][4299] ipam_plugin.go 417: Releasing address using handleID ContainerID="5e9fc370fba866880d30ab3cbc0dd85fc160f9693431c6ea8da9398464435ae4" HandleID="k8s-pod-network.5e9fc370fba866880d30ab3cbc0dd85fc160f9693431c6ea8da9398464435ae4" Workload="ci--3975.2.1--0--09c0a9ae8e-k8s-csi--node--driver--bsn66-eth0" Sep 4 20:29:25.807167 containerd[1468]: 2024-09-04 20:29:25.782 [INFO][4299] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 20:29:25.807167 containerd[1468]: 2024-09-04 20:29:25.783 [INFO][4299] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 20:29:25.807167 containerd[1468]: 2024-09-04 20:29:25.794 [WARNING][4299] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5e9fc370fba866880d30ab3cbc0dd85fc160f9693431c6ea8da9398464435ae4" HandleID="k8s-pod-network.5e9fc370fba866880d30ab3cbc0dd85fc160f9693431c6ea8da9398464435ae4" Workload="ci--3975.2.1--0--09c0a9ae8e-k8s-csi--node--driver--bsn66-eth0" Sep 4 20:29:25.807167 containerd[1468]: 2024-09-04 20:29:25.794 [INFO][4299] ipam_plugin.go 445: Releasing address using workloadID ContainerID="5e9fc370fba866880d30ab3cbc0dd85fc160f9693431c6ea8da9398464435ae4" HandleID="k8s-pod-network.5e9fc370fba866880d30ab3cbc0dd85fc160f9693431c6ea8da9398464435ae4" Workload="ci--3975.2.1--0--09c0a9ae8e-k8s-csi--node--driver--bsn66-eth0" Sep 4 20:29:25.807167 containerd[1468]: 2024-09-04 20:29:25.798 [INFO][4299] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 20:29:25.807167 containerd[1468]: 2024-09-04 20:29:25.803 [INFO][4293] k8s.go 621: Teardown processing complete. ContainerID="5e9fc370fba866880d30ab3cbc0dd85fc160f9693431c6ea8da9398464435ae4" Sep 4 20:29:25.810308 containerd[1468]: time="2024-09-04T20:29:25.809347238Z" level=info msg="TearDown network for sandbox \"5e9fc370fba866880d30ab3cbc0dd85fc160f9693431c6ea8da9398464435ae4\" successfully" Sep 4 20:29:25.810308 containerd[1468]: time="2024-09-04T20:29:25.809410917Z" level=info msg="StopPodSandbox for \"5e9fc370fba866880d30ab3cbc0dd85fc160f9693431c6ea8da9398464435ae4\" returns successfully" Sep 4 20:29:25.811838 containerd[1468]: time="2024-09-04T20:29:25.811788203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bsn66,Uid:c943a141-800d-4fd7-b526-a302d60b317a,Namespace:calico-system,Attempt:1,}" Sep 4 20:29:25.814696 systemd[1]: run-netns-cni\x2d96a5a3a9\x2d879e\x2d9d43\x2d1dca\x2d132bf2c2d2ce.mount: Deactivated successfully. Sep 4 20:29:26.020510 systemd-networkd[1369]: cali839545eac8d: Link UP Sep 4 20:29:26.023646 systemd-networkd[1369]: cali839545eac8d: Gained carrier Sep 4 20:29:26.061063 containerd[1468]: 2024-09-04 20:29:25.887 [INFO][4309] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.2.1--0--09c0a9ae8e-k8s-csi--node--driver--bsn66-eth0 csi-node-driver- calico-system c943a141-800d-4fd7-b526-a302d60b317a 832 0 2024-09-04 20:28:57 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65cb9bb8f4 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ci-3975.2.1-0-09c0a9ae8e csi-node-driver-bsn66 eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali839545eac8d [] []}} ContainerID="347cd183b6698418b0ea7f0a29fd1cd62c37f5c370cf4b70eaf87397e3e98678" Namespace="calico-system" Pod="csi-node-driver-bsn66" WorkloadEndpoint="ci--3975.2.1--0--09c0a9ae8e-k8s-csi--node--driver--bsn66-" Sep 4 20:29:26.061063 containerd[1468]: 2024-09-04 20:29:25.887 [INFO][4309] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="347cd183b6698418b0ea7f0a29fd1cd62c37f5c370cf4b70eaf87397e3e98678" Namespace="calico-system" Pod="csi-node-driver-bsn66" WorkloadEndpoint="ci--3975.2.1--0--09c0a9ae8e-k8s-csi--node--driver--bsn66-eth0" Sep 4 20:29:26.061063 containerd[1468]: 2024-09-04 20:29:25.936 [INFO][4316] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="347cd183b6698418b0ea7f0a29fd1cd62c37f5c370cf4b70eaf87397e3e98678" HandleID="k8s-pod-network.347cd183b6698418b0ea7f0a29fd1cd62c37f5c370cf4b70eaf87397e3e98678" 
Workload="ci--3975.2.1--0--09c0a9ae8e-k8s-csi--node--driver--bsn66-eth0" Sep 4 20:29:26.061063 containerd[1468]: 2024-09-04 20:29:25.952 [INFO][4316] ipam_plugin.go 270: Auto assigning IP ContainerID="347cd183b6698418b0ea7f0a29fd1cd62c37f5c370cf4b70eaf87397e3e98678" HandleID="k8s-pod-network.347cd183b6698418b0ea7f0a29fd1cd62c37f5c370cf4b70eaf87397e3e98678" Workload="ci--3975.2.1--0--09c0a9ae8e-k8s-csi--node--driver--bsn66-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318040), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3975.2.1-0-09c0a9ae8e", "pod":"csi-node-driver-bsn66", "timestamp":"2024-09-04 20:29:25.936884919 +0000 UTC"}, Hostname:"ci-3975.2.1-0-09c0a9ae8e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 20:29:26.061063 containerd[1468]: 2024-09-04 20:29:25.952 [INFO][4316] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 20:29:26.061063 containerd[1468]: 2024-09-04 20:29:25.952 [INFO][4316] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 20:29:26.061063 containerd[1468]: 2024-09-04 20:29:25.952 [INFO][4316] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.2.1-0-09c0a9ae8e' Sep 4 20:29:26.061063 containerd[1468]: 2024-09-04 20:29:25.955 [INFO][4316] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.347cd183b6698418b0ea7f0a29fd1cd62c37f5c370cf4b70eaf87397e3e98678" host="ci-3975.2.1-0-09c0a9ae8e" Sep 4 20:29:26.061063 containerd[1468]: 2024-09-04 20:29:25.968 [INFO][4316] ipam.go 372: Looking up existing affinities for host host="ci-3975.2.1-0-09c0a9ae8e" Sep 4 20:29:26.061063 containerd[1468]: 2024-09-04 20:29:25.977 [INFO][4316] ipam.go 489: Trying affinity for 192.168.26.192/26 host="ci-3975.2.1-0-09c0a9ae8e" Sep 4 20:29:26.061063 containerd[1468]: 2024-09-04 20:29:25.981 [INFO][4316] ipam.go 155: Attempting to load block cidr=192.168.26.192/26 host="ci-3975.2.1-0-09c0a9ae8e" Sep 4 20:29:26.061063 containerd[1468]: 2024-09-04 20:29:25.986 [INFO][4316] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.26.192/26 host="ci-3975.2.1-0-09c0a9ae8e" Sep 4 20:29:26.061063 containerd[1468]: 2024-09-04 20:29:25.986 [INFO][4316] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.26.192/26 handle="k8s-pod-network.347cd183b6698418b0ea7f0a29fd1cd62c37f5c370cf4b70eaf87397e3e98678" host="ci-3975.2.1-0-09c0a9ae8e" Sep 4 20:29:26.061063 containerd[1468]: 2024-09-04 20:29:25.989 [INFO][4316] ipam.go 1685: Creating new handle: k8s-pod-network.347cd183b6698418b0ea7f0a29fd1cd62c37f5c370cf4b70eaf87397e3e98678 Sep 4 20:29:26.061063 containerd[1468]: 2024-09-04 20:29:25.995 [INFO][4316] ipam.go 1203: Writing block in order to claim IPs block=192.168.26.192/26 handle="k8s-pod-network.347cd183b6698418b0ea7f0a29fd1cd62c37f5c370cf4b70eaf87397e3e98678" host="ci-3975.2.1-0-09c0a9ae8e" Sep 4 20:29:26.061063 containerd[1468]: 2024-09-04 20:29:26.004 [INFO][4316] ipam.go 1216: Successfully claimed IPs: [192.168.26.196/26] block=192.168.26.192/26 handle="k8s-pod-network.347cd183b6698418b0ea7f0a29fd1cd62c37f5c370cf4b70eaf87397e3e98678" host="ci-3975.2.1-0-09c0a9ae8e" Sep 4 20:29:26.061063 containerd[1468]: 2024-09-04 20:29:26.005 [INFO][4316] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.26.196/26] handle="k8s-pod-network.347cd183b6698418b0ea7f0a29fd1cd62c37f5c370cf4b70eaf87397e3e98678" 
host="ci-3975.2.1-0-09c0a9ae8e" Sep 4 20:29:26.061063 containerd[1468]: 2024-09-04 20:29:26.005 [INFO][4316] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 20:29:26.061063 containerd[1468]: 2024-09-04 20:29:26.005 [INFO][4316] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.26.196/26] IPv6=[] ContainerID="347cd183b6698418b0ea7f0a29fd1cd62c37f5c370cf4b70eaf87397e3e98678" HandleID="k8s-pod-network.347cd183b6698418b0ea7f0a29fd1cd62c37f5c370cf4b70eaf87397e3e98678" Workload="ci--3975.2.1--0--09c0a9ae8e-k8s-csi--node--driver--bsn66-eth0" Sep 4 20:29:26.063853 containerd[1468]: 2024-09-04 20:29:26.008 [INFO][4309] k8s.go 386: Populated endpoint ContainerID="347cd183b6698418b0ea7f0a29fd1cd62c37f5c370cf4b70eaf87397e3e98678" Namespace="calico-system" Pod="csi-node-driver-bsn66" WorkloadEndpoint="ci--3975.2.1--0--09c0a9ae8e-k8s-csi--node--driver--bsn66-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--0--09c0a9ae8e-k8s-csi--node--driver--bsn66-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c943a141-800d-4fd7-b526-a302d60b317a", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 20, 28, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65cb9bb8f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-0-09c0a9ae8e", ContainerID:"", Pod:"csi-node-driver-bsn66", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.26.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali839545eac8d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 20:29:26.063853 containerd[1468]: 2024-09-04 20:29:26.009 [INFO][4309] k8s.go 387: Calico CNI using IPs: [192.168.26.196/32] ContainerID="347cd183b6698418b0ea7f0a29fd1cd62c37f5c370cf4b70eaf87397e3e98678" Namespace="calico-system" Pod="csi-node-driver-bsn66" WorkloadEndpoint="ci--3975.2.1--0--09c0a9ae8e-k8s-csi--node--driver--bsn66-eth0" Sep 4 20:29:26.063853 containerd[1468]: 2024-09-04 20:29:26.009 [INFO][4309] dataplane_linux.go 68: Setting the host side veth name to cali839545eac8d ContainerID="347cd183b6698418b0ea7f0a29fd1cd62c37f5c370cf4b70eaf87397e3e98678" Namespace="calico-system" Pod="csi-node-driver-bsn66" WorkloadEndpoint="ci--3975.2.1--0--09c0a9ae8e-k8s-csi--node--driver--bsn66-eth0" Sep 4 20:29:26.063853 containerd[1468]: 2024-09-04 20:29:26.023 [INFO][4309] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="347cd183b6698418b0ea7f0a29fd1cd62c37f5c370cf4b70eaf87397e3e98678" Namespace="calico-system" Pod="csi-node-driver-bsn66" WorkloadEndpoint="ci--3975.2.1--0--09c0a9ae8e-k8s-csi--node--driver--bsn66-eth0" Sep 4 20:29:26.063853 containerd[1468]: 2024-09-04 20:29:26.026 [INFO][4309] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="347cd183b6698418b0ea7f0a29fd1cd62c37f5c370cf4b70eaf87397e3e98678" Namespace="calico-system" Pod="csi-node-driver-bsn66" WorkloadEndpoint="ci--3975.2.1--0--09c0a9ae8e-k8s-csi--node--driver--bsn66-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--0--09c0a9ae8e-k8s-csi--node--driver--bsn66-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c943a141-800d-4fd7-b526-a302d60b317a", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 20, 28, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65cb9bb8f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-0-09c0a9ae8e", ContainerID:"347cd183b6698418b0ea7f0a29fd1cd62c37f5c370cf4b70eaf87397e3e98678", Pod:"csi-node-driver-bsn66", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.26.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali839545eac8d", MAC:"26:e4:b7:1c:4d:13", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 20:29:26.063853 containerd[1468]: 2024-09-04 20:29:26.054 [INFO][4309] k8s.go 500: Wrote updated endpoint to datastore ContainerID="347cd183b6698418b0ea7f0a29fd1cd62c37f5c370cf4b70eaf87397e3e98678" Namespace="calico-system" Pod="csi-node-driver-bsn66" WorkloadEndpoint="ci--3975.2.1--0--09c0a9ae8e-k8s-csi--node--driver--bsn66-eth0" Sep 4 20:29:26.090520 kubelet[2535]: E0904 20:29:26.090461 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 4 20:29:26.141502 containerd[1468]: time="2024-09-04T20:29:26.140648024Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 20:29:26.141502 containerd[1468]: time="2024-09-04T20:29:26.141003365Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 20:29:26.141502 containerd[1468]: time="2024-09-04T20:29:26.141057636Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 20:29:26.141502 containerd[1468]: time="2024-09-04T20:29:26.141078763Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 20:29:26.186522 systemd[1]: Started cri-containerd-347cd183b6698418b0ea7f0a29fd1cd62c37f5c370cf4b70eaf87397e3e98678.scope - libcontainer container 347cd183b6698418b0ea7f0a29fd1cd62c37f5c370cf4b70eaf87397e3e98678. 
Sep 4 20:29:26.222597 containerd[1468]: time="2024-09-04T20:29:26.222544130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bsn66,Uid:c943a141-800d-4fd7-b526-a302d60b317a,Namespace:calico-system,Attempt:1,} returns sandbox id \"347cd183b6698418b0ea7f0a29fd1cd62c37f5c370cf4b70eaf87397e3e98678\"" Sep 4 20:29:26.814461 systemd-networkd[1369]: cali6020c14821f: Gained IPv6LL Sep 4 20:29:27.507428 containerd[1468]: time="2024-09-04T20:29:27.507358039Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 20:29:27.508871 containerd[1468]: time="2024-09-04T20:29:27.508812476Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.1: active requests=0, bytes read=33507125" Sep 4 20:29:27.509756 containerd[1468]: time="2024-09-04T20:29:27.509717225Z" level=info msg="ImageCreate event name:\"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 20:29:27.520264 containerd[1468]: time="2024-09-04T20:29:27.519855887Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 20:29:27.524384 containerd[1468]: time="2024-09-04T20:29:27.524329804Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" with image id \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\", size \"34999494\" in 2.404277035s" Sep 4 20:29:27.524567 containerd[1468]: time="2024-09-04T20:29:27.524425320Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" returns image reference \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\"" Sep 4 20:29:27.527160 containerd[1468]: time="2024-09-04T20:29:27.527113907Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\"" Sep 4 20:29:27.551773 containerd[1468]: time="2024-09-04T20:29:27.551720908Z" level=info msg="CreateContainer within sandbox \"9ccb3730a6ac186acd1bfb24b3d830b87d91818e883910085c6f5ce2baaea504\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 4 20:29:27.608388 containerd[1468]: time="2024-09-04T20:29:27.608285728Z" level=info msg="CreateContainer within sandbox \"9ccb3730a6ac186acd1bfb24b3d830b87d91818e883910085c6f5ce2baaea504\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"d4c3283502c96b2b56d362636c24b860fee91f34548a6793ccb37510e1896fc3\"" Sep 4 20:29:27.609481 containerd[1468]: time="2024-09-04T20:29:27.609452491Z" level=info msg="StartContainer for \"d4c3283502c96b2b56d362636c24b860fee91f34548a6793ccb37510e1896fc3\"" Sep 4 20:29:27.646358 systemd-networkd[1369]: cali839545eac8d: Gained IPv6LL Sep 4 20:29:27.668777 systemd[1]: Started cri-containerd-d4c3283502c96b2b56d362636c24b860fee91f34548a6793ccb37510e1896fc3.scope - libcontainer container d4c3283502c96b2b56d362636c24b860fee91f34548a6793ccb37510e1896fc3. 
Sep 4 20:29:27.730852 containerd[1468]: time="2024-09-04T20:29:27.730756958Z" level=info msg="StartContainer for \"d4c3283502c96b2b56d362636c24b860fee91f34548a6793ccb37510e1896fc3\" returns successfully" Sep 4 20:29:28.200198 kubelet[2535]: I0904 20:29:28.199585 2535 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-68ddb4b88b-9zbcx" podStartSLOduration=28.790728943 podStartE2EDuration="31.199557715s" podCreationTimestamp="2024-09-04 20:28:57 +0000 UTC" firstStartedPulling="2024-09-04 20:29:25.116489771 +0000 UTC m=+49.587058574" lastFinishedPulling="2024-09-04 20:29:27.525318559 +0000 UTC m=+51.995887346" observedRunningTime="2024-09-04 20:29:28.127402951 +0000 UTC m=+52.597971759" watchObservedRunningTime="2024-09-04 20:29:28.199557715 +0000 UTC m=+52.670126514" Sep 4 20:29:28.434552 systemd[1]: Started sshd@9-64.23.130.28:22-139.178.68.195:53234.service - OpenSSH per-connection server daemon (139.178.68.195:53234). Sep 4 20:29:28.517837 sshd[4445]: Accepted publickey for core from 139.178.68.195 port 53234 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY Sep 4 20:29:28.519319 sshd[4445]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 20:29:28.528481 systemd-logind[1446]: New session 10 of user core. Sep 4 20:29:28.532511 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 4 20:29:28.871188 sshd[4445]: pam_unix(sshd:session): session closed for user core Sep 4 20:29:28.882555 systemd[1]: sshd@9-64.23.130.28:22-139.178.68.195:53234.service: Deactivated successfully. Sep 4 20:29:28.888450 systemd[1]: session-10.scope: Deactivated successfully. Sep 4 20:29:28.891534 systemd-logind[1446]: Session 10 logged out. Waiting for processes to exit. Sep 4 20:29:28.902815 systemd[1]: Started sshd@10-64.23.130.28:22-139.178.68.195:53250.service - OpenSSH per-connection server daemon (139.178.68.195:53250). Sep 4 20:29:28.905592 systemd-logind[1446]: Removed session 10. Sep 4 20:29:28.965792 sshd[4459]: Accepted publickey for core from 139.178.68.195 port 53250 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY Sep 4 20:29:28.967550 sshd[4459]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 20:29:28.976641 systemd-logind[1446]: New session 11 of user core. Sep 4 20:29:28.983769 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 4 20:29:29.255377 sshd[4459]: pam_unix(sshd:session): session closed for user core Sep 4 20:29:29.276104 systemd[1]: sshd@10-64.23.130.28:22-139.178.68.195:53250.service: Deactivated successfully. Sep 4 20:29:29.282158 systemd[1]: session-11.scope: Deactivated successfully. Sep 4 20:29:29.288075 systemd-logind[1446]: Session 11 logged out. Waiting for processes to exit. Sep 4 20:29:29.298800 systemd[1]: Started sshd@11-64.23.130.28:22-139.178.68.195:53262.service - OpenSSH per-connection server daemon (139.178.68.195:53262). Sep 4 20:29:29.311513 systemd-logind[1446]: Removed session 11. Sep 4 20:29:29.424409 sshd[4474]: Accepted publickey for core from 139.178.68.195 port 53262 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY Sep 4 20:29:29.430767 sshd[4474]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 20:29:29.447330 systemd-logind[1446]: New session 12 of user core. Sep 4 20:29:29.453371 systemd[1]: Started session-12.scope - Session 12 of User core. 
Sep 4 20:29:29.480978 containerd[1468]: time="2024-09-04T20:29:29.480920158Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 20:29:29.483430 containerd[1468]: time="2024-09-04T20:29:29.482623882Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.1: active requests=0, bytes read=7642081" Sep 4 20:29:29.483430 containerd[1468]: time="2024-09-04T20:29:29.483301378Z" level=info msg="ImageCreate event name:\"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 20:29:29.486888 containerd[1468]: time="2024-09-04T20:29:29.486518005Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 20:29:29.487385 containerd[1468]: time="2024-09-04T20:29:29.487351298Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.1\" with image id \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\", size \"9134482\" in 1.960198262s" Sep 4 20:29:29.487498 containerd[1468]: time="2024-09-04T20:29:29.487390215Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\" returns image reference \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\"" Sep 4 20:29:29.495319 containerd[1468]: time="2024-09-04T20:29:29.495158592Z" level=info msg="CreateContainer within sandbox \"347cd183b6698418b0ea7f0a29fd1cd62c37f5c370cf4b70eaf87397e3e98678\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 4 20:29:29.522662 containerd[1468]: time="2024-09-04T20:29:29.522520747Z" level=info msg="CreateContainer within sandbox \"347cd183b6698418b0ea7f0a29fd1cd62c37f5c370cf4b70eaf87397e3e98678\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"30607b030ee25a3b896162636757ed14c2b8c83c96d1b288131864aba913b530\"" Sep 4 20:29:29.523800 containerd[1468]: time="2024-09-04T20:29:29.523753635Z" level=info msg="StartContainer for \"30607b030ee25a3b896162636757ed14c2b8c83c96d1b288131864aba913b530\"" Sep 4 20:29:29.723648 systemd[1]: Started cri-containerd-30607b030ee25a3b896162636757ed14c2b8c83c96d1b288131864aba913b530.scope - libcontainer container 30607b030ee25a3b896162636757ed14c2b8c83c96d1b288131864aba913b530. Sep 4 20:29:29.790868 sshd[4474]: pam_unix(sshd:session): session closed for user core Sep 4 20:29:29.801960 systemd[1]: sshd@11-64.23.130.28:22-139.178.68.195:53262.service: Deactivated successfully. Sep 4 20:29:29.806611 systemd[1]: session-12.scope: Deactivated successfully. Sep 4 20:29:29.810715 systemd-logind[1446]: Session 12 logged out. Waiting for processes to exit. Sep 4 20:29:29.814761 systemd-logind[1446]: Removed session 12. 
Sep 4 20:29:29.849926 containerd[1468]: time="2024-09-04T20:29:29.849871667Z" level=info msg="StartContainer for \"30607b030ee25a3b896162636757ed14c2b8c83c96d1b288131864aba913b530\" returns successfully" Sep 4 20:29:29.853149 containerd[1468]: time="2024-09-04T20:29:29.852349498Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\"" Sep 4 20:29:31.654380 containerd[1468]: time="2024-09-04T20:29:31.654246075Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 20:29:31.656668 containerd[1468]: time="2024-09-04T20:29:31.655893843Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1: active requests=0, bytes read=12907822" Sep 4 20:29:31.659064 containerd[1468]: time="2024-09-04T20:29:31.657516267Z" level=info msg="ImageCreate event name:\"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 20:29:31.664162 containerd[1468]: time="2024-09-04T20:29:31.664108109Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 20:29:31.666640 containerd[1468]: time="2024-09-04T20:29:31.666456656Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" with image id \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\", size \"14400175\" in 1.814061944s" Sep 4 20:29:31.666640 containerd[1468]: time="2024-09-04T20:29:31.666551294Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" returns image reference \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\"" Sep 4 20:29:31.673370 containerd[1468]: time="2024-09-04T20:29:31.673277890Z" level=info msg="CreateContainer within sandbox \"347cd183b6698418b0ea7f0a29fd1cd62c37f5c370cf4b70eaf87397e3e98678\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 4 20:29:31.696293 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2709583387.mount: Deactivated successfully. Sep 4 20:29:31.700819 containerd[1468]: time="2024-09-04T20:29:31.700764796Z" level=info msg="CreateContainer within sandbox \"347cd183b6698418b0ea7f0a29fd1cd62c37f5c370cf4b70eaf87397e3e98678\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"1e648becc47e796a7bc53a6011beb9245d059f795ea535a0f841c24bf05eff1c\"" Sep 4 20:29:31.702476 containerd[1468]: time="2024-09-04T20:29:31.701898680Z" level=info msg="StartContainer for \"1e648becc47e796a7bc53a6011beb9245d059f795ea535a0f841c24bf05eff1c\"" Sep 4 20:29:31.755474 systemd[1]: run-containerd-runc-k8s.io-1e648becc47e796a7bc53a6011beb9245d059f795ea535a0f841c24bf05eff1c-runc.Q7PCKS.mount: Deactivated successfully. Sep 4 20:29:31.765657 systemd[1]: Started cri-containerd-1e648becc47e796a7bc53a6011beb9245d059f795ea535a0f841c24bf05eff1c.scope - libcontainer container 1e648becc47e796a7bc53a6011beb9245d059f795ea535a0f841c24bf05eff1c. 
Sep 4 20:29:31.811299 containerd[1468]: time="2024-09-04T20:29:31.810865357Z" level=info msg="StartContainer for \"1e648becc47e796a7bc53a6011beb9245d059f795ea535a0f841c24bf05eff1c\" returns successfully" Sep 4 20:29:32.144122 kubelet[2535]: I0904 20:29:32.144044 2535 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-bsn66" podStartSLOduration=29.705988155 podStartE2EDuration="35.144026588s" podCreationTimestamp="2024-09-04 20:28:57 +0000 UTC" firstStartedPulling="2024-09-04 20:29:26.230981677 +0000 UTC m=+50.701550475" lastFinishedPulling="2024-09-04 20:29:31.669020102 +0000 UTC m=+56.139588908" observedRunningTime="2024-09-04 20:29:32.143781799 +0000 UTC m=+56.614350606" watchObservedRunningTime="2024-09-04 20:29:32.144026588 +0000 UTC m=+56.614595395" Sep 4 20:29:32.871014 kubelet[2535]: I0904 20:29:32.870932 2535 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 4 20:29:32.874309 kubelet[2535]: I0904 20:29:32.874264 2535 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 4 20:29:34.810469 systemd[1]: Started sshd@12-64.23.130.28:22-139.178.68.195:53264.service - OpenSSH per-connection server daemon (139.178.68.195:53264). Sep 4 20:29:34.924967 sshd[4574]: Accepted publickey for core from 139.178.68.195 port 53264 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY Sep 4 20:29:34.927442 sshd[4574]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 20:29:34.934741 systemd-logind[1446]: New session 13 of user core. Sep 4 20:29:34.940535 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 4 20:29:35.357658 sshd[4574]: pam_unix(sshd:session): session closed for user core Sep 4 20:29:35.370623 systemd[1]: sshd@12-64.23.130.28:22-139.178.68.195:53264.service: Deactivated successfully. Sep 4 20:29:35.374089 systemd[1]: session-13.scope: Deactivated successfully. Sep 4 20:29:35.378448 systemd-logind[1446]: Session 13 logged out. Waiting for processes to exit. Sep 4 20:29:35.383381 systemd-logind[1446]: Removed session 13. Sep 4 20:29:35.688909 containerd[1468]: time="2024-09-04T20:29:35.688470166Z" level=info msg="StopPodSandbox for \"85bc2484df33423ca10209e10cfda548e62b4c2cf1825fff7e60e36b7c284d3f\"" Sep 4 20:29:35.835411 containerd[1468]: 2024-09-04 20:29:35.759 [WARNING][4605] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="85bc2484df33423ca10209e10cfda548e62b4c2cf1825fff7e60e36b7c284d3f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--0--09c0a9ae8e-k8s-coredns--7db6d8ff4d--kfqv7-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"cf38c19c-6a60-4a36-8a3a-1b512f9dcdfb", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 20, 28, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-0-09c0a9ae8e", ContainerID:"e6c728440ffadbbcc13ad89881572b784932596651904e06309a67282984ee3c", Pod:"coredns-7db6d8ff4d-kfqv7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.26.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliba970677edf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 20:29:35.835411 containerd[1468]: 2024-09-04 20:29:35.759 [INFO][4605] k8s.go 608: Cleaning up netns ContainerID="85bc2484df33423ca10209e10cfda548e62b4c2cf1825fff7e60e36b7c284d3f" Sep 4 20:29:35.835411 containerd[1468]: 2024-09-04 20:29:35.759 [INFO][4605] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="85bc2484df33423ca10209e10cfda548e62b4c2cf1825fff7e60e36b7c284d3f" iface="eth0" netns="" Sep 4 20:29:35.835411 containerd[1468]: 2024-09-04 20:29:35.759 [INFO][4605] k8s.go 615: Releasing IP address(es) ContainerID="85bc2484df33423ca10209e10cfda548e62b4c2cf1825fff7e60e36b7c284d3f" Sep 4 20:29:35.835411 containerd[1468]: 2024-09-04 20:29:35.759 [INFO][4605] utils.go 188: Calico CNI releasing IP address ContainerID="85bc2484df33423ca10209e10cfda548e62b4c2cf1825fff7e60e36b7c284d3f" Sep 4 20:29:35.835411 containerd[1468]: 2024-09-04 20:29:35.818 [INFO][4612] ipam_plugin.go 417: Releasing address using handleID ContainerID="85bc2484df33423ca10209e10cfda548e62b4c2cf1825fff7e60e36b7c284d3f" HandleID="k8s-pod-network.85bc2484df33423ca10209e10cfda548e62b4c2cf1825fff7e60e36b7c284d3f" Workload="ci--3975.2.1--0--09c0a9ae8e-k8s-coredns--7db6d8ff4d--kfqv7-eth0" Sep 4 20:29:35.835411 containerd[1468]: 2024-09-04 20:29:35.818 [INFO][4612] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 20:29:35.835411 containerd[1468]: 2024-09-04 20:29:35.818 [INFO][4612] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 20:29:35.835411 containerd[1468]: 2024-09-04 20:29:35.828 [WARNING][4612] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="85bc2484df33423ca10209e10cfda548e62b4c2cf1825fff7e60e36b7c284d3f" HandleID="k8s-pod-network.85bc2484df33423ca10209e10cfda548e62b4c2cf1825fff7e60e36b7c284d3f" Workload="ci--3975.2.1--0--09c0a9ae8e-k8s-coredns--7db6d8ff4d--kfqv7-eth0" Sep 4 20:29:35.835411 containerd[1468]: 2024-09-04 20:29:35.828 [INFO][4612] ipam_plugin.go 445: Releasing address using workloadID ContainerID="85bc2484df33423ca10209e10cfda548e62b4c2cf1825fff7e60e36b7c284d3f" HandleID="k8s-pod-network.85bc2484df33423ca10209e10cfda548e62b4c2cf1825fff7e60e36b7c284d3f" Workload="ci--3975.2.1--0--09c0a9ae8e-k8s-coredns--7db6d8ff4d--kfqv7-eth0" Sep 4 20:29:35.835411 containerd[1468]: 2024-09-04 20:29:35.830 [INFO][4612] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 20:29:35.835411 containerd[1468]: 2024-09-04 20:29:35.832 [INFO][4605] k8s.go 621: Teardown processing complete. ContainerID="85bc2484df33423ca10209e10cfda548e62b4c2cf1825fff7e60e36b7c284d3f" Sep 4 20:29:35.835411 containerd[1468]: time="2024-09-04T20:29:35.835181622Z" level=info msg="TearDown network for sandbox \"85bc2484df33423ca10209e10cfda548e62b4c2cf1825fff7e60e36b7c284d3f\" successfully" Sep 4 20:29:35.835411 containerd[1468]: time="2024-09-04T20:29:35.835218625Z" level=info msg="StopPodSandbox for \"85bc2484df33423ca10209e10cfda548e62b4c2cf1825fff7e60e36b7c284d3f\" returns successfully" Sep 4 20:29:35.843290 containerd[1468]: time="2024-09-04T20:29:35.842151285Z" level=info msg="RemovePodSandbox for \"85bc2484df33423ca10209e10cfda548e62b4c2cf1825fff7e60e36b7c284d3f\"" Sep 4 20:29:35.843290 containerd[1468]: time="2024-09-04T20:29:35.842250122Z" level=info msg="Forcibly stopping sandbox \"85bc2484df33423ca10209e10cfda548e62b4c2cf1825fff7e60e36b7c284d3f\"" Sep 4 20:29:35.955985 containerd[1468]: 2024-09-04 20:29:35.909 [WARNING][4631] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="85bc2484df33423ca10209e10cfda548e62b4c2cf1825fff7e60e36b7c284d3f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--0--09c0a9ae8e-k8s-coredns--7db6d8ff4d--kfqv7-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"cf38c19c-6a60-4a36-8a3a-1b512f9dcdfb", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 20, 28, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-0-09c0a9ae8e", ContainerID:"e6c728440ffadbbcc13ad89881572b784932596651904e06309a67282984ee3c", Pod:"coredns-7db6d8ff4d-kfqv7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.26.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliba970677edf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 20:29:35.955985 containerd[1468]: 2024-09-04 20:29:35.910 [INFO][4631] k8s.go 608: Cleaning up netns ContainerID="85bc2484df33423ca10209e10cfda548e62b4c2cf1825fff7e60e36b7c284d3f" Sep 4 20:29:35.955985 containerd[1468]: 2024-09-04 20:29:35.910 [INFO][4631] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="85bc2484df33423ca10209e10cfda548e62b4c2cf1825fff7e60e36b7c284d3f" iface="eth0" netns="" Sep 4 20:29:35.955985 containerd[1468]: 2024-09-04 20:29:35.910 [INFO][4631] k8s.go 615: Releasing IP address(es) ContainerID="85bc2484df33423ca10209e10cfda548e62b4c2cf1825fff7e60e36b7c284d3f" Sep 4 20:29:35.955985 containerd[1468]: 2024-09-04 20:29:35.910 [INFO][4631] utils.go 188: Calico CNI releasing IP address ContainerID="85bc2484df33423ca10209e10cfda548e62b4c2cf1825fff7e60e36b7c284d3f" Sep 4 20:29:35.955985 containerd[1468]: 2024-09-04 20:29:35.940 [INFO][4637] ipam_plugin.go 417: Releasing address using handleID ContainerID="85bc2484df33423ca10209e10cfda548e62b4c2cf1825fff7e60e36b7c284d3f" HandleID="k8s-pod-network.85bc2484df33423ca10209e10cfda548e62b4c2cf1825fff7e60e36b7c284d3f" Workload="ci--3975.2.1--0--09c0a9ae8e-k8s-coredns--7db6d8ff4d--kfqv7-eth0" Sep 4 20:29:35.955985 containerd[1468]: 2024-09-04 20:29:35.940 [INFO][4637] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 20:29:35.955985 containerd[1468]: 2024-09-04 20:29:35.940 [INFO][4637] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 20:29:35.955985 containerd[1468]: 2024-09-04 20:29:35.948 [WARNING][4637] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="85bc2484df33423ca10209e10cfda548e62b4c2cf1825fff7e60e36b7c284d3f" HandleID="k8s-pod-network.85bc2484df33423ca10209e10cfda548e62b4c2cf1825fff7e60e36b7c284d3f" Workload="ci--3975.2.1--0--09c0a9ae8e-k8s-coredns--7db6d8ff4d--kfqv7-eth0" Sep 4 20:29:35.955985 containerd[1468]: 2024-09-04 20:29:35.948 [INFO][4637] ipam_plugin.go 445: Releasing address using workloadID ContainerID="85bc2484df33423ca10209e10cfda548e62b4c2cf1825fff7e60e36b7c284d3f" HandleID="k8s-pod-network.85bc2484df33423ca10209e10cfda548e62b4c2cf1825fff7e60e36b7c284d3f" Workload="ci--3975.2.1--0--09c0a9ae8e-k8s-coredns--7db6d8ff4d--kfqv7-eth0" Sep 4 20:29:35.955985 containerd[1468]: 2024-09-04 20:29:35.951 [INFO][4637] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 20:29:35.955985 containerd[1468]: 2024-09-04 20:29:35.953 [INFO][4631] k8s.go 621: Teardown processing complete. ContainerID="85bc2484df33423ca10209e10cfda548e62b4c2cf1825fff7e60e36b7c284d3f" Sep 4 20:29:35.957976 containerd[1468]: time="2024-09-04T20:29:35.956055106Z" level=info msg="TearDown network for sandbox \"85bc2484df33423ca10209e10cfda548e62b4c2cf1825fff7e60e36b7c284d3f\" successfully" Sep 4 20:29:35.962132 containerd[1468]: time="2024-09-04T20:29:35.962047829Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"85bc2484df33423ca10209e10cfda548e62b4c2cf1825fff7e60e36b7c284d3f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 4 20:29:35.962355 containerd[1468]: time="2024-09-04T20:29:35.962189625Z" level=info msg="RemovePodSandbox \"85bc2484df33423ca10209e10cfda548e62b4c2cf1825fff7e60e36b7c284d3f\" returns successfully" Sep 4 20:29:35.962991 containerd[1468]: time="2024-09-04T20:29:35.962958566Z" level=info msg="StopPodSandbox for \"5e9fc370fba866880d30ab3cbc0dd85fc160f9693431c6ea8da9398464435ae4\"" Sep 4 20:29:36.089786 containerd[1468]: 2024-09-04 20:29:36.023 [WARNING][4656] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5e9fc370fba866880d30ab3cbc0dd85fc160f9693431c6ea8da9398464435ae4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--0--09c0a9ae8e-k8s-csi--node--driver--bsn66-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c943a141-800d-4fd7-b526-a302d60b317a", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 20, 28, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65cb9bb8f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-0-09c0a9ae8e", ContainerID:"347cd183b6698418b0ea7f0a29fd1cd62c37f5c370cf4b70eaf87397e3e98678", Pod:"csi-node-driver-bsn66", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.26.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali839545eac8d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 20:29:36.089786 containerd[1468]: 2024-09-04 20:29:36.026 [INFO][4656] k8s.go 608: Cleaning up netns ContainerID="5e9fc370fba866880d30ab3cbc0dd85fc160f9693431c6ea8da9398464435ae4" Sep 4 20:29:36.089786 containerd[1468]: 2024-09-04 20:29:36.026 [INFO][4656] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="5e9fc370fba866880d30ab3cbc0dd85fc160f9693431c6ea8da9398464435ae4" iface="eth0" netns="" Sep 4 20:29:36.089786 containerd[1468]: 2024-09-04 20:29:36.026 [INFO][4656] k8s.go 615: Releasing IP address(es) ContainerID="5e9fc370fba866880d30ab3cbc0dd85fc160f9693431c6ea8da9398464435ae4" Sep 4 20:29:36.089786 containerd[1468]: 2024-09-04 20:29:36.026 [INFO][4656] utils.go 188: Calico CNI releasing IP address ContainerID="5e9fc370fba866880d30ab3cbc0dd85fc160f9693431c6ea8da9398464435ae4" Sep 4 20:29:36.089786 containerd[1468]: 2024-09-04 20:29:36.061 [INFO][4663] ipam_plugin.go 417: Releasing address using handleID ContainerID="5e9fc370fba866880d30ab3cbc0dd85fc160f9693431c6ea8da9398464435ae4" HandleID="k8s-pod-network.5e9fc370fba866880d30ab3cbc0dd85fc160f9693431c6ea8da9398464435ae4" Workload="ci--3975.2.1--0--09c0a9ae8e-k8s-csi--node--driver--bsn66-eth0" Sep 4 20:29:36.089786 containerd[1468]: 2024-09-04 20:29:36.061 [INFO][4663] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 20:29:36.089786 containerd[1468]: 2024-09-04 20:29:36.062 [INFO][4663] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 20:29:36.089786 containerd[1468]: 2024-09-04 20:29:36.075 [WARNING][4663] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5e9fc370fba866880d30ab3cbc0dd85fc160f9693431c6ea8da9398464435ae4" HandleID="k8s-pod-network.5e9fc370fba866880d30ab3cbc0dd85fc160f9693431c6ea8da9398464435ae4" Workload="ci--3975.2.1--0--09c0a9ae8e-k8s-csi--node--driver--bsn66-eth0" Sep 4 20:29:36.089786 containerd[1468]: 2024-09-04 20:29:36.075 [INFO][4663] ipam_plugin.go 445: Releasing address using workloadID ContainerID="5e9fc370fba866880d30ab3cbc0dd85fc160f9693431c6ea8da9398464435ae4" HandleID="k8s-pod-network.5e9fc370fba866880d30ab3cbc0dd85fc160f9693431c6ea8da9398464435ae4" Workload="ci--3975.2.1--0--09c0a9ae8e-k8s-csi--node--driver--bsn66-eth0" Sep 4 20:29:36.089786 containerd[1468]: 2024-09-04 20:29:36.083 [INFO][4663] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 20:29:36.089786 containerd[1468]: 2024-09-04 20:29:36.086 [INFO][4656] k8s.go 621: Teardown processing complete. ContainerID="5e9fc370fba866880d30ab3cbc0dd85fc160f9693431c6ea8da9398464435ae4" Sep 4 20:29:36.091333 containerd[1468]: time="2024-09-04T20:29:36.089861554Z" level=info msg="TearDown network for sandbox \"5e9fc370fba866880d30ab3cbc0dd85fc160f9693431c6ea8da9398464435ae4\" successfully" Sep 4 20:29:36.091333 containerd[1468]: time="2024-09-04T20:29:36.089920933Z" level=info msg="StopPodSandbox for \"5e9fc370fba866880d30ab3cbc0dd85fc160f9693431c6ea8da9398464435ae4\" returns successfully" Sep 4 20:29:36.091333 containerd[1468]: time="2024-09-04T20:29:36.090761291Z" level=info msg="RemovePodSandbox for \"5e9fc370fba866880d30ab3cbc0dd85fc160f9693431c6ea8da9398464435ae4\"" Sep 4 20:29:36.091333 containerd[1468]: time="2024-09-04T20:29:36.091072890Z" level=info msg="Forcibly stopping sandbox \"5e9fc370fba866880d30ab3cbc0dd85fc160f9693431c6ea8da9398464435ae4\"" Sep 4 20:29:36.210170 containerd[1468]: 2024-09-04 20:29:36.142 [WARNING][4681] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5e9fc370fba866880d30ab3cbc0dd85fc160f9693431c6ea8da9398464435ae4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--0--09c0a9ae8e-k8s-csi--node--driver--bsn66-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c943a141-800d-4fd7-b526-a302d60b317a", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 20, 28, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65cb9bb8f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-0-09c0a9ae8e", ContainerID:"347cd183b6698418b0ea7f0a29fd1cd62c37f5c370cf4b70eaf87397e3e98678", Pod:"csi-node-driver-bsn66", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.26.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali839545eac8d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 20:29:36.210170 containerd[1468]: 2024-09-04 20:29:36.142 [INFO][4681] k8s.go 608: Cleaning up netns ContainerID="5e9fc370fba866880d30ab3cbc0dd85fc160f9693431c6ea8da9398464435ae4" Sep 4 20:29:36.210170 containerd[1468]: 2024-09-04 20:29:36.142 [INFO][4681] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="5e9fc370fba866880d30ab3cbc0dd85fc160f9693431c6ea8da9398464435ae4" iface="eth0" netns="" Sep 4 20:29:36.210170 containerd[1468]: 2024-09-04 20:29:36.142 [INFO][4681] k8s.go 615: Releasing IP address(es) ContainerID="5e9fc370fba866880d30ab3cbc0dd85fc160f9693431c6ea8da9398464435ae4" Sep 4 20:29:36.210170 containerd[1468]: 2024-09-04 20:29:36.142 [INFO][4681] utils.go 188: Calico CNI releasing IP address ContainerID="5e9fc370fba866880d30ab3cbc0dd85fc160f9693431c6ea8da9398464435ae4" Sep 4 20:29:36.210170 containerd[1468]: 2024-09-04 20:29:36.190 [INFO][4689] ipam_plugin.go 417: Releasing address using handleID ContainerID="5e9fc370fba866880d30ab3cbc0dd85fc160f9693431c6ea8da9398464435ae4" HandleID="k8s-pod-network.5e9fc370fba866880d30ab3cbc0dd85fc160f9693431c6ea8da9398464435ae4" Workload="ci--3975.2.1--0--09c0a9ae8e-k8s-csi--node--driver--bsn66-eth0" Sep 4 20:29:36.210170 containerd[1468]: 2024-09-04 20:29:36.194 [INFO][4689] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 20:29:36.210170 containerd[1468]: 2024-09-04 20:29:36.194 [INFO][4689] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 20:29:36.210170 containerd[1468]: 2024-09-04 20:29:36.204 [WARNING][4689] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5e9fc370fba866880d30ab3cbc0dd85fc160f9693431c6ea8da9398464435ae4" HandleID="k8s-pod-network.5e9fc370fba866880d30ab3cbc0dd85fc160f9693431c6ea8da9398464435ae4" Workload="ci--3975.2.1--0--09c0a9ae8e-k8s-csi--node--driver--bsn66-eth0" Sep 4 20:29:36.210170 containerd[1468]: 2024-09-04 20:29:36.204 [INFO][4689] ipam_plugin.go 445: Releasing address using workloadID ContainerID="5e9fc370fba866880d30ab3cbc0dd85fc160f9693431c6ea8da9398464435ae4" HandleID="k8s-pod-network.5e9fc370fba866880d30ab3cbc0dd85fc160f9693431c6ea8da9398464435ae4" Workload="ci--3975.2.1--0--09c0a9ae8e-k8s-csi--node--driver--bsn66-eth0" Sep 4 20:29:36.210170 containerd[1468]: 2024-09-04 20:29:36.206 [INFO][4689] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 20:29:36.210170 containerd[1468]: 2024-09-04 20:29:36.208 [INFO][4681] k8s.go 621: Teardown processing complete. ContainerID="5e9fc370fba866880d30ab3cbc0dd85fc160f9693431c6ea8da9398464435ae4" Sep 4 20:29:36.210170 containerd[1468]: time="2024-09-04T20:29:36.210141687Z" level=info msg="TearDown network for sandbox \"5e9fc370fba866880d30ab3cbc0dd85fc160f9693431c6ea8da9398464435ae4\" successfully" Sep 4 20:29:36.213815 containerd[1468]: time="2024-09-04T20:29:36.213731050Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5e9fc370fba866880d30ab3cbc0dd85fc160f9693431c6ea8da9398464435ae4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 4 20:29:36.214003 containerd[1468]: time="2024-09-04T20:29:36.213832149Z" level=info msg="RemovePodSandbox \"5e9fc370fba866880d30ab3cbc0dd85fc160f9693431c6ea8da9398464435ae4\" returns successfully" Sep 4 20:29:36.214488 containerd[1468]: time="2024-09-04T20:29:36.214451687Z" level=info msg="StopPodSandbox for \"d0f5dac683fe78e0ce0bca7ac04d047dc29a89cb2b5a06a87820f915c48d5f43\"" Sep 4 20:29:36.320573 containerd[1468]: 2024-09-04 20:29:36.280 [WARNING][4708] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d0f5dac683fe78e0ce0bca7ac04d047dc29a89cb2b5a06a87820f915c48d5f43" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--0--09c0a9ae8e-k8s-calico--kube--controllers--68ddb4b88b--9zbcx-eth0", GenerateName:"calico-kube-controllers-68ddb4b88b-", Namespace:"calico-system", SelfLink:"", UID:"d9d91078-829f-4383-9939-2801dd36abf2", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 20, 28, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68ddb4b88b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-0-09c0a9ae8e", ContainerID:"9ccb3730a6ac186acd1bfb24b3d830b87d91818e883910085c6f5ce2baaea504", Pod:"calico-kube-controllers-68ddb4b88b-9zbcx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.26.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6020c14821f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 20:29:36.320573 containerd[1468]: 2024-09-04 20:29:36.280 [INFO][4708] k8s.go 608: Cleaning up netns ContainerID="d0f5dac683fe78e0ce0bca7ac04d047dc29a89cb2b5a06a87820f915c48d5f43" Sep 4 20:29:36.320573 containerd[1468]: 2024-09-04 20:29:36.280 [INFO][4708] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="d0f5dac683fe78e0ce0bca7ac04d047dc29a89cb2b5a06a87820f915c48d5f43" iface="eth0" netns="" Sep 4 20:29:36.320573 containerd[1468]: 2024-09-04 20:29:36.280 [INFO][4708] k8s.go 615: Releasing IP address(es) ContainerID="d0f5dac683fe78e0ce0bca7ac04d047dc29a89cb2b5a06a87820f915c48d5f43" Sep 4 20:29:36.320573 containerd[1468]: 2024-09-04 20:29:36.280 [INFO][4708] utils.go 188: Calico CNI releasing IP address ContainerID="d0f5dac683fe78e0ce0bca7ac04d047dc29a89cb2b5a06a87820f915c48d5f43" Sep 4 20:29:36.320573 containerd[1468]: 2024-09-04 20:29:36.306 [INFO][4714] ipam_plugin.go 417: Releasing address using handleID ContainerID="d0f5dac683fe78e0ce0bca7ac04d047dc29a89cb2b5a06a87820f915c48d5f43" HandleID="k8s-pod-network.d0f5dac683fe78e0ce0bca7ac04d047dc29a89cb2b5a06a87820f915c48d5f43" Workload="ci--3975.2.1--0--09c0a9ae8e-k8s-calico--kube--controllers--68ddb4b88b--9zbcx-eth0" Sep 4 20:29:36.320573 containerd[1468]: 2024-09-04 20:29:36.306 [INFO][4714] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 20:29:36.320573 containerd[1468]: 2024-09-04 20:29:36.307 [INFO][4714] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 20:29:36.320573 containerd[1468]: 2024-09-04 20:29:36.313 [WARNING][4714] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d0f5dac683fe78e0ce0bca7ac04d047dc29a89cb2b5a06a87820f915c48d5f43" HandleID="k8s-pod-network.d0f5dac683fe78e0ce0bca7ac04d047dc29a89cb2b5a06a87820f915c48d5f43" Workload="ci--3975.2.1--0--09c0a9ae8e-k8s-calico--kube--controllers--68ddb4b88b--9zbcx-eth0" Sep 4 20:29:36.320573 containerd[1468]: 2024-09-04 20:29:36.313 [INFO][4714] ipam_plugin.go 445: Releasing address using workloadID ContainerID="d0f5dac683fe78e0ce0bca7ac04d047dc29a89cb2b5a06a87820f915c48d5f43" HandleID="k8s-pod-network.d0f5dac683fe78e0ce0bca7ac04d047dc29a89cb2b5a06a87820f915c48d5f43" Workload="ci--3975.2.1--0--09c0a9ae8e-k8s-calico--kube--controllers--68ddb4b88b--9zbcx-eth0" Sep 4 20:29:36.320573 containerd[1468]: 2024-09-04 20:29:36.315 [INFO][4714] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 20:29:36.320573 containerd[1468]: 2024-09-04 20:29:36.318 [INFO][4708] k8s.go 621: Teardown processing complete. ContainerID="d0f5dac683fe78e0ce0bca7ac04d047dc29a89cb2b5a06a87820f915c48d5f43" Sep 4 20:29:36.321336 containerd[1468]: time="2024-09-04T20:29:36.320636806Z" level=info msg="TearDown network for sandbox \"d0f5dac683fe78e0ce0bca7ac04d047dc29a89cb2b5a06a87820f915c48d5f43\" successfully" Sep 4 20:29:36.321336 containerd[1468]: time="2024-09-04T20:29:36.320673896Z" level=info msg="StopPodSandbox for \"d0f5dac683fe78e0ce0bca7ac04d047dc29a89cb2b5a06a87820f915c48d5f43\" returns successfully" Sep 4 20:29:36.321554 containerd[1468]: time="2024-09-04T20:29:36.321512100Z" level=info msg="RemovePodSandbox for \"d0f5dac683fe78e0ce0bca7ac04d047dc29a89cb2b5a06a87820f915c48d5f43\"" Sep 4 20:29:36.322087 containerd[1468]: time="2024-09-04T20:29:36.321660471Z" level=info msg="Forcibly stopping sandbox \"d0f5dac683fe78e0ce0bca7ac04d047dc29a89cb2b5a06a87820f915c48d5f43\"" Sep 4 20:29:36.426109 containerd[1468]: 2024-09-04 20:29:36.374 [WARNING][4732] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d0f5dac683fe78e0ce0bca7ac04d047dc29a89cb2b5a06a87820f915c48d5f43" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--0--09c0a9ae8e-k8s-calico--kube--controllers--68ddb4b88b--9zbcx-eth0", GenerateName:"calico-kube-controllers-68ddb4b88b-", Namespace:"calico-system", SelfLink:"", UID:"d9d91078-829f-4383-9939-2801dd36abf2", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 20, 28, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68ddb4b88b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-0-09c0a9ae8e", ContainerID:"9ccb3730a6ac186acd1bfb24b3d830b87d91818e883910085c6f5ce2baaea504", Pod:"calico-kube-controllers-68ddb4b88b-9zbcx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.26.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6020c14821f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 20:29:36.426109 containerd[1468]: 2024-09-04 20:29:36.374 [INFO][4732] k8s.go 608: Cleaning up netns ContainerID="d0f5dac683fe78e0ce0bca7ac04d047dc29a89cb2b5a06a87820f915c48d5f43" Sep 4 20:29:36.426109 containerd[1468]: 2024-09-04 20:29:36.374 [INFO][4732] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="d0f5dac683fe78e0ce0bca7ac04d047dc29a89cb2b5a06a87820f915c48d5f43" iface="eth0" netns="" Sep 4 20:29:36.426109 containerd[1468]: 2024-09-04 20:29:36.374 [INFO][4732] k8s.go 615: Releasing IP address(es) ContainerID="d0f5dac683fe78e0ce0bca7ac04d047dc29a89cb2b5a06a87820f915c48d5f43" Sep 4 20:29:36.426109 containerd[1468]: 2024-09-04 20:29:36.374 [INFO][4732] utils.go 188: Calico CNI releasing IP address ContainerID="d0f5dac683fe78e0ce0bca7ac04d047dc29a89cb2b5a06a87820f915c48d5f43" Sep 4 20:29:36.426109 containerd[1468]: 2024-09-04 20:29:36.410 [INFO][4738] ipam_plugin.go 417: Releasing address using handleID ContainerID="d0f5dac683fe78e0ce0bca7ac04d047dc29a89cb2b5a06a87820f915c48d5f43" HandleID="k8s-pod-network.d0f5dac683fe78e0ce0bca7ac04d047dc29a89cb2b5a06a87820f915c48d5f43" Workload="ci--3975.2.1--0--09c0a9ae8e-k8s-calico--kube--controllers--68ddb4b88b--9zbcx-eth0" Sep 4 20:29:36.426109 containerd[1468]: 2024-09-04 20:29:36.410 [INFO][4738] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 20:29:36.426109 containerd[1468]: 2024-09-04 20:29:36.410 [INFO][4738] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 20:29:36.426109 containerd[1468]: 2024-09-04 20:29:36.418 [WARNING][4738] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d0f5dac683fe78e0ce0bca7ac04d047dc29a89cb2b5a06a87820f915c48d5f43" HandleID="k8s-pod-network.d0f5dac683fe78e0ce0bca7ac04d047dc29a89cb2b5a06a87820f915c48d5f43" Workload="ci--3975.2.1--0--09c0a9ae8e-k8s-calico--kube--controllers--68ddb4b88b--9zbcx-eth0" Sep 4 20:29:36.426109 containerd[1468]: 2024-09-04 20:29:36.418 [INFO][4738] ipam_plugin.go 445: Releasing address using workloadID ContainerID="d0f5dac683fe78e0ce0bca7ac04d047dc29a89cb2b5a06a87820f915c48d5f43" HandleID="k8s-pod-network.d0f5dac683fe78e0ce0bca7ac04d047dc29a89cb2b5a06a87820f915c48d5f43" Workload="ci--3975.2.1--0--09c0a9ae8e-k8s-calico--kube--controllers--68ddb4b88b--9zbcx-eth0" Sep 4 20:29:36.426109 containerd[1468]: 2024-09-04 20:29:36.421 [INFO][4738] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 20:29:36.426109 containerd[1468]: 2024-09-04 20:29:36.423 [INFO][4732] k8s.go 621: Teardown processing complete. ContainerID="d0f5dac683fe78e0ce0bca7ac04d047dc29a89cb2b5a06a87820f915c48d5f43" Sep 4 20:29:36.426966 containerd[1468]: time="2024-09-04T20:29:36.426177351Z" level=info msg="TearDown network for sandbox \"d0f5dac683fe78e0ce0bca7ac04d047dc29a89cb2b5a06a87820f915c48d5f43\" successfully" Sep 4 20:29:36.429419 containerd[1468]: time="2024-09-04T20:29:36.429198948Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d0f5dac683fe78e0ce0bca7ac04d047dc29a89cb2b5a06a87820f915c48d5f43\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 4 20:29:36.429564 containerd[1468]: time="2024-09-04T20:29:36.429468360Z" level=info msg="RemovePodSandbox \"d0f5dac683fe78e0ce0bca7ac04d047dc29a89cb2b5a06a87820f915c48d5f43\" returns successfully" Sep 4 20:29:36.430172 containerd[1468]: time="2024-09-04T20:29:36.430090475Z" level=info msg="StopPodSandbox for \"0e880ca8912c3282bfb832b03f271edce6beab6749ee5279266d364e30b0db7c\"" Sep 4 20:29:36.541528 containerd[1468]: 2024-09-04 20:29:36.490 [WARNING][4756] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0e880ca8912c3282bfb832b03f271edce6beab6749ee5279266d364e30b0db7c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--0--09c0a9ae8e-k8s-coredns--7db6d8ff4d--lw7kj-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"b99452b1-3cd1-4b03-b372-8f4d1888f969", ResourceVersion:"789", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 20, 28, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-0-09c0a9ae8e", ContainerID:"3e8cdeee66af76e9d46d4eef7ad508e0040679b87c8f2204246edac50ed26f7c", Pod:"coredns-7db6d8ff4d-lw7kj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.26.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7c03af6ab19", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 20:29:36.541528 containerd[1468]: 2024-09-04 20:29:36.490 [INFO][4756] k8s.go 608: Cleaning up netns ContainerID="0e880ca8912c3282bfb832b03f271edce6beab6749ee5279266d364e30b0db7c" Sep 4 20:29:36.541528 containerd[1468]: 2024-09-04 20:29:36.490 [INFO][4756] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="0e880ca8912c3282bfb832b03f271edce6beab6749ee5279266d364e30b0db7c" iface="eth0" netns="" Sep 4 20:29:36.541528 containerd[1468]: 2024-09-04 20:29:36.490 [INFO][4756] k8s.go 615: Releasing IP address(es) ContainerID="0e880ca8912c3282bfb832b03f271edce6beab6749ee5279266d364e30b0db7c" Sep 4 20:29:36.541528 containerd[1468]: 2024-09-04 20:29:36.490 [INFO][4756] utils.go 188: Calico CNI releasing IP address ContainerID="0e880ca8912c3282bfb832b03f271edce6beab6749ee5279266d364e30b0db7c" Sep 4 20:29:36.541528 containerd[1468]: 2024-09-04 20:29:36.523 [INFO][4762] ipam_plugin.go 417: Releasing address using handleID ContainerID="0e880ca8912c3282bfb832b03f271edce6beab6749ee5279266d364e30b0db7c" HandleID="k8s-pod-network.0e880ca8912c3282bfb832b03f271edce6beab6749ee5279266d364e30b0db7c" Workload="ci--3975.2.1--0--09c0a9ae8e-k8s-coredns--7db6d8ff4d--lw7kj-eth0" Sep 4 20:29:36.541528 containerd[1468]: 2024-09-04 20:29:36.523 [INFO][4762] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 20:29:36.541528 containerd[1468]: 2024-09-04 20:29:36.523 [INFO][4762] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 20:29:36.541528 containerd[1468]: 2024-09-04 20:29:36.532 [WARNING][4762] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0e880ca8912c3282bfb832b03f271edce6beab6749ee5279266d364e30b0db7c" HandleID="k8s-pod-network.0e880ca8912c3282bfb832b03f271edce6beab6749ee5279266d364e30b0db7c" Workload="ci--3975.2.1--0--09c0a9ae8e-k8s-coredns--7db6d8ff4d--lw7kj-eth0" Sep 4 20:29:36.541528 containerd[1468]: 2024-09-04 20:29:36.532 [INFO][4762] ipam_plugin.go 445: Releasing address using workloadID ContainerID="0e880ca8912c3282bfb832b03f271edce6beab6749ee5279266d364e30b0db7c" HandleID="k8s-pod-network.0e880ca8912c3282bfb832b03f271edce6beab6749ee5279266d364e30b0db7c" Workload="ci--3975.2.1--0--09c0a9ae8e-k8s-coredns--7db6d8ff4d--lw7kj-eth0" Sep 4 20:29:36.541528 containerd[1468]: 2024-09-04 20:29:36.536 [INFO][4762] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 20:29:36.541528 containerd[1468]: 2024-09-04 20:29:36.538 [INFO][4756] k8s.go 621: Teardown processing complete. ContainerID="0e880ca8912c3282bfb832b03f271edce6beab6749ee5279266d364e30b0db7c" Sep 4 20:29:36.542404 containerd[1468]: time="2024-09-04T20:29:36.541669497Z" level=info msg="TearDown network for sandbox \"0e880ca8912c3282bfb832b03f271edce6beab6749ee5279266d364e30b0db7c\" successfully" Sep 4 20:29:36.542404 containerd[1468]: time="2024-09-04T20:29:36.541709769Z" level=info msg="StopPodSandbox for \"0e880ca8912c3282bfb832b03f271edce6beab6749ee5279266d364e30b0db7c\" returns successfully" Sep 4 20:29:36.544417 containerd[1468]: time="2024-09-04T20:29:36.544361471Z" level=info msg="RemovePodSandbox for \"0e880ca8912c3282bfb832b03f271edce6beab6749ee5279266d364e30b0db7c\"" Sep 4 20:29:36.544417 containerd[1468]: time="2024-09-04T20:29:36.544417286Z" level=info msg="Forcibly stopping sandbox \"0e880ca8912c3282bfb832b03f271edce6beab6749ee5279266d364e30b0db7c\"" Sep 4 20:29:36.639139 containerd[1468]: 2024-09-04 20:29:36.596 [WARNING][4781] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0e880ca8912c3282bfb832b03f271edce6beab6749ee5279266d364e30b0db7c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--0--09c0a9ae8e-k8s-coredns--7db6d8ff4d--lw7kj-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"b99452b1-3cd1-4b03-b372-8f4d1888f969", ResourceVersion:"789", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 20, 28, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-0-09c0a9ae8e", ContainerID:"3e8cdeee66af76e9d46d4eef7ad508e0040679b87c8f2204246edac50ed26f7c", Pod:"coredns-7db6d8ff4d-lw7kj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.26.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7c03af6ab19", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 20:29:36.639139 containerd[1468]: 2024-09-04 20:29:36.597 [INFO][4781] k8s.go 608: Cleaning up netns ContainerID="0e880ca8912c3282bfb832b03f271edce6beab6749ee5279266d364e30b0db7c" Sep 4 20:29:36.639139 containerd[1468]: 2024-09-04 20:29:36.597 [INFO][4781] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="0e880ca8912c3282bfb832b03f271edce6beab6749ee5279266d364e30b0db7c" iface="eth0" netns="" Sep 4 20:29:36.639139 containerd[1468]: 2024-09-04 20:29:36.597 [INFO][4781] k8s.go 615: Releasing IP address(es) ContainerID="0e880ca8912c3282bfb832b03f271edce6beab6749ee5279266d364e30b0db7c" Sep 4 20:29:36.639139 containerd[1468]: 2024-09-04 20:29:36.597 [INFO][4781] utils.go 188: Calico CNI releasing IP address ContainerID="0e880ca8912c3282bfb832b03f271edce6beab6749ee5279266d364e30b0db7c" Sep 4 20:29:36.639139 containerd[1468]: 2024-09-04 20:29:36.625 [INFO][4788] ipam_plugin.go 417: Releasing address using handleID ContainerID="0e880ca8912c3282bfb832b03f271edce6beab6749ee5279266d364e30b0db7c" HandleID="k8s-pod-network.0e880ca8912c3282bfb832b03f271edce6beab6749ee5279266d364e30b0db7c" Workload="ci--3975.2.1--0--09c0a9ae8e-k8s-coredns--7db6d8ff4d--lw7kj-eth0" Sep 4 20:29:36.639139 containerd[1468]: 2024-09-04 20:29:36.625 [INFO][4788] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 20:29:36.639139 containerd[1468]: 2024-09-04 20:29:36.625 [INFO][4788] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 20:29:36.639139 containerd[1468]: 2024-09-04 20:29:36.632 [WARNING][4788] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0e880ca8912c3282bfb832b03f271edce6beab6749ee5279266d364e30b0db7c" HandleID="k8s-pod-network.0e880ca8912c3282bfb832b03f271edce6beab6749ee5279266d364e30b0db7c" Workload="ci--3975.2.1--0--09c0a9ae8e-k8s-coredns--7db6d8ff4d--lw7kj-eth0" Sep 4 20:29:36.639139 containerd[1468]: 2024-09-04 20:29:36.632 [INFO][4788] ipam_plugin.go 445: Releasing address using workloadID ContainerID="0e880ca8912c3282bfb832b03f271edce6beab6749ee5279266d364e30b0db7c" HandleID="k8s-pod-network.0e880ca8912c3282bfb832b03f271edce6beab6749ee5279266d364e30b0db7c" Workload="ci--3975.2.1--0--09c0a9ae8e-k8s-coredns--7db6d8ff4d--lw7kj-eth0" Sep 4 20:29:36.639139 containerd[1468]: 2024-09-04 20:29:36.634 [INFO][4788] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 20:29:36.639139 containerd[1468]: 2024-09-04 20:29:36.636 [INFO][4781] k8s.go 621: Teardown processing complete. ContainerID="0e880ca8912c3282bfb832b03f271edce6beab6749ee5279266d364e30b0db7c" Sep 4 20:29:36.640269 containerd[1468]: time="2024-09-04T20:29:36.639177285Z" level=info msg="TearDown network for sandbox \"0e880ca8912c3282bfb832b03f271edce6beab6749ee5279266d364e30b0db7c\" successfully" Sep 4 20:29:36.646130 containerd[1468]: time="2024-09-04T20:29:36.646075770Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0e880ca8912c3282bfb832b03f271edce6beab6749ee5279266d364e30b0db7c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 4 20:29:36.646536 containerd[1468]: time="2024-09-04T20:29:36.646165088Z" level=info msg="RemovePodSandbox \"0e880ca8912c3282bfb832b03f271edce6beab6749ee5279266d364e30b0db7c\" returns successfully" Sep 4 20:29:40.376676 systemd[1]: Started sshd@13-64.23.130.28:22-139.178.68.195:51208.service - OpenSSH per-connection server daemon (139.178.68.195:51208). Sep 4 20:29:40.455065 sshd[4820]: Accepted publickey for core from 139.178.68.195 port 51208 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY Sep 4 20:29:40.457792 sshd[4820]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 20:29:40.463273 systemd-logind[1446]: New session 14 of user core. Sep 4 20:29:40.474598 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 4 20:29:40.677632 sshd[4820]: pam_unix(sshd:session): session closed for user core Sep 4 20:29:40.683979 systemd[1]: sshd@13-64.23.130.28:22-139.178.68.195:51208.service: Deactivated successfully. Sep 4 20:29:40.687828 systemd[1]: session-14.scope: Deactivated successfully. Sep 4 20:29:40.689743 systemd-logind[1446]: Session 14 logged out. Waiting for processes to exit. Sep 4 20:29:40.691441 systemd-logind[1446]: Removed session 14. Sep 4 20:29:45.694585 systemd[1]: Started sshd@14-64.23.130.28:22-139.178.68.195:51220.service - OpenSSH per-connection server daemon (139.178.68.195:51220). Sep 4 20:29:45.755382 sshd[4858]: Accepted publickey for core from 139.178.68.195 port 51220 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY Sep 4 20:29:45.757375 sshd[4858]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 20:29:45.762903 systemd-logind[1446]: New session 15 of user core. Sep 4 20:29:45.769633 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 4 20:29:45.919999 sshd[4858]: pam_unix(sshd:session): session closed for user core Sep 4 20:29:45.925441 systemd[1]: sshd@14-64.23.130.28:22-139.178.68.195:51220.service: Deactivated successfully. 
Sep 4 20:29:45.928702 systemd[1]: session-15.scope: Deactivated successfully. Sep 4 20:29:45.929738 systemd-logind[1446]: Session 15 logged out. Waiting for processes to exit. Sep 4 20:29:45.930889 systemd-logind[1446]: Removed session 15. Sep 4 20:29:50.657040 kubelet[2535]: E0904 20:29:50.656937 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 4 20:29:50.942477 systemd[1]: Started sshd@15-64.23.130.28:22-139.178.68.195:44668.service - OpenSSH per-connection server daemon (139.178.68.195:44668). Sep 4 20:29:51.002010 sshd[4879]: Accepted publickey for core from 139.178.68.195 port 44668 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY Sep 4 20:29:51.003974 sshd[4879]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 20:29:51.011810 systemd-logind[1446]: New session 16 of user core. Sep 4 20:29:51.017952 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 4 20:29:51.212837 sshd[4879]: pam_unix(sshd:session): session closed for user core Sep 4 20:29:51.225302 systemd[1]: sshd@15-64.23.130.28:22-139.178.68.195:44668.service: Deactivated successfully. Sep 4 20:29:51.229575 systemd[1]: session-16.scope: Deactivated successfully. Sep 4 20:29:51.234001 systemd-logind[1446]: Session 16 logged out. Waiting for processes to exit. Sep 4 20:29:51.237797 systemd[1]: Started sshd@16-64.23.130.28:22-139.178.68.195:44678.service - OpenSSH per-connection server daemon (139.178.68.195:44678). Sep 4 20:29:51.240503 systemd-logind[1446]: Removed session 16. Sep 4 20:29:51.325977 sshd[4906]: Accepted publickey for core from 139.178.68.195 port 44678 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY Sep 4 20:29:51.328443 sshd[4906]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 20:29:51.337257 systemd-logind[1446]: New session 17 of user core. Sep 4 20:29:51.343669 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 4 20:29:51.681322 sshd[4906]: pam_unix(sshd:session): session closed for user core Sep 4 20:29:51.690554 systemd[1]: sshd@16-64.23.130.28:22-139.178.68.195:44678.service: Deactivated successfully. Sep 4 20:29:51.693052 systemd[1]: session-17.scope: Deactivated successfully. Sep 4 20:29:51.693998 systemd-logind[1446]: Session 17 logged out. Waiting for processes to exit. Sep 4 20:29:51.702794 systemd[1]: Started sshd@17-64.23.130.28:22-139.178.68.195:44690.service - OpenSSH per-connection server daemon (139.178.68.195:44690). Sep 4 20:29:51.704733 systemd-logind[1446]: Removed session 17. Sep 4 20:29:51.766153 sshd[4922]: Accepted publickey for core from 139.178.68.195 port 44690 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY Sep 4 20:29:51.768341 sshd[4922]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 20:29:51.773019 systemd-logind[1446]: New session 18 of user core. Sep 4 20:29:51.781520 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 4 20:29:54.090939 sshd[4922]: pam_unix(sshd:session): session closed for user core Sep 4 20:29:54.122794 systemd[1]: Started sshd@18-64.23.130.28:22-139.178.68.195:44704.service - OpenSSH per-connection server daemon (139.178.68.195:44704). Sep 4 20:29:54.124433 systemd[1]: sshd@17-64.23.130.28:22-139.178.68.195:44690.service: Deactivated successfully. Sep 4 20:29:54.128942 systemd[1]: session-18.scope: Deactivated successfully. 
Sep 4 20:29:54.135444 systemd-logind[1446]: Session 18 logged out. Waiting for processes to exit. Sep 4 20:29:54.138675 systemd-logind[1446]: Removed session 18. Sep 4 20:29:54.224279 sshd[4940]: Accepted publickey for core from 139.178.68.195 port 44704 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY Sep 4 20:29:54.225447 sshd[4940]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 20:29:54.234312 systemd-logind[1446]: New session 19 of user core. Sep 4 20:29:54.237673 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 4 20:29:54.790670 sshd[4940]: pam_unix(sshd:session): session closed for user core Sep 4 20:29:54.806019 systemd[1]: sshd@18-64.23.130.28:22-139.178.68.195:44704.service: Deactivated successfully. Sep 4 20:29:54.810530 systemd[1]: session-19.scope: Deactivated successfully. Sep 4 20:29:54.814558 systemd-logind[1446]: Session 19 logged out. Waiting for processes to exit. Sep 4 20:29:54.822469 systemd[1]: Started sshd@19-64.23.130.28:22-139.178.68.195:44708.service - OpenSSH per-connection server daemon (139.178.68.195:44708). Sep 4 20:29:54.824371 systemd-logind[1446]: Removed session 19. Sep 4 20:29:54.874541 sshd[4955]: Accepted publickey for core from 139.178.68.195 port 44708 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY Sep 4 20:29:54.877875 sshd[4955]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 20:29:54.884386 systemd-logind[1446]: New session 20 of user core. Sep 4 20:29:54.890498 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 4 20:29:55.045523 sshd[4955]: pam_unix(sshd:session): session closed for user core Sep 4 20:29:55.051977 systemd[1]: sshd@19-64.23.130.28:22-139.178.68.195:44708.service: Deactivated successfully. Sep 4 20:29:55.056907 systemd[1]: session-20.scope: Deactivated successfully. Sep 4 20:29:55.058699 systemd-logind[1446]: Session 20 logged out. Waiting for processes to exit. Sep 4 20:29:55.061149 systemd-logind[1446]: Removed session 20. Sep 4 20:29:58.657004 kubelet[2535]: E0904 20:29:58.656895 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 4 20:30:00.068671 systemd[1]: Started sshd@20-64.23.130.28:22-139.178.68.195:44350.service - OpenSSH per-connection server daemon (139.178.68.195:44350). Sep 4 20:30:00.162778 sshd[4970]: Accepted publickey for core from 139.178.68.195 port 44350 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY Sep 4 20:30:00.164555 sshd[4970]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 20:30:00.174782 systemd-logind[1446]: New session 21 of user core. Sep 4 20:30:00.183501 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 4 20:30:00.488799 sshd[4970]: pam_unix(sshd:session): session closed for user core Sep 4 20:30:00.498464 systemd-logind[1446]: Session 21 logged out. Waiting for processes to exit. Sep 4 20:30:00.500932 systemd[1]: sshd@20-64.23.130.28:22-139.178.68.195:44350.service: Deactivated successfully. Sep 4 20:30:00.510643 systemd[1]: session-21.scope: Deactivated successfully. Sep 4 20:30:00.514562 systemd-logind[1446]: Removed session 21. 
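The recurring kubelet "Nameserver limits exceeded" errors come from building pod resolv.conf files off the node's resolver config: the classic resolver honours at most three nameserver entries, so kubelet drops the extras and logs the line it actually applied (here with a duplicated 67.207.67.3). A rough sketch of that kind of cap, assuming a simple resolv.conf parser rather than kubelet's actual dns.go:

package main

import (
	"fmt"
	"strings"
)

// maxNameservers mirrors the traditional resolver limit of three entries,
// which is what kubelet enforces when composing a pod's resolv.conf.
const maxNameservers = 3

// capNameservers returns at most maxNameservers entries and whether any were dropped.
func capNameservers(resolvConf string) (kept []string, dropped bool) {
	for _, line := range strings.Split(resolvConf, "\n") {
		fields := strings.Fields(line)
		if len(fields) >= 2 && fields[0] == "nameserver" {
			kept = append(kept, fields[1])
		}
	}
	if len(kept) > maxNameservers {
		return kept[:maxNameservers], true
	}
	return kept, false
}

func main() {
	// Hypothetical node resolv.conf with more entries than the resolver supports.
	conf := "nameserver 67.207.67.3\nnameserver 67.207.67.2\nnameserver 67.207.67.3\nnameserver 1.1.1.1\n"
	if kept, dropped := capNameservers(conf); dropped {
		fmt.Printf("Nameserver limits exceeded, applied nameserver line is: %s\n", strings.Join(kept, " "))
	}
}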
Sep 4 20:30:02.484040 kubelet[2535]: I0904 20:30:02.477381 2535 topology_manager.go:215] "Topology Admit Handler" podUID="47c1acfa-9201-41e7-8508-1dbd1d8c8b71" podNamespace="calico-apiserver" podName="calico-apiserver-5cd5dc647c-fwjgq" Sep 4 20:30:02.547508 systemd[1]: Created slice kubepods-besteffort-pod47c1acfa_9201_41e7_8508_1dbd1d8c8b71.slice - libcontainer container kubepods-besteffort-pod47c1acfa_9201_41e7_8508_1dbd1d8c8b71.slice. Sep 4 20:30:02.579692 kubelet[2535]: I0904 20:30:02.579138 2535 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/47c1acfa-9201-41e7-8508-1dbd1d8c8b71-calico-apiserver-certs\") pod \"calico-apiserver-5cd5dc647c-fwjgq\" (UID: \"47c1acfa-9201-41e7-8508-1dbd1d8c8b71\") " pod="calico-apiserver/calico-apiserver-5cd5dc647c-fwjgq" Sep 4 20:30:02.579998 kubelet[2535]: I0904 20:30:02.579779 2535 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzkkm\" (UniqueName: \"kubernetes.io/projected/47c1acfa-9201-41e7-8508-1dbd1d8c8b71-kube-api-access-fzkkm\") pod \"calico-apiserver-5cd5dc647c-fwjgq\" (UID: \"47c1acfa-9201-41e7-8508-1dbd1d8c8b71\") " pod="calico-apiserver/calico-apiserver-5cd5dc647c-fwjgq" Sep 4 20:30:02.692401 kubelet[2535]: E0904 20:30:02.692301 2535 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Sep 4 20:30:02.715480 kubelet[2535]: E0904 20:30:02.715380 2535 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/47c1acfa-9201-41e7-8508-1dbd1d8c8b71-calico-apiserver-certs podName:47c1acfa-9201-41e7-8508-1dbd1d8c8b71 nodeName:}" failed. No retries permitted until 2024-09-04 20:30:03.19614756 +0000 UTC m=+87.666716347 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/47c1acfa-9201-41e7-8508-1dbd1d8c8b71-calico-apiserver-certs") pod "calico-apiserver-5cd5dc647c-fwjgq" (UID: "47c1acfa-9201-41e7-8508-1dbd1d8c8b71") : secret "calico-apiserver-certs" not found Sep 4 20:30:03.457836 containerd[1468]: time="2024-09-04T20:30:03.457764386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cd5dc647c-fwjgq,Uid:47c1acfa-9201-41e7-8508-1dbd1d8c8b71,Namespace:calico-apiserver,Attempt:0,}" Sep 4 20:30:03.791513 systemd-networkd[1369]: cali2edb7f5d662: Link UP Sep 4 20:30:03.795780 systemd-networkd[1369]: cali2edb7f5d662: Gained carrier Sep 4 20:30:03.825567 containerd[1468]: 2024-09-04 20:30:03.626 [INFO][5004] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.2.1--0--09c0a9ae8e-k8s-calico--apiserver--5cd5dc647c--fwjgq-eth0 calico-apiserver-5cd5dc647c- calico-apiserver 47c1acfa-9201-41e7-8508-1dbd1d8c8b71 1101 0 2024-09-04 20:30:02 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5cd5dc647c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3975.2.1-0-09c0a9ae8e calico-apiserver-5cd5dc647c-fwjgq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2edb7f5d662 [] []}} ContainerID="a0aed73b37103e3084edd0ef3c84192b7b919c8f7550822b43a86018b67eaeff" Namespace="calico-apiserver" Pod="calico-apiserver-5cd5dc647c-fwjgq" WorkloadEndpoint="ci--3975.2.1--0--09c0a9ae8e-k8s-calico--apiserver--5cd5dc647c--fwjgq-" Sep 4 20:30:03.825567 containerd[1468]: 2024-09-04 20:30:03.627 [INFO][5004] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a0aed73b37103e3084edd0ef3c84192b7b919c8f7550822b43a86018b67eaeff" Namespace="calico-apiserver" Pod="calico-apiserver-5cd5dc647c-fwjgq" WorkloadEndpoint="ci--3975.2.1--0--09c0a9ae8e-k8s-calico--apiserver--5cd5dc647c--fwjgq-eth0" Sep 4 20:30:03.825567 containerd[1468]: 2024-09-04 20:30:03.690 [INFO][5014] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a0aed73b37103e3084edd0ef3c84192b7b919c8f7550822b43a86018b67eaeff" HandleID="k8s-pod-network.a0aed73b37103e3084edd0ef3c84192b7b919c8f7550822b43a86018b67eaeff" Workload="ci--3975.2.1--0--09c0a9ae8e-k8s-calico--apiserver--5cd5dc647c--fwjgq-eth0" Sep 4 20:30:03.825567 containerd[1468]: 2024-09-04 20:30:03.708 [INFO][5014] ipam_plugin.go 270: Auto assigning IP ContainerID="a0aed73b37103e3084edd0ef3c84192b7b919c8f7550822b43a86018b67eaeff" HandleID="k8s-pod-network.a0aed73b37103e3084edd0ef3c84192b7b919c8f7550822b43a86018b67eaeff" Workload="ci--3975.2.1--0--09c0a9ae8e-k8s-calico--apiserver--5cd5dc647c--fwjgq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290050), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3975.2.1-0-09c0a9ae8e", "pod":"calico-apiserver-5cd5dc647c-fwjgq", "timestamp":"2024-09-04 20:30:03.690866011 +0000 UTC"}, Hostname:"ci-3975.2.1-0-09c0a9ae8e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 20:30:03.825567 containerd[1468]: 2024-09-04 20:30:03.708 [INFO][5014] ipam_plugin.go 358: About to acquire host-wide IPAM lock. 
Sep 4 20:30:03.825567 containerd[1468]: 2024-09-04 20:30:03.708 [INFO][5014] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 20:30:03.825567 containerd[1468]: 2024-09-04 20:30:03.708 [INFO][5014] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.2.1-0-09c0a9ae8e' Sep 4 20:30:03.825567 containerd[1468]: 2024-09-04 20:30:03.713 [INFO][5014] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a0aed73b37103e3084edd0ef3c84192b7b919c8f7550822b43a86018b67eaeff" host="ci-3975.2.1-0-09c0a9ae8e" Sep 4 20:30:03.825567 containerd[1468]: 2024-09-04 20:30:03.726 [INFO][5014] ipam.go 372: Looking up existing affinities for host host="ci-3975.2.1-0-09c0a9ae8e" Sep 4 20:30:03.825567 containerd[1468]: 2024-09-04 20:30:03.736 [INFO][5014] ipam.go 489: Trying affinity for 192.168.26.192/26 host="ci-3975.2.1-0-09c0a9ae8e" Sep 4 20:30:03.825567 containerd[1468]: 2024-09-04 20:30:03.741 [INFO][5014] ipam.go 155: Attempting to load block cidr=192.168.26.192/26 host="ci-3975.2.1-0-09c0a9ae8e" Sep 4 20:30:03.825567 containerd[1468]: 2024-09-04 20:30:03.747 [INFO][5014] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.26.192/26 host="ci-3975.2.1-0-09c0a9ae8e" Sep 4 20:30:03.825567 containerd[1468]: 2024-09-04 20:30:03.747 [INFO][5014] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.26.192/26 handle="k8s-pod-network.a0aed73b37103e3084edd0ef3c84192b7b919c8f7550822b43a86018b67eaeff" host="ci-3975.2.1-0-09c0a9ae8e" Sep 4 20:30:03.825567 containerd[1468]: 2024-09-04 20:30:03.752 [INFO][5014] ipam.go 1685: Creating new handle: k8s-pod-network.a0aed73b37103e3084edd0ef3c84192b7b919c8f7550822b43a86018b67eaeff Sep 4 20:30:03.825567 containerd[1468]: 2024-09-04 20:30:03.759 [INFO][5014] ipam.go 1203: Writing block in order to claim IPs block=192.168.26.192/26 handle="k8s-pod-network.a0aed73b37103e3084edd0ef3c84192b7b919c8f7550822b43a86018b67eaeff" host="ci-3975.2.1-0-09c0a9ae8e" Sep 4 20:30:03.825567 containerd[1468]: 2024-09-04 20:30:03.769 [INFO][5014] ipam.go 1216: Successfully claimed IPs: [192.168.26.197/26] block=192.168.26.192/26 handle="k8s-pod-network.a0aed73b37103e3084edd0ef3c84192b7b919c8f7550822b43a86018b67eaeff" host="ci-3975.2.1-0-09c0a9ae8e" Sep 4 20:30:03.825567 containerd[1468]: 2024-09-04 20:30:03.769 [INFO][5014] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.26.197/26] handle="k8s-pod-network.a0aed73b37103e3084edd0ef3c84192b7b919c8f7550822b43a86018b67eaeff" host="ci-3975.2.1-0-09c0a9ae8e" Sep 4 20:30:03.825567 containerd[1468]: 2024-09-04 20:30:03.769 [INFO][5014] ipam_plugin.go 379: Released host-wide IPAM lock. 
Sep 4 20:30:03.825567 containerd[1468]: 2024-09-04 20:30:03.769 [INFO][5014] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.26.197/26] IPv6=[] ContainerID="a0aed73b37103e3084edd0ef3c84192b7b919c8f7550822b43a86018b67eaeff" HandleID="k8s-pod-network.a0aed73b37103e3084edd0ef3c84192b7b919c8f7550822b43a86018b67eaeff" Workload="ci--3975.2.1--0--09c0a9ae8e-k8s-calico--apiserver--5cd5dc647c--fwjgq-eth0" Sep 4 20:30:03.832362 containerd[1468]: 2024-09-04 20:30:03.775 [INFO][5004] k8s.go 386: Populated endpoint ContainerID="a0aed73b37103e3084edd0ef3c84192b7b919c8f7550822b43a86018b67eaeff" Namespace="calico-apiserver" Pod="calico-apiserver-5cd5dc647c-fwjgq" WorkloadEndpoint="ci--3975.2.1--0--09c0a9ae8e-k8s-calico--apiserver--5cd5dc647c--fwjgq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--0--09c0a9ae8e-k8s-calico--apiserver--5cd5dc647c--fwjgq-eth0", GenerateName:"calico-apiserver-5cd5dc647c-", Namespace:"calico-apiserver", SelfLink:"", UID:"47c1acfa-9201-41e7-8508-1dbd1d8c8b71", ResourceVersion:"1101", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 20, 30, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cd5dc647c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-0-09c0a9ae8e", ContainerID:"", Pod:"calico-apiserver-5cd5dc647c-fwjgq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.26.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2edb7f5d662", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 20:30:03.832362 containerd[1468]: 2024-09-04 20:30:03.779 [INFO][5004] k8s.go 387: Calico CNI using IPs: [192.168.26.197/32] ContainerID="a0aed73b37103e3084edd0ef3c84192b7b919c8f7550822b43a86018b67eaeff" Namespace="calico-apiserver" Pod="calico-apiserver-5cd5dc647c-fwjgq" WorkloadEndpoint="ci--3975.2.1--0--09c0a9ae8e-k8s-calico--apiserver--5cd5dc647c--fwjgq-eth0" Sep 4 20:30:03.832362 containerd[1468]: 2024-09-04 20:30:03.779 [INFO][5004] dataplane_linux.go 68: Setting the host side veth name to cali2edb7f5d662 ContainerID="a0aed73b37103e3084edd0ef3c84192b7b919c8f7550822b43a86018b67eaeff" Namespace="calico-apiserver" Pod="calico-apiserver-5cd5dc647c-fwjgq" WorkloadEndpoint="ci--3975.2.1--0--09c0a9ae8e-k8s-calico--apiserver--5cd5dc647c--fwjgq-eth0" Sep 4 20:30:03.832362 containerd[1468]: 2024-09-04 20:30:03.797 [INFO][5004] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="a0aed73b37103e3084edd0ef3c84192b7b919c8f7550822b43a86018b67eaeff" Namespace="calico-apiserver" Pod="calico-apiserver-5cd5dc647c-fwjgq" WorkloadEndpoint="ci--3975.2.1--0--09c0a9ae8e-k8s-calico--apiserver--5cd5dc647c--fwjgq-eth0" Sep 4 20:30:03.832362 containerd[1468]: 2024-09-04 20:30:03.799 [INFO][5004] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="a0aed73b37103e3084edd0ef3c84192b7b919c8f7550822b43a86018b67eaeff" Namespace="calico-apiserver" Pod="calico-apiserver-5cd5dc647c-fwjgq" WorkloadEndpoint="ci--3975.2.1--0--09c0a9ae8e-k8s-calico--apiserver--5cd5dc647c--fwjgq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--0--09c0a9ae8e-k8s-calico--apiserver--5cd5dc647c--fwjgq-eth0", GenerateName:"calico-apiserver-5cd5dc647c-", Namespace:"calico-apiserver", SelfLink:"", UID:"47c1acfa-9201-41e7-8508-1dbd1d8c8b71", ResourceVersion:"1101", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 20, 30, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cd5dc647c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-0-09c0a9ae8e", ContainerID:"a0aed73b37103e3084edd0ef3c84192b7b919c8f7550822b43a86018b67eaeff", Pod:"calico-apiserver-5cd5dc647c-fwjgq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.26.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2edb7f5d662", MAC:"36:84:c4:fd:2e:fe", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 20:30:03.832362 containerd[1468]: 2024-09-04 20:30:03.819 [INFO][5004] k8s.go 500: Wrote updated endpoint to datastore ContainerID="a0aed73b37103e3084edd0ef3c84192b7b919c8f7550822b43a86018b67eaeff" Namespace="calico-apiserver" Pod="calico-apiserver-5cd5dc647c-fwjgq" WorkloadEndpoint="ci--3975.2.1--0--09c0a9ae8e-k8s-calico--apiserver--5cd5dc647c--fwjgq-eth0" Sep 4 20:30:03.916665 containerd[1468]: time="2024-09-04T20:30:03.914597318Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 20:30:03.916665 containerd[1468]: time="2024-09-04T20:30:03.914835259Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 20:30:03.916665 containerd[1468]: time="2024-09-04T20:30:03.914928390Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 20:30:03.916665 containerd[1468]: time="2024-09-04T20:30:03.914963660Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 20:30:03.975055 systemd[1]: Started cri-containerd-a0aed73b37103e3084edd0ef3c84192b7b919c8f7550822b43a86018b67eaeff.scope - libcontainer container a0aed73b37103e3084edd0ef3c84192b7b919c8f7550822b43a86018b67eaeff. 
Sep 4 20:30:04.082852 containerd[1468]: time="2024-09-04T20:30:04.082566914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cd5dc647c-fwjgq,Uid:47c1acfa-9201-41e7-8508-1dbd1d8c8b71,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a0aed73b37103e3084edd0ef3c84192b7b919c8f7550822b43a86018b67eaeff\"" Sep 4 20:30:04.102043 containerd[1468]: time="2024-09-04T20:30:04.101517554Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\"" Sep 4 20:30:04.657360 kubelet[2535]: E0904 20:30:04.657132 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 4 20:30:04.893822 systemd-networkd[1369]: cali2edb7f5d662: Gained IPv6LL Sep 4 20:30:05.510933 systemd[1]: Started sshd@21-64.23.130.28:22-139.178.68.195:44352.service - OpenSSH per-connection server daemon (139.178.68.195:44352). Sep 4 20:30:05.651634 sshd[5079]: Accepted publickey for core from 139.178.68.195 port 44352 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY Sep 4 20:30:05.655170 sshd[5079]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 20:30:05.677399 systemd-logind[1446]: New session 22 of user core. Sep 4 20:30:05.685747 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 4 20:30:06.328927 sshd[5079]: pam_unix(sshd:session): session closed for user core Sep 4 20:30:06.336060 systemd[1]: sshd@21-64.23.130.28:22-139.178.68.195:44352.service: Deactivated successfully. Sep 4 20:30:06.343785 systemd[1]: session-22.scope: Deactivated successfully. Sep 4 20:30:06.348583 systemd-logind[1446]: Session 22 logged out. Waiting for processes to exit. Sep 4 20:30:06.353601 systemd-logind[1446]: Removed session 22. 
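Earlier, at 20:30:02, the same pod's MountVolume.SetUp failed because the calico-apiserver-certs secret did not exist yet, and nestedpendingoperations deferred the retry ("No retries permitted until ... durationBeforeRetry 500ms"); kubelet keeps retrying with an increasing delay until the secret shows up, which it evidently did before the sandbox was created at 20:30:03. A simplified sketch of that retry loop, with the initial delay and doubling-to-a-ceiling behaviour stated as assumptions rather than kubelet's exact constants:

package main

import (
	"errors"
	"fmt"
	"time"
)

// Assumed backoff parameters in the spirit of kubelet's volume retry logic:
// an initial half-second delay that doubles up to a fixed ceiling.
const (
	initialDelay = 500 * time.Millisecond
	maxDelay     = 2 * time.Minute
)

var errSecretNotFound = errors.New(`secret "calico-apiserver-certs" not found`)

// mountSecretVolume stands in for MountVolume.SetUp: it fails until the
// secret exists in the (simulated) API server.
func mountSecretVolume(secretExists func() bool) error {
	if !secretExists() {
		return errSecretNotFound
	}
	return nil
}

func main() {
	start := time.Now()
	secretExists := func() bool { return time.Since(start) > 3*time.Second } // secret appears later

	delay := initialDelay
	for {
		if err := mountSecretVolume(secretExists); err != nil {
			fmt.Printf("failed: %v; no retries permitted for %v\n", err, delay)
			time.Sleep(delay)
			if delay *= 2; delay > maxDelay {
				delay = maxDelay
			}
			continue
		}
		fmt.Println("volume mounted, pod can start")
		return
	}
}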
Sep 4 20:30:08.552577 containerd[1468]: time="2024-09-04T20:30:08.552266349Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 20:30:08.554076 containerd[1468]: time="2024-09-04T20:30:08.553817507Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=40419849" Sep 4 20:30:08.555272 containerd[1468]: time="2024-09-04T20:30:08.554737685Z" level=info msg="ImageCreate event name:\"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 20:30:08.558572 containerd[1468]: time="2024-09-04T20:30:08.558501619Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 20:30:08.559991 containerd[1468]: time="2024-09-04T20:30:08.559849694Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"41912266\" in 4.458231311s" Sep 4 20:30:08.560347 containerd[1468]: time="2024-09-04T20:30:08.560187377Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\"" Sep 4 20:30:08.568859 containerd[1468]: time="2024-09-04T20:30:08.568622427Z" level=info msg="CreateContainer within sandbox \"a0aed73b37103e3084edd0ef3c84192b7b919c8f7550822b43a86018b67eaeff\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 4 20:30:08.597667 containerd[1468]: time="2024-09-04T20:30:08.596650767Z" level=info msg="CreateContainer within sandbox \"a0aed73b37103e3084edd0ef3c84192b7b919c8f7550822b43a86018b67eaeff\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"4aadb4f81a5394988afa0036f3e46047455a9df1e4ed7af504ee787c0af6172e\"" Sep 4 20:30:08.599836 containerd[1468]: time="2024-09-04T20:30:08.598910355Z" level=info msg="StartContainer for \"4aadb4f81a5394988afa0036f3e46047455a9df1e4ed7af504ee787c0af6172e\"" Sep 4 20:30:08.720562 systemd[1]: Started cri-containerd-4aadb4f81a5394988afa0036f3e46047455a9df1e4ed7af504ee787c0af6172e.scope - libcontainer container 4aadb4f81a5394988afa0036f3e46047455a9df1e4ed7af504ee787c0af6172e. 
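The pull result above reports two different sha256 values for the same image: the repo digest (b4ee1aa2...) is the digest of the manifest bytes the registry served, while the image id (91dd0fd3...) is the digest of the image's config blob. Both are plain SHA-256 over canonical bytes, as the small illustration below shows with stand-in payloads:

package main

import (
	"crypto/sha256"
	"fmt"
)

func main() {
	// Stand-in manifest body; the real repo digest is sha256 over the exact
	// manifest bytes the registry served for apiserver:v3.28.1.
	manifest := []byte(`{"schemaVersion":2,"mediaType":"application/vnd.oci.image.manifest.v1+json"}`)
	fmt.Printf("repo digest: sha256:%x\n", sha256.Sum256(manifest))

	// The image id is the same construction applied to the image config blob,
	// which is why it differs from the repo digest for the same image.
	config := []byte(`{"architecture":"amd64","os":"linux"}`)
	fmt.Printf("image id:    sha256:%x\n", sha256.Sum256(config))
}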
Sep 4 20:30:08.829567 containerd[1468]: time="2024-09-04T20:30:08.827199158Z" level=info msg="StartContainer for \"4aadb4f81a5394988afa0036f3e46047455a9df1e4ed7af504ee787c0af6172e\" returns successfully" Sep 4 20:30:10.020400 kubelet[2535]: I0904 20:30:10.019286 2535 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5cd5dc647c-fwjgq" podStartSLOduration=3.557762918 podStartE2EDuration="8.0192226s" podCreationTimestamp="2024-09-04 20:30:02 +0000 UTC" firstStartedPulling="2024-09-04 20:30:04.100991614 +0000 UTC m=+88.571560496" lastFinishedPulling="2024-09-04 20:30:08.562451366 +0000 UTC m=+93.033020178" observedRunningTime="2024-09-04 20:30:09.308516624 +0000 UTC m=+93.779085432" watchObservedRunningTime="2024-09-04 20:30:10.0192226 +0000 UTC m=+94.489791432" Sep 4 20:30:11.359311 systemd[1]: Started sshd@22-64.23.130.28:22-139.178.68.195:49386.service - OpenSSH per-connection server daemon (139.178.68.195:49386). Sep 4 20:30:11.501310 sshd[5168]: Accepted publickey for core from 139.178.68.195 port 49386 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY Sep 4 20:30:11.503422 sshd[5168]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 20:30:11.516285 systemd-logind[1446]: New session 23 of user core. Sep 4 20:30:11.524633 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 4 20:30:12.315668 sshd[5168]: pam_unix(sshd:session): session closed for user core Sep 4 20:30:12.324312 systemd[1]: sshd@22-64.23.130.28:22-139.178.68.195:49386.service: Deactivated successfully. Sep 4 20:30:12.329186 systemd[1]: session-23.scope: Deactivated successfully. Sep 4 20:30:12.330781 systemd-logind[1446]: Session 23 logged out. Waiting for processes to exit. Sep 4 20:30:12.333293 systemd-logind[1446]: Removed session 23. Sep 4 20:30:17.344792 systemd[1]: Started sshd@23-64.23.130.28:22-139.178.68.195:46372.service - OpenSSH per-connection server daemon (139.178.68.195:46372). Sep 4 20:30:17.394293 sshd[5192]: Accepted publickey for core from 139.178.68.195 port 46372 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY Sep 4 20:30:17.397027 sshd[5192]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 20:30:17.405962 systemd-logind[1446]: New session 24 of user core. Sep 4 20:30:17.414651 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 4 20:30:17.586931 sshd[5192]: pam_unix(sshd:session): session closed for user core Sep 4 20:30:17.592127 systemd-logind[1446]: Session 24 logged out. Waiting for processes to exit. Sep 4 20:30:17.595252 systemd[1]: sshd@23-64.23.130.28:22-139.178.68.195:46372.service: Deactivated successfully. Sep 4 20:30:17.599737 systemd[1]: session-24.scope: Deactivated successfully. Sep 4 20:30:17.602363 systemd-logind[1446]: Removed session 24. Sep 4 20:30:17.658023 kubelet[2535]: E0904 20:30:17.657571 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 4 20:30:20.657312 kubelet[2535]: E0904 20:30:20.657252 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 4 20:30:22.608806 systemd[1]: Started sshd@24-64.23.130.28:22-139.178.68.195:46382.service - OpenSSH per-connection server daemon (139.178.68.195:46382). 
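The pod_startup_latency_tracker line can be reproduced from the timestamps it prints: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be that span minus the image-pull window (firstStartedPulling to lastFinishedPulling), which is how 8.02s becomes 3.56s here. The check below treats that formula as an assumption and only verifies the arithmetic:

package main

import (
	"fmt"
	"time"
)

func main() {
	parse := func(s string) time.Time {
		t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2024-09-04 20:30:02 +0000 UTC")
	firstPull := parse("2024-09-04 20:30:04.100991614 +0000 UTC")
	lastPull := parse("2024-09-04 20:30:08.562451366 +0000 UTC")
	running := parse("2024-09-04 20:30:10.0192226 +0000 UTC")

	e2e := running.Sub(created)
	slo := e2e - lastPull.Sub(firstPull) // assumed: end-to-end duration minus image-pull time
	fmt.Println("podStartE2EDuration ~", e2e) // ~8.0192226s
	fmt.Println("podStartSLOduration ~", slo) // ~3.5577628s, close to the logged 3.557762918s
}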
Sep 4 20:30:22.682127 sshd[5234]: Accepted publickey for core from 139.178.68.195 port 46382 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY Sep 4 20:30:22.692949 sshd[5234]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 20:30:22.702548 systemd-logind[1446]: New session 25 of user core. Sep 4 20:30:22.708660 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 4 20:30:23.017504 sshd[5234]: pam_unix(sshd:session): session closed for user core Sep 4 20:30:23.023684 systemd-logind[1446]: Session 25 logged out. Waiting for processes to exit. Sep 4 20:30:23.024580 systemd[1]: sshd@24-64.23.130.28:22-139.178.68.195:46382.service: Deactivated successfully. Sep 4 20:30:23.032000 systemd[1]: session-25.scope: Deactivated successfully. Sep 4 20:30:23.035758 systemd-logind[1446]: Removed session 25. Sep 4 20:30:27.656415 kubelet[2535]: E0904 20:30:27.656308 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 4 20:30:28.041691 systemd[1]: Started sshd@25-64.23.130.28:22-139.178.68.195:37190.service - OpenSSH per-connection server daemon (139.178.68.195:37190). Sep 4 20:30:28.106171 sshd[5248]: Accepted publickey for core from 139.178.68.195 port 37190 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY Sep 4 20:30:28.108157 sshd[5248]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 20:30:28.114713 systemd-logind[1446]: New session 26 of user core. Sep 4 20:30:28.121827 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 4 20:30:28.183753 systemd[1]: Started sshd@26-64.23.130.28:22-80.66.76.133:9658.service - OpenSSH per-connection server daemon (80.66.76.133:9658). Sep 4 20:30:28.203471 sshd[5252]: banner exchange: Connection from 80.66.76.133 port 9658: invalid format Sep 4 20:30:28.205212 systemd[1]: sshd@26-64.23.130.28:22-80.66.76.133:9658.service: Deactivated successfully. Sep 4 20:30:28.289268 sshd[5248]: pam_unix(sshd:session): session closed for user core Sep 4 20:30:28.295896 systemd[1]: sshd@25-64.23.130.28:22-139.178.68.195:37190.service: Deactivated successfully. Sep 4 20:30:28.300195 systemd[1]: session-26.scope: Deactivated successfully. Sep 4 20:30:28.302691 systemd-logind[1446]: Session 26 logged out. Waiting for processes to exit. Sep 4 20:30:28.304281 systemd-logind[1446]: Removed session 26. Sep 4 20:30:28.573588 systemd[1]: Started sshd@27-64.23.130.28:22-80.66.76.133:12158.service - OpenSSH per-connection server daemon (80.66.76.133:12158). Sep 4 20:30:28.608326 sshd[5265]: banner exchange: Connection from 80.66.76.133 port 12158: invalid format Sep 4 20:30:28.609603 systemd[1]: sshd@27-64.23.130.28:22-80.66.76.133:12158.service: Deactivated successfully. Sep 4 20:30:28.970840 systemd[1]: Started sshd@28-64.23.130.28:22-80.66.76.133:15197.service - OpenSSH per-connection server daemon (80.66.76.133:15197). Sep 4 20:30:28.995596 sshd[5269]: banner exchange: Connection from 80.66.76.133 port 15197: invalid format Sep 4 20:30:28.998347 systemd[1]: sshd@28-64.23.130.28:22-80.66.76.133:15197.service: Deactivated successfully.
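The short-lived connections from 80.66.76.133 at the end are scanner probes: the socket-activated sshd accepts them, the client never sends a valid identification string, and the banner exchange is rejected as "invalid format" before authentication is even attempted. RFC 4253 requires the client to open with a line of the form "SSH-2.0-<software>"; the listener below performs only that check and is an illustration, not sshd's implementation:

package main

import (
	"bufio"
	"fmt"
	"net"
	"strings"
	"time"
)

// handle performs only the identification-string ("banner") exchange from
// RFC 4253 section 4.2 and rejects anything that does not start with "SSH-".
func handle(conn net.Conn) {
	defer conn.Close()
	fmt.Fprint(conn, "SSH-2.0-ExampleServer\r\n")
	conn.SetReadDeadline(time.Now().Add(5 * time.Second))
	line, err := bufio.NewReader(conn).ReadString('\n')
	if err != nil || !strings.HasPrefix(line, "SSH-") {
		fmt.Printf("banner exchange: connection from %s: invalid format\n", conn.RemoteAddr())
		return
	}
	fmt.Printf("client identified as %q, key exchange would start here\n", strings.TrimSpace(line))
}

func main() {
	ln, err := net.Listen("tcp", "127.0.0.1:2222")
	if err != nil {
		panic(err)
	}
	for {
		conn, err := ln.Accept()
		if err != nil {
			return
		}
		go handle(conn)
	}
}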