Oct 9 01:00:12.064902 kernel: Linux version 6.6.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Oct 8 23:33:43 -00 2024
Oct 9 01:00:12.064948 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=ecc53326196a1bacd9ba781ce772ef34cdd5fe5561cf830307501ec3d5ba168a
Oct 9 01:00:12.064966 kernel: BIOS-provided physical RAM map:
Oct 9 01:00:12.064976 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Oct 9 01:00:12.064985 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Oct 9 01:00:12.064994 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Oct 9 01:00:12.065006 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffd7fff] usable
Oct 9 01:00:12.065016 kernel: BIOS-e820: [mem 0x000000007ffd8000-0x000000007fffffff] reserved
Oct 9 01:00:12.065026 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct 9 01:00:12.065041 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Oct 9 01:00:12.065051 kernel: NX (Execute Disable) protection: active
Oct 9 01:00:12.065062 kernel: APIC: Static calls initialized
Oct 9 01:00:12.065072 kernel: SMBIOS 2.8 present.
Oct 9 01:00:12.065083 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Oct 9 01:00:12.065097 kernel: Hypervisor detected: KVM
Oct 9 01:00:12.065112 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct 9 01:00:12.065124 kernel: kvm-clock: using sched offset of 5191947523 cycles
Oct 9 01:00:12.065138 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct 9 01:00:12.065151 kernel: tsc: Detected 2494.138 MHz processor
Oct 9 01:00:12.065165 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 9 01:00:12.065179 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 9 01:00:12.065192 kernel: last_pfn = 0x7ffd8 max_arch_pfn = 0x400000000
Oct 9 01:00:12.065206 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Oct 9 01:00:12.065221 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Oct 9 01:00:12.065238 kernel: ACPI: Early table checksum verification disabled
Oct 9 01:00:12.065250 kernel: ACPI: RSDP 0x00000000000F5A50 000014 (v00 BOCHS )
Oct 9 01:00:12.065262 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 01:00:12.065276 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 01:00:12.065290 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 01:00:12.065302 kernel: ACPI: FACS 0x000000007FFE0000 000040
Oct 9 01:00:12.065314 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 01:00:12.065327 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 01:00:12.065342 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 01:00:12.065360 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 01:00:12.065375 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Oct 9 01:00:12.065389 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Oct 9 01:00:12.065403 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Oct 9 01:00:12.065416 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Oct 9 01:00:12.065429 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Oct 9 01:00:12.068553 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Oct 9 01:00:12.068585 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Oct 9 01:00:12.068603 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Oct 9 01:00:12.068618 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Oct 9 01:00:12.068633 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Oct 9 01:00:12.068648 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Oct 9 01:00:12.068663 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffd7fff] -> [mem 0x00000000-0x7ffd7fff]
Oct 9 01:00:12.068675 kernel: NODE_DATA(0) allocated [mem 0x7ffd2000-0x7ffd7fff]
Oct 9 01:00:12.068691 kernel: Zone ranges:
Oct 9 01:00:12.068703 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Oct 9 01:00:12.068715 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffd7fff]
Oct 9 01:00:12.068727 kernel: Normal empty
Oct 9 01:00:12.068738 kernel: Movable zone start for each node
Oct 9 01:00:12.068749 kernel: Early memory node ranges
Oct 9 01:00:12.068761 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Oct 9 01:00:12.068773 kernel: node 0: [mem 0x0000000000100000-0x000000007ffd7fff]
Oct 9 01:00:12.068785 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffd7fff]
Oct 9 01:00:12.068802 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 9 01:00:12.068816 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct 9 01:00:12.068828 kernel: On node 0, zone DMA32: 40 pages in unavailable ranges
Oct 9 01:00:12.068842 kernel: ACPI: PM-Timer IO Port: 0x608
Oct 9 01:00:12.068857 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct 9 01:00:12.068872 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct 9 01:00:12.068888 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct 9 01:00:12.068903 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct 9 01:00:12.068919 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 9 01:00:12.068939 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct 9 01:00:12.068955 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct 9 01:00:12.068970 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 9 01:00:12.068986 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Oct 9 01:00:12.069001 kernel: TSC deadline timer available
Oct 9 01:00:12.069038 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Oct 9 01:00:12.069050 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct 9 01:00:12.069063 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Oct 9 01:00:12.069078 kernel: Booting paravirtualized kernel on KVM
Oct 9 01:00:12.069098 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 9 01:00:12.069113 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Oct 9 01:00:12.069128 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576
Oct 9 01:00:12.069143 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152
Oct 9 01:00:12.069157 kernel: pcpu-alloc: [0] 0 1
Oct 9 01:00:12.069172 kernel: kvm-guest: PV spinlocks disabled, no host support
Oct 9 01:00:12.069189 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=ecc53326196a1bacd9ba781ce772ef34cdd5fe5561cf830307501ec3d5ba168a
Oct 9 01:00:12.069203 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Oct 9 01:00:12.069221 kernel: random: crng init done
Oct 9 01:00:12.069234 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 9 01:00:12.069247 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Oct 9 01:00:12.069261 kernel: Fallback order for Node 0: 0
Oct 9 01:00:12.069276 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515800
Oct 9 01:00:12.069291 kernel: Policy zone: DMA32
Oct 9 01:00:12.069306 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 9 01:00:12.069320 kernel: Memory: 1971192K/2096600K available (12288K kernel code, 2305K rwdata, 22728K rodata, 42872K init, 2316K bss, 125148K reserved, 0K cma-reserved)
Oct 9 01:00:12.069334 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Oct 9 01:00:12.069353 kernel: Kernel/User page tables isolation: enabled
Oct 9 01:00:12.069366 kernel: ftrace: allocating 37786 entries in 148 pages
Oct 9 01:00:12.069378 kernel: ftrace: allocated 148 pages with 3 groups
Oct 9 01:00:12.069389 kernel: Dynamic Preempt: voluntary
Oct 9 01:00:12.069401 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 9 01:00:12.069417 kernel: rcu: RCU event tracing is enabled.
Oct 9 01:00:12.069488 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Oct 9 01:00:12.069504 kernel: Trampoline variant of Tasks RCU enabled.
Oct 9 01:00:12.069516 kernel: Rude variant of Tasks RCU enabled.
Oct 9 01:00:12.069537 kernel: Tracing variant of Tasks RCU enabled.
Oct 9 01:00:12.069550 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 9 01:00:12.069563 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Oct 9 01:00:12.069575 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Oct 9 01:00:12.069587 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 9 01:00:12.069599 kernel: Console: colour VGA+ 80x25
Oct 9 01:00:12.069611 kernel: printk: console [tty0] enabled
Oct 9 01:00:12.069623 kernel: printk: console [ttyS0] enabled
Oct 9 01:00:12.069635 kernel: ACPI: Core revision 20230628
Oct 9 01:00:12.069648 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Oct 9 01:00:12.069667 kernel: APIC: Switch to symmetric I/O mode setup
Oct 9 01:00:12.069679 kernel: x2apic enabled
Oct 9 01:00:12.069691 kernel: APIC: Switched APIC routing to: physical x2apic
Oct 9 01:00:12.069704 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Oct 9 01:00:12.069716 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns
Oct 9 01:00:12.069729 kernel: Calibrating delay loop (skipped) preset value.. 4988.27 BogoMIPS (lpj=2494138)
Oct 9 01:00:12.069742 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Oct 9 01:00:12.069757 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Oct 9 01:00:12.069788 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 9 01:00:12.069801 kernel: Spectre V2 : Mitigation: Retpolines
Oct 9 01:00:12.069816 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Oct 9 01:00:12.069833 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Oct 9 01:00:12.069847 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Oct 9 01:00:12.069861 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct 9 01:00:12.069874 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct 9 01:00:12.069888 kernel: MDS: Mitigation: Clear CPU buffers
Oct 9 01:00:12.069903 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Oct 9 01:00:12.069923 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 9 01:00:12.069938 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 9 01:00:12.069954 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 9 01:00:12.069970 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Oct 9 01:00:12.069986 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Oct 9 01:00:12.069998 kernel: Freeing SMP alternatives memory: 32K
Oct 9 01:00:12.070012 kernel: pid_max: default: 32768 minimum: 301
Oct 9 01:00:12.070025 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Oct 9 01:00:12.070043 kernel: landlock: Up and running.
Oct 9 01:00:12.070055 kernel: SELinux: Initializing.
Oct 9 01:00:12.070067 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Oct 9 01:00:12.070080 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Oct 9 01:00:12.070117 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Oct 9 01:00:12.070134 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Oct 9 01:00:12.070150 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Oct 9 01:00:12.070167 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Oct 9 01:00:12.070184 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Oct 9 01:00:12.070201 kernel: signal: max sigframe size: 1776
Oct 9 01:00:12.070214 kernel: rcu: Hierarchical SRCU implementation.
Oct 9 01:00:12.070227 kernel: rcu: Max phase no-delay instances is 400.
Oct 9 01:00:12.070240 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Oct 9 01:00:12.070253 kernel: smp: Bringing up secondary CPUs ...
Oct 9 01:00:12.070268 kernel: smpboot: x86: Booting SMP configuration:
Oct 9 01:00:12.070282 kernel: .... node #0, CPUs: #1
Oct 9 01:00:12.070295 kernel: smp: Brought up 1 node, 2 CPUs
Oct 9 01:00:12.070308 kernel: smpboot: Max logical packages: 1
Oct 9 01:00:12.070326 kernel: smpboot: Total of 2 processors activated (9976.55 BogoMIPS)
Oct 9 01:00:12.070341 kernel: devtmpfs: initialized
Oct 9 01:00:12.070353 kernel: x86/mm: Memory block size: 128MB
Oct 9 01:00:12.070367 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 9 01:00:12.070380 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Oct 9 01:00:12.070393 kernel: pinctrl core: initialized pinctrl subsystem
Oct 9 01:00:12.070406 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 9 01:00:12.070419 kernel: audit: initializing netlink subsys (disabled)
Oct 9 01:00:12.072546 kernel: audit: type=2000 audit(1728435610.280:1): state=initialized audit_enabled=0 res=1
Oct 9 01:00:12.072603 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 9 01:00:12.072629 kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 9 01:00:12.072643 kernel: cpuidle: using governor menu
Oct 9 01:00:12.072657 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 9 01:00:12.072671 kernel: dca service started, version 1.12.1
Oct 9 01:00:12.072684 kernel: PCI: Using configuration type 1 for base access
Oct 9 01:00:12.072699 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 9 01:00:12.072713 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 9 01:00:12.072727 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct 9 01:00:12.072748 kernel: ACPI: Added _OSI(Module Device)
Oct 9 01:00:12.072763 kernel: ACPI: Added _OSI(Processor Device)
Oct 9 01:00:12.072778 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct 9 01:00:12.072791 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 9 01:00:12.072804 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 9 01:00:12.072818 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Oct 9 01:00:12.072831 kernel: ACPI: Interpreter enabled
Oct 9 01:00:12.072843 kernel: ACPI: PM: (supports S0 S5)
Oct 9 01:00:12.072856 kernel: ACPI: Using IOAPIC for interrupt routing
Oct 9 01:00:12.072874 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 9 01:00:12.072887 kernel: PCI: Using E820 reservations for host bridge windows
Oct 9 01:00:12.072900 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Oct 9 01:00:12.072913 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 9 01:00:12.073266 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Oct 9 01:00:12.074638 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Oct 9 01:00:12.074870 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Oct 9 01:00:12.074905 kernel: acpiphp: Slot [3] registered
Oct 9 01:00:12.074923 kernel: acpiphp: Slot [4] registered
Oct 9 01:00:12.074939 kernel: acpiphp: Slot [5] registered
Oct 9 01:00:12.074954 kernel: acpiphp: Slot [6] registered
Oct 9 01:00:12.074970 kernel: acpiphp: Slot [7] registered
Oct 9 01:00:12.074987 kernel: acpiphp: Slot [8] registered
Oct 9 01:00:12.075004 kernel: acpiphp: Slot [9] registered
Oct 9 01:00:12.075022 kernel: acpiphp: Slot [10] registered
Oct 9 01:00:12.075039 kernel: acpiphp: Slot [11] registered
Oct 9 01:00:12.075056 kernel: acpiphp: Slot [12] registered
Oct 9 01:00:12.075077 kernel: acpiphp: Slot [13] registered
Oct 9 01:00:12.075093 kernel: acpiphp: Slot [14] registered
Oct 9 01:00:12.075110 kernel: acpiphp: Slot [15] registered
Oct 9 01:00:12.075127 kernel: acpiphp: Slot [16] registered
Oct 9 01:00:12.075152 kernel: acpiphp: Slot [17] registered
Oct 9 01:00:12.075199 kernel: acpiphp: Slot [18] registered
Oct 9 01:00:12.075216 kernel: acpiphp: Slot [19] registered
Oct 9 01:00:12.075232 kernel: acpiphp: Slot [20] registered
Oct 9 01:00:12.075249 kernel: acpiphp: Slot [21] registered
Oct 9 01:00:12.075271 kernel: acpiphp: Slot [22] registered
Oct 9 01:00:12.075287 kernel: acpiphp: Slot [23] registered
Oct 9 01:00:12.075304 kernel: acpiphp: Slot [24] registered
Oct 9 01:00:12.075321 kernel: acpiphp: Slot [25] registered
Oct 9 01:00:12.075337 kernel: acpiphp: Slot [26] registered
Oct 9 01:00:12.075350 kernel: acpiphp: Slot [27] registered
Oct 9 01:00:12.075366 kernel: acpiphp: Slot [28] registered
Oct 9 01:00:12.075383 kernel: acpiphp: Slot [29] registered
Oct 9 01:00:12.075399 kernel: acpiphp: Slot [30] registered
Oct 9 01:00:12.075414 kernel: acpiphp: Slot [31] registered
Oct 9 01:00:12.077721 kernel: PCI host bridge to bus 0000:00
Oct 9 01:00:12.077997 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Oct 9 01:00:12.078163 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Oct 9 01:00:12.078305 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 9 01:00:12.078472 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Oct 9 01:00:12.078624 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Oct 9 01:00:12.078757 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 9 01:00:12.078970 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Oct 9 01:00:12.079166 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Oct 9 01:00:12.081489 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Oct 9 01:00:12.081784 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Oct 9 01:00:12.082000 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Oct 9 01:00:12.082203 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Oct 9 01:00:12.082358 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Oct 9 01:00:12.082633 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Oct 9 01:00:12.082823 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Oct 9 01:00:12.082991 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Oct 9 01:00:12.083177 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Oct 9 01:00:12.083343 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Oct 9 01:00:12.084359 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Oct 9 01:00:12.084593 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Oct 9 01:00:12.084772 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Oct 9 01:00:12.084932 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Oct 9 01:00:12.085098 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Oct 9 01:00:12.085258 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Oct 9 01:00:12.085424 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct 9 01:00:12.085651 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Oct 9 01:00:12.085832 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Oct 9 01:00:12.085988 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Oct 9 01:00:12.086175 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Oct 9 01:00:12.086375 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Oct 9 01:00:12.089745 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Oct 9 01:00:12.089967 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Oct 9 01:00:12.090228 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Oct 9 01:00:12.090483 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Oct 9 01:00:12.090653 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Oct 9 01:00:12.090812 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Oct 9 01:00:12.090970 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Oct 9 01:00:12.091156 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Oct 9 01:00:12.091316 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Oct 9 01:00:12.092601 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Oct 9 01:00:12.092808 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Oct 9 01:00:12.092999 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Oct 9 01:00:12.093161 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Oct 9 01:00:12.093320 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Oct 9 01:00:12.093504 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Oct 9 01:00:12.093683 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Oct 9 01:00:12.093858 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Oct 9 01:00:12.094039 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Oct 9 01:00:12.094063 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct 9 01:00:12.094111 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct 9 01:00:12.094126 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct 9 01:00:12.094140 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct 9 01:00:12.094154 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Oct 9 01:00:12.094167 kernel: iommu: Default domain type: Translated
Oct 9 01:00:12.094183 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 9 01:00:12.094204 kernel: PCI: Using ACPI for IRQ routing
Oct 9 01:00:12.094218 kernel: PCI: pci_cache_line_size set to 64 bytes
Oct 9 01:00:12.094233 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Oct 9 01:00:12.094248 kernel: e820: reserve RAM buffer [mem 0x7ffd8000-0x7fffffff]
Oct 9 01:00:12.094920 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Oct 9 01:00:12.095152 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Oct 9 01:00:12.095374 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct 9 01:00:12.095398 kernel: vgaarb: loaded
Oct 9 01:00:12.095424 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Oct 9 01:00:12.095476 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Oct 9 01:00:12.095492 kernel: clocksource: Switched to clocksource kvm-clock
Oct 9 01:00:12.095507 kernel: VFS: Disk quotas dquot_6.6.0
Oct 9 01:00:12.095522 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 9 01:00:12.095538 kernel: pnp: PnP ACPI init
Oct 9 01:00:12.095555 kernel: pnp: PnP ACPI: found 4 devices
Oct 9 01:00:12.095573 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 9 01:00:12.095590 kernel: NET: Registered PF_INET protocol family
Oct 9 01:00:12.095613 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 9 01:00:12.095631 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Oct 9 01:00:12.095648 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 9 01:00:12.095664 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 9 01:00:12.095679 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Oct 9 01:00:12.095694 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Oct 9 01:00:12.095711 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Oct 9 01:00:12.095727 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Oct 9 01:00:12.095743 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 9 01:00:12.095764 kernel: NET: Registered PF_XDP protocol family
Oct 9 01:00:12.095947 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Oct 9 01:00:12.096095 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Oct 9 01:00:12.096232 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct 9 01:00:12.096369 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Oct 9 01:00:12.096633 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Oct 9 01:00:12.096809 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Oct 9 01:00:12.096979 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Oct 9 01:00:12.097013 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Oct 9 01:00:12.097178 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 38940 usecs
Oct 9 01:00:12.097200 kernel: PCI: CLS 0 bytes, default 64
Oct 9 01:00:12.097214 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Oct 9 01:00:12.097229 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns
Oct 9 01:00:12.097245 kernel: Initialise system trusted keyrings
Oct 9 01:00:12.097261 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Oct 9 01:00:12.097278 kernel: Key type asymmetric registered
Oct 9 01:00:12.097294 kernel: Asymmetric key parser 'x509' registered
Oct 9 01:00:12.097318 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Oct 9 01:00:12.097334 kernel: io scheduler mq-deadline registered
Oct 9 01:00:12.097350 kernel: io scheduler kyber registered
Oct 9 01:00:12.097367 kernel: io scheduler bfq registered
Oct 9 01:00:12.097383 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Oct 9 01:00:12.097402 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Oct 9 01:00:12.097419 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Oct 9 01:00:12.097589 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Oct 9 01:00:12.097604 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 9 01:00:12.097625 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 9 01:00:12.097639 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct 9 01:00:12.097653 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct 9 01:00:12.097665 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct 9 01:00:12.097679 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Oct 9 01:00:12.097978 kernel: rtc_cmos 00:03: RTC can wake from S4
Oct 9 01:00:12.098156 kernel: rtc_cmos 00:03: registered as rtc0
Oct 9 01:00:12.098303 kernel: rtc_cmos 00:03: setting system clock to 2024-10-09T01:00:11 UTC (1728435611)
Oct 9 01:00:12.098478 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Oct 9 01:00:12.098500 kernel: intel_pstate: CPU model not supported
Oct 9 01:00:12.098515 kernel: NET: Registered PF_INET6 protocol family
Oct 9 01:00:12.098529 kernel: Segment Routing with IPv6
Oct 9 01:00:12.098542 kernel: In-situ OAM (IOAM) with IPv6
Oct 9 01:00:12.098555 kernel: NET: Registered PF_PACKET protocol family
Oct 9 01:00:12.098568 kernel: Key type dns_resolver registered
Oct 9 01:00:12.098582 kernel: IPI shorthand broadcast: enabled
Oct 9 01:00:12.098598 kernel: sched_clock: Marking stable (1203006126, 129543699)->(1444348080, -111798255)
Oct 9 01:00:12.098621 kernel: registered taskstats version 1
Oct 9 01:00:12.098637 kernel: Loading compiled-in X.509 certificates
Oct 9 01:00:12.098650 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.54-flatcar: 03ae66f5ce294ce3ab718ee0d7c4a4a6e8c5aae6'
Oct 9 01:00:12.098663 kernel: Key type .fscrypt registered
Oct 9 01:00:12.098676 kernel: Key type fscrypt-provisioning registered
Oct 9 01:00:12.098690 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 9 01:00:12.098706 kernel: ima: Allocated hash algorithm: sha1
Oct 9 01:00:12.098722 kernel: ima: No architecture policies found
Oct 9 01:00:12.098742 kernel: clk: Disabling unused clocks
Oct 9 01:00:12.098759 kernel: Freeing unused kernel image (initmem) memory: 42872K
Oct 9 01:00:12.098772 kernel: Write protecting the kernel read-only data: 36864k
Oct 9 01:00:12.098812 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Oct 9 01:00:12.098829 kernel: Run /init as init process
Oct 9 01:00:12.098843 kernel: with arguments:
Oct 9 01:00:12.098858 kernel: /init
Oct 9 01:00:12.098871 kernel: with environment:
Oct 9 01:00:12.098888 kernel: HOME=/
Oct 9 01:00:12.098910 kernel: TERM=linux
Oct 9 01:00:12.098927 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Oct 9 01:00:12.098949 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 9 01:00:12.098968 systemd[1]: Detected virtualization kvm.
Oct 9 01:00:12.098984 systemd[1]: Detected architecture x86-64.
Oct 9 01:00:12.099000 systemd[1]: Running in initrd.
Oct 9 01:00:12.099016 systemd[1]: No hostname configured, using default hostname.
Oct 9 01:00:12.099032 systemd[1]: Hostname set to .
Oct 9 01:00:12.099052 systemd[1]: Initializing machine ID from VM UUID.
Oct 9 01:00:12.099069 systemd[1]: Queued start job for default target initrd.target.
Oct 9 01:00:12.099085 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 9 01:00:12.099102 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 9 01:00:12.099122 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Oct 9 01:00:12.099140 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 9 01:00:12.099157 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Oct 9 01:00:12.099174 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Oct 9 01:00:12.099195 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Oct 9 01:00:12.099211 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Oct 9 01:00:12.099228 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 9 01:00:12.099245 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 9 01:00:12.099263 systemd[1]: Reached target paths.target - Path Units.
Oct 9 01:00:12.099279 systemd[1]: Reached target slices.target - Slice Units.
Oct 9 01:00:12.099298 systemd[1]: Reached target swap.target - Swaps.
Oct 9 01:00:12.099319 systemd[1]: Reached target timers.target - Timer Units.
Oct 9 01:00:12.099337 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Oct 9 01:00:12.099354 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 9 01:00:12.099372 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Oct 9 01:00:12.099390 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Oct 9 01:00:12.099412 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 9 01:00:12.101494 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 9 01:00:12.101557 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 9 01:00:12.101574 systemd[1]: Reached target sockets.target - Socket Units.
Oct 9 01:00:12.101590 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Oct 9 01:00:12.101606 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 9 01:00:12.101625 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Oct 9 01:00:12.101643 systemd[1]: Starting systemd-fsck-usr.service...
Oct 9 01:00:12.101669 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 9 01:00:12.101684 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 9 01:00:12.101699 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 9 01:00:12.101716 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Oct 9 01:00:12.101733 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 9 01:00:12.101802 systemd-journald[182]: Collecting audit messages is disabled.
Oct 9 01:00:12.101847 systemd[1]: Finished systemd-fsck-usr.service.
Oct 9 01:00:12.101865 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 9 01:00:12.101885 systemd-journald[182]: Journal started
Oct 9 01:00:12.101925 systemd-journald[182]: Runtime Journal (/run/log/journal/d9e99ab95ae04f5a997256ace3dacd89) is 4.9M, max 39.3M, 34.4M free.
Oct 9 01:00:12.103566 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 9 01:00:12.061039 systemd-modules-load[183]: Inserted module 'overlay'
Oct 9 01:00:12.155477 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 9 01:00:12.155516 kernel: Bridge firewalling registered
Oct 9 01:00:12.127233 systemd-modules-load[183]: Inserted module 'br_netfilter'
Oct 9 01:00:12.156517 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 9 01:00:12.158241 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 01:00:12.163166 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 9 01:00:12.171714 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 9 01:00:12.173656 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 9 01:00:12.176279 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 9 01:00:12.183747 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 9 01:00:12.213895 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 9 01:00:12.222976 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 9 01:00:12.223937 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 9 01:00:12.227040 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 9 01:00:12.233923 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Oct 9 01:00:12.243836 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 9 01:00:12.296946 dracut-cmdline[216]: dracut-dracut-053
Oct 9 01:00:12.300893 systemd-resolved[217]: Positive Trust Anchors:
Oct 9 01:00:12.300924 systemd-resolved[217]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 9 01:00:12.300981 systemd-resolved[217]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 9 01:00:12.308001 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=ecc53326196a1bacd9ba781ce772ef34cdd5fe5561cf830307501ec3d5ba168a
Oct 9 01:00:12.307414 systemd-resolved[217]: Defaulting to hostname 'linux'.
Oct 9 01:00:12.311267 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 9 01:00:12.312995 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 9 01:00:12.486534 kernel: SCSI subsystem initialized
Oct 9 01:00:12.498468 kernel: Loading iSCSI transport class v2.0-870.
Oct 9 01:00:12.517497 kernel: iscsi: registered transport (tcp)
Oct 9 01:00:12.545570 kernel: iscsi: registered transport (qla4xxx)
Oct 9 01:00:12.545668 kernel: QLogic iSCSI HBA Driver
Oct 9 01:00:12.615690 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Oct 9 01:00:12.624755 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Oct 9 01:00:12.662537 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 9 01:00:12.662631 kernel: device-mapper: uevent: version 1.0.3
Oct 9 01:00:12.664524 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Oct 9 01:00:12.714495 kernel: raid6: avx2x4 gen() 14853 MB/s
Oct 9 01:00:12.731474 kernel: raid6: avx2x2 gen() 15316 MB/s
Oct 9 01:00:12.748695 kernel: raid6: avx2x1 gen() 11083 MB/s
Oct 9 01:00:12.748810 kernel: raid6: using algorithm avx2x2 gen() 15316 MB/s
Oct 9 01:00:12.767509 kernel: raid6: .... xor() 17520 MB/s, rmw enabled
Oct 9 01:00:12.767593 kernel: raid6: using avx2x2 recovery algorithm
Oct 9 01:00:12.795474 kernel: xor: automatically using best checksumming function avx
Oct 9 01:00:13.017635 kernel: Btrfs loaded, zoned=no, fsverity=no
Oct 9 01:00:13.035006 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Oct 9 01:00:13.043833 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 9 01:00:13.080286 systemd-udevd[400]: Using default interface naming scheme 'v255'.
Oct 9 01:00:13.088733 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 9 01:00:13.098766 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Oct 9 01:00:13.128249 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation
Oct 9 01:00:13.177258 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 9 01:00:13.184788 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 9 01:00:13.269936 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 9 01:00:13.279079 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Oct 9 01:00:13.319142 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Oct 9 01:00:13.321525 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 9 01:00:13.322374 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 9 01:00:13.323798 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 9 01:00:13.328680 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Oct 9 01:00:13.363337 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Oct 9 01:00:13.385101 kernel: scsi host0: Virtio SCSI HBA
Oct 9 01:00:13.396962 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Oct 9 01:00:13.397307 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Oct 9 01:00:13.429491 kernel: libata version 3.00 loaded.
Oct 9 01:00:13.436598 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Oct 9 01:00:13.436693 kernel: GPT:9289727 != 125829119
Oct 9 01:00:13.436715 kernel: GPT:Alternate GPT header not at the end of the disk.
Oct 9 01:00:13.436734 kernel: GPT:9289727 != 125829119
Oct 9 01:00:13.436753 kernel: GPT: Use GNU Parted to correct GPT errors.
Oct 9 01:00:13.436770 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 9 01:00:13.436806 kernel: ata_piix 0000:00:01.1: version 2.13
Oct 9 01:00:13.439466 kernel: scsi host1: ata_piix
Oct 9 01:00:13.442461 kernel: cryptd: max_cpu_qlen set to 1000
Oct 9 01:00:13.442526 kernel: scsi host2: ata_piix
Oct 9 01:00:13.443605 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Oct 9 01:00:13.444624 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Oct 9 01:00:13.452482 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Oct 9 01:00:13.464595 kernel: virtio_blk virtio5: [vdb] 968 512-byte logical blocks (496 kB/484 KiB)
Oct 9 01:00:13.465537 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 9 01:00:13.465752 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 9 01:00:13.467582 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 9 01:00:13.467983 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 9 01:00:13.468172 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 01:00:13.472243 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Oct 9 01:00:13.479465 kernel: ACPI: bus type USB registered
Oct 9 01:00:13.480481 kernel: usbcore: registered new interface driver usbfs
Oct 9 01:00:13.481485 kernel: usbcore: registered new interface driver hub
Oct 9 01:00:13.481924 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 9 01:00:13.483873 kernel: usbcore: registered new device driver usb
Oct 9 01:00:13.545946 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 01:00:13.551747 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 9 01:00:13.587613 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 9 01:00:13.620410 kernel: AVX2 version of gcm_enc/dec engaged.
Oct 9 01:00:13.620504 kernel: AES CTR mode by8 optimization enabled
Oct 9 01:00:13.640458 kernel: BTRFS: device fsid 6ed52ce5-b2f8-4d16-8889-677a209bc377 devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (452)
Oct 9 01:00:13.655485 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (460)
Oct 9 01:00:13.665391 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Oct 9 01:00:13.674609 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Oct 9 01:00:13.690509 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Oct 9 01:00:13.692732 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Oct 9 01:00:13.701493 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Oct 9 01:00:13.701810 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Oct 9 01:00:13.701911 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 9 01:00:13.704893 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Oct 9 01:00:13.705095 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Oct 9 01:00:13.708700 kernel: hub 1-0:1.0: USB hub found
Oct 9 01:00:13.708989 kernel: hub 1-0:1.0: 2 ports detected
Oct 9 01:00:13.710033 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Oct 9 01:00:13.720989 disk-uuid[548]: Primary Header is updated.
Oct 9 01:00:13.720989 disk-uuid[548]: Secondary Entries is updated.
Oct 9 01:00:13.720989 disk-uuid[548]: Secondary Header is updated.
Oct 9 01:00:13.737627 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 9 01:00:13.744491 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 9 01:00:13.753479 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 9 01:00:14.754470 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 9 01:00:14.754554 disk-uuid[549]: The operation has completed successfully.
Oct 9 01:00:14.805021 systemd[1]: disk-uuid.service: Deactivated successfully.
Oct 9 01:00:14.805154 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Oct 9 01:00:14.822815 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Oct 9 01:00:14.827778 sh[562]: Success
Oct 9 01:00:14.843742 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Oct 9 01:00:14.919047 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Oct 9 01:00:14.928576 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Oct 9 01:00:14.933793 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Oct 9 01:00:14.961497 kernel: BTRFS info (device dm-0): first mount of filesystem 6ed52ce5-b2f8-4d16-8889-677a209bc377
Oct 9 01:00:14.961593 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Oct 9 01:00:14.961608 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Oct 9 01:00:14.963678 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Oct 9 01:00:14.963758 kernel: BTRFS info (device dm-0): using free space tree
Oct 9 01:00:14.972160 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Oct 9 01:00:14.973398 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Oct 9 01:00:14.979631 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Oct 9 01:00:14.982852 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Oct 9 01:00:14.995061 kernel: BTRFS info (device vda6): first mount of filesystem 7abc21fd-6b75-4be0-8205-dc564a91a608
Oct 9 01:00:14.995139 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 9 01:00:14.995174 kernel: BTRFS info (device vda6): using free space tree
Oct 9 01:00:15.004794 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 9 01:00:15.017463 kernel: BTRFS info (device vda6): last unmount of filesystem 7abc21fd-6b75-4be0-8205-dc564a91a608
Oct 9 01:00:15.017880 systemd[1]: mnt-oem.mount: Deactivated successfully.
Oct 9 01:00:15.025816 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Oct 9 01:00:15.033797 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Oct 9 01:00:15.151334 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 9 01:00:15.165830 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 9 01:00:15.202088 systemd-networkd[750]: lo: Link UP
Oct 9 01:00:15.202885 systemd-networkd[750]: lo: Gained carrier
Oct 9 01:00:15.207142 systemd-networkd[750]: Enumeration completed
Oct 9 01:00:15.207294 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 9 01:00:15.209764 ignition[646]: Ignition 2.19.0
Oct 9 01:00:15.209771 ignition[646]: Stage: fetch-offline
Oct 9 01:00:15.210816 systemd[1]: Reached target network.target - Network.
Oct 9 01:00:15.209809 ignition[646]: no configs at "/usr/lib/ignition/base.d"
Oct 9 01:00:15.210917 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Oct 9 01:00:15.209817 ignition[646]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Oct 9 01:00:15.210923 systemd-networkd[750]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Oct 9 01:00:15.210004 ignition[646]: parsed url from cmdline: ""
Oct 9 01:00:15.212835 systemd-networkd[750]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 9 01:00:15.210011 ignition[646]: no config URL provided
Oct 9 01:00:15.212841 systemd-networkd[750]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 9 01:00:15.210018 ignition[646]: reading system config file "/usr/lib/ignition/user.ign"
Oct 9 01:00:15.214369 systemd-networkd[750]: eth0: Link UP
Oct 9 01:00:15.210030 ignition[646]: no config at "/usr/lib/ignition/user.ign"
Oct 9 01:00:15.214375 systemd-networkd[750]: eth0: Gained carrier
Oct 9 01:00:15.210038 ignition[646]: failed to fetch config: resource requires networking
Oct 9 01:00:15.214389 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Oct 9 01:00:15.210412 ignition[646]: Ignition finished successfully
Oct 9 01:00:15.215072 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 9 01:00:15.216903 systemd-networkd[750]: eth1: Link UP
Oct 9 01:00:15.216908 systemd-networkd[750]: eth1: Gained carrier
Oct 9 01:00:15.216947 systemd-networkd[750]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 9 01:00:15.223666 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Oct 9 01:00:15.231887 systemd-networkd[750]: eth0: DHCPv4 address 165.232.149.110/20, gateway 165.232.144.1 acquired from 169.254.169.253
Oct 9 01:00:15.236590 systemd-networkd[750]: eth1: DHCPv4 address 10.124.0.4/20 acquired from 169.254.169.253
Oct 9 01:00:15.252304 ignition[755]: Ignition 2.19.0
Oct 9 01:00:15.252319 ignition[755]: Stage: fetch
Oct 9 01:00:15.252557 ignition[755]: no configs at "/usr/lib/ignition/base.d"
Oct 9 01:00:15.252568 ignition[755]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Oct 9 01:00:15.252708 ignition[755]: parsed url from cmdline: ""
Oct 9 01:00:15.252718 ignition[755]: no config URL provided
Oct 9 01:00:15.252727 ignition[755]: reading system config file "/usr/lib/ignition/user.ign"
Oct 9 01:00:15.252740 ignition[755]: no config at "/usr/lib/ignition/user.ign"
Oct 9 01:00:15.252772 ignition[755]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Oct 9 01:00:15.269911 ignition[755]: GET result: OK
Oct 9 01:00:15.270701 ignition[755]: parsing config with SHA512: cc50f3a0bc65eb89d87d687ab56077f7433a095fb465908e6f4c37f2ae75a964283c0ff1de2a10a2d1f8bf786f885da430703466e610ad4c20df1551ec490c85
Oct 9 01:00:15.277609 unknown[755]: fetched base config from "system"
Oct 9 01:00:15.277623 unknown[755]: fetched base config from "system"
Oct 9 01:00:15.278056 ignition[755]: fetch: fetch complete
Oct 9 01:00:15.277634 unknown[755]: fetched user config from "digitalocean"
Oct 9 01:00:15.278063 ignition[755]: fetch: fetch passed
Oct 9 01:00:15.282401 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Oct 9 01:00:15.278153 ignition[755]: Ignition finished successfully
Oct 9 01:00:15.289763 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Oct 9 01:00:15.326658 ignition[763]: Ignition 2.19.0
Oct 9 01:00:15.326678 ignition[763]: Stage: kargs
Oct 9 01:00:15.326994 ignition[763]: no configs at "/usr/lib/ignition/base.d"
Oct 9 01:00:15.327016 ignition[763]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Oct 9 01:00:15.329609 ignition[763]: kargs: kargs passed
Oct 9 01:00:15.329680 ignition[763]: Ignition finished successfully
Oct 9 01:00:15.331059 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Oct 9 01:00:15.337770 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Oct 9 01:00:15.373030 ignition[769]: Ignition 2.19.0
Oct 9 01:00:15.373053 ignition[769]: Stage: disks
Oct 9 01:00:15.373728 ignition[769]: no configs at "/usr/lib/ignition/base.d"
Oct 9 01:00:15.373766 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Oct 9 01:00:15.375739 ignition[769]: disks: disks passed
Oct 9 01:00:15.375828 ignition[769]: Ignition finished successfully
Oct 9 01:00:15.377307 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Oct 9 01:00:15.382506 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Oct 9 01:00:15.383644 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Oct 9 01:00:15.384893 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 9 01:00:15.385923 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 9 01:00:15.387733 systemd[1]: Reached target basic.target - Basic System.
Oct 9 01:00:15.392768 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Oct 9 01:00:15.422474 systemd-fsck[778]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Oct 9 01:00:15.429459 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Oct 9 01:00:15.436599 systemd[1]: Mounting sysroot.mount - /sysroot...
Oct 9 01:00:15.550487 kernel: EXT4-fs (vda9): mounted filesystem ba2945c1-be14-41c0-8c54-84d676c7a16b r/w with ordered data mode. Quota mode: none.
Oct 9 01:00:15.551193 systemd[1]: Mounted sysroot.mount - /sysroot.
Oct 9 01:00:15.555226 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Oct 9 01:00:15.561625 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 9 01:00:15.575649 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Oct 9 01:00:15.577648 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent...
Oct 9 01:00:15.587474 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (786)
Oct 9 01:00:15.586697 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Oct 9 01:00:15.595821 kernel: BTRFS info (device vda6): first mount of filesystem 7abc21fd-6b75-4be0-8205-dc564a91a608
Oct 9 01:00:15.595867 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 9 01:00:15.595885 kernel: BTRFS info (device vda6): using free space tree
Oct 9 01:00:15.589833 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Oct 9 01:00:15.589877 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 9 01:00:15.600752 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Oct 9 01:00:15.612962 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 9 01:00:15.609798 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Oct 9 01:00:15.620820 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 9 01:00:15.711479 initrd-setup-root[816]: cut: /sysroot/etc/passwd: No such file or directory
Oct 9 01:00:15.718345 coreos-metadata[789]: Oct 09 01:00:15.717 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Oct 9 01:00:15.720494 coreos-metadata[788]: Oct 09 01:00:15.720 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Oct 9 01:00:15.726172 initrd-setup-root[823]: cut: /sysroot/etc/group: No such file or directory
Oct 9 01:00:15.731492 coreos-metadata[789]: Oct 09 01:00:15.729 INFO Fetch successful
Oct 9 01:00:15.733393 coreos-metadata[788]: Oct 09 01:00:15.732 INFO Fetch successful
Oct 9 01:00:15.740012 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully.
Oct 9 01:00:15.740213 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent.
Oct 9 01:00:15.742274 coreos-metadata[789]: Oct 09 01:00:15.742 INFO wrote hostname ci-4116.0.0-d-2a8a4ec573 to /sysroot/etc/hostname
Oct 9 01:00:15.743789 initrd-setup-root[830]: cut: /sysroot/etc/shadow: No such file or directory
Oct 9 01:00:15.747722 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Oct 9 01:00:15.753386 initrd-setup-root[839]: cut: /sysroot/etc/gshadow: No such file or directory
Oct 9 01:00:15.885585 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Oct 9 01:00:15.893634 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Oct 9 01:00:15.897681 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Oct 9 01:00:15.907454 kernel: BTRFS info (device vda6): last unmount of filesystem 7abc21fd-6b75-4be0-8205-dc564a91a608
Oct 9 01:00:15.939564 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Oct 9 01:00:15.946316 ignition[907]: INFO : Ignition 2.19.0
Oct 9 01:00:15.947658 ignition[907]: INFO : Stage: mount
Oct 9 01:00:15.947658 ignition[907]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 9 01:00:15.947658 ignition[907]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Oct 9 01:00:15.949269 ignition[907]: INFO : mount: mount passed
Oct 9 01:00:15.949269 ignition[907]: INFO : Ignition finished successfully
Oct 9 01:00:15.949714 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Oct 9 01:00:15.956677 systemd[1]: Starting ignition-files.service - Ignition (files)...
Oct 9 01:00:15.959959 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Oct 9 01:00:15.976698 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 9 01:00:15.996490 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (921)
Oct 9 01:00:16.000036 kernel: BTRFS info (device vda6): first mount of filesystem 7abc21fd-6b75-4be0-8205-dc564a91a608
Oct 9 01:00:16.000124 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 9 01:00:16.000146 kernel: BTRFS info (device vda6): using free space tree
Oct 9 01:00:16.008482 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 9 01:00:16.010979 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 9 01:00:16.038162 ignition[939]: INFO : Ignition 2.19.0
Oct 9 01:00:16.038162 ignition[939]: INFO : Stage: files
Oct 9 01:00:16.039903 ignition[939]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 9 01:00:16.039903 ignition[939]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Oct 9 01:00:16.039903 ignition[939]: DEBUG : files: compiled without relabeling support, skipping
Oct 9 01:00:16.042480 ignition[939]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Oct 9 01:00:16.042480 ignition[939]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Oct 9 01:00:16.050345 ignition[939]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Oct 9 01:00:16.051102 ignition[939]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Oct 9 01:00:16.051102 ignition[939]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Oct 9 01:00:16.050986 unknown[939]: wrote ssh authorized keys file for user: core
Oct 9 01:00:16.053262 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Oct 9 01:00:16.053262 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Oct 9 01:00:16.090026 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Oct 9 01:00:16.187988 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Oct 9 01:00:16.187988 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Oct 9 01:00:16.190454 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Oct 9 01:00:16.190454 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Oct 9 01:00:16.190454 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Oct 9 01:00:16.190454 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 9 01:00:16.190454 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 9 01:00:16.190454 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 9 01:00:16.190454 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 9 01:00:16.190454 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Oct 9 01:00:16.190454 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Oct 9 01:00:16.190454 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Oct 9 01:00:16.190454 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Oct 9 01:00:16.190454 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Oct 9 01:00:16.190454 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Oct 9 01:00:16.301877 systemd-networkd[750]: eth0: Gained IPv6LL
Oct 9 01:00:16.640676 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Oct 9 01:00:16.893554 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Oct 9 01:00:16.893554 ignition[939]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Oct 9 01:00:16.896543 ignition[939]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 9 01:00:16.896543 ignition[939]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 9 01:00:16.896543 ignition[939]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Oct 9 01:00:16.896543 ignition[939]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Oct 9 01:00:16.896543 ignition[939]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Oct 9 01:00:16.896543 ignition[939]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 9 01:00:16.896543 ignition[939]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct 9 01:00:16.896543 ignition[939]: INFO : files: files passed
Oct 9 01:00:16.896543 ignition[939]: INFO : Ignition finished successfully
Oct 9 01:00:16.896815 systemd[1]: Finished ignition-files.service - Ignition (files).
Oct 9 01:00:16.904727 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Oct 9 01:00:16.907617 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Oct 9 01:00:16.912740 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 9 01:00:16.913687 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Oct 9 01:00:16.933891 initrd-setup-root-after-ignition[967]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 9 01:00:16.933891 initrd-setup-root-after-ignition[967]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Oct 9 01:00:16.935829 initrd-setup-root-after-ignition[971]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 9 01:00:16.938171 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 9 01:00:16.939838 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Oct 9 01:00:16.946710 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Oct 9 01:00:16.987341 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 9 01:00:16.987512 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Oct 9 01:00:16.988653 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Oct 9 01:00:16.989254 systemd[1]: Reached target initrd.target - Initrd Default Target.
Oct 9 01:00:16.990328 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Oct 9 01:00:16.995716 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Oct 9 01:00:17.014477 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 9 01:00:17.021767 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Oct 9 01:00:17.034269 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Oct 9 01:00:17.035573 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 9 01:00:17.036793 systemd[1]: Stopped target timers.target - Timer Units.
Oct 9 01:00:17.038026 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 9 01:00:17.038278 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 9 01:00:17.039198 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Oct 9 01:00:17.039725 systemd[1]: Stopped target basic.target - Basic System.
Oct 9 01:00:17.041292 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Oct 9 01:00:17.042163 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 9 01:00:17.043234 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Oct 9 01:00:17.044208 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Oct 9 01:00:17.045205 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 9 01:00:17.046352 systemd[1]: Stopped target sysinit.target - System Initialization.
Oct 9 01:00:17.047381 systemd[1]: Stopped target local-fs.target - Local File Systems.
Oct 9 01:00:17.048205 systemd[1]: Stopped target swap.target - Swaps.
Oct 9 01:00:17.049017 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 9 01:00:17.049169 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Oct 9 01:00:17.050282 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Oct 9 01:00:17.050991 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 9 01:00:17.052020 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Oct 9 01:00:17.053498 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 9 01:00:17.054685 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 9 01:00:17.054827 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Oct 9 01:00:17.056203 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 9 01:00:17.056455 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 9 01:00:17.057390 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 9 01:00:17.057634 systemd[1]: Stopped ignition-files.service - Ignition (files).
Oct 9 01:00:17.058460 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Oct 9 01:00:17.058579 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Oct 9 01:00:17.063860 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Oct 9 01:00:17.067752 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Oct 9 01:00:17.068152 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 9 01:00:17.068320 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 9 01:00:17.068885 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 9 01:00:17.069037 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 9 01:00:17.075020 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 9 01:00:17.075216 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Oct 9 01:00:17.087057 ignition[991]: INFO : Ignition 2.19.0
Oct 9 01:00:17.088197 ignition[991]: INFO : Stage: umount
Oct 9 01:00:17.088843 ignition[991]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 9 01:00:17.088843 ignition[991]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Oct 9 01:00:17.090552 ignition[991]: INFO : umount: umount passed
Oct 9 01:00:17.090963 ignition[991]: INFO : Ignition finished successfully
Oct 9 01:00:17.092695 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 9 01:00:17.092869 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Oct 9 01:00:17.094266 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 9 01:00:17.094322 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Oct 9 01:00:17.094765 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 9 01:00:17.094810 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Oct 9 01:00:17.095737 systemd[1]: ignition-fetch.service: Deactivated successfully.
Oct 9 01:00:17.095782 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Oct 9 01:00:17.097061 systemd[1]: Stopped target network.target - Network.
Oct 9 01:00:17.102372 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 9 01:00:17.104493 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 9 01:00:17.105150 systemd[1]: Stopped target paths.target - Path Units.
Oct 9 01:00:17.105826 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 9 01:00:17.109526 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 9 01:00:17.110026 systemd[1]: Stopped target slices.target - Slice Units.
Oct 9 01:00:17.111007 systemd[1]: Stopped target sockets.target - Socket Units.
Oct 9 01:00:17.111702 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 9 01:00:17.111752 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Oct 9 01:00:17.112390 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 9 01:00:17.112427 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 9 01:00:17.134935 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 9 01:00:17.135037 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Oct 9 01:00:17.135922 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Oct 9 01:00:17.136008 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Oct 9 01:00:17.136977 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Oct 9 01:00:17.138242 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Oct 9 01:00:17.140246 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 9 01:00:17.141853 systemd-networkd[750]: eth1: DHCPv6 lease lost
Oct 9 01:00:17.145590 systemd-networkd[750]: eth0: DHCPv6 lease lost
Oct 9 01:00:17.170051 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 9 01:00:17.171639 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Oct 9 01:00:17.173931 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 9 01:00:17.174681 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Oct 9 01:00:17.177109 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 9 01:00:17.177656 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Oct 9 01:00:17.180719 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 9 01:00:17.180827 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Oct 9 01:00:17.181821 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 9 01:00:17.181897 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Oct 9 01:00:17.188678 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Oct 9 01:00:17.189762 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 9 01:00:17.190328 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 9 01:00:17.191169 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 9 01:00:17.191233 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Oct 9 01:00:17.191640 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 9 01:00:17.191682 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Oct 9 01:00:17.192083 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 9 01:00:17.192122 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 9 01:00:17.193686 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 9 01:00:17.211859 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 9 01:00:17.212085 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 9 01:00:17.213516 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 9 01:00:17.213673 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Oct 9 01:00:17.215890 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 9 01:00:17.215968 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Oct 9 01:00:17.217040 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 9 01:00:17.217096 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 9 01:00:17.218049 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 9 01:00:17.218174 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Oct 9 01:00:17.219816 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 9 01:00:17.219885 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Oct 9 01:00:17.220840 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 9 01:00:17.220910 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 9 01:00:17.226771 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Oct 9 01:00:17.227411 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 9 01:00:17.227508 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 9 01:00:17.228833 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Oct 9 01:00:17.228903 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 9 01:00:17.231298 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 9 01:00:17.231374 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 9 01:00:17.232346 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 9 01:00:17.232413 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 01:00:17.248355 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 9 01:00:17.248599 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Oct 9 01:00:17.250367 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Oct 9 01:00:17.262785 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Oct 9 01:00:17.273418 systemd[1]: Switching root.
Oct 9 01:00:17.338811 systemd-journald[182]: Journal stopped
Oct 9 01:00:18.630176 systemd-journald[182]: Received SIGTERM from PID 1 (systemd).
Oct 9 01:00:18.630290 kernel: SELinux: policy capability network_peer_controls=1
Oct 9 01:00:18.630315 kernel: SELinux: policy capability open_perms=1
Oct 9 01:00:18.630346 kernel: SELinux: policy capability extended_socket_class=1
Oct 9 01:00:18.630366 kernel: SELinux: policy capability always_check_network=0
Oct 9 01:00:18.630386 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 9 01:00:18.630406 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 9 01:00:18.633048 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 9 01:00:18.633100 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 9 01:00:18.633134 kernel: audit: type=1403 audit(1728435617.497:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 9 01:00:18.633158 systemd[1]: Successfully loaded SELinux policy in 39.922ms.
Oct 9 01:00:18.633188 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 21.440ms.
Oct 9 01:00:18.633213 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 9 01:00:18.633236 systemd[1]: Detected virtualization kvm.
Oct 9 01:00:18.633257 systemd[1]: Detected architecture x86-64.
Oct 9 01:00:18.633286 systemd[1]: Detected first boot.
Oct 9 01:00:18.633307 systemd[1]: Hostname set to .
Oct 9 01:00:18.633328 systemd[1]: Initializing machine ID from VM UUID.
Oct 9 01:00:18.633349 zram_generator::config[1038]: No configuration found.
Oct 9 01:00:18.633371 systemd[1]: Populated /etc with preset unit settings.
Oct 9 01:00:18.633389 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 9 01:00:18.633410 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Oct 9 01:00:18.634005 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 9 01:00:18.634070 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Oct 9 01:00:18.634115 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Oct 9 01:00:18.634139 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Oct 9 01:00:18.634161 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Oct 9 01:00:18.634183 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Oct 9 01:00:18.634205 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Oct 9 01:00:18.634227 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Oct 9 01:00:18.634248 systemd[1]: Created slice user.slice - User and Session Slice.
Oct 9 01:00:18.634269 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 9 01:00:18.634297 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 9 01:00:18.634320 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Oct 9 01:00:18.634342 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Oct 9 01:00:18.634364 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Oct 9 01:00:18.634386 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 9 01:00:18.634407 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Oct 9 01:00:18.634428 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 9 01:00:18.637518 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Oct 9 01:00:18.637573 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Oct 9 01:00:18.637596 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Oct 9 01:00:18.637619 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Oct 9 01:00:18.637642 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 9 01:00:18.637664 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 9 01:00:18.637686 systemd[1]: Reached target slices.target - Slice Units.
Oct 9 01:00:18.637708 systemd[1]: Reached target swap.target - Swaps.
Oct 9 01:00:18.637730 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Oct 9 01:00:18.637752 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Oct 9 01:00:18.637775 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 9 01:00:18.637797 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 9 01:00:18.637819 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 9 01:00:18.637839 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Oct 9 01:00:18.637861 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Oct 9 01:00:18.637883 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Oct 9 01:00:18.637905 systemd[1]: Mounting media.mount - External Media Directory...
Oct 9 01:00:18.637926 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 01:00:18.637951 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Oct 9 01:00:18.637972 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Oct 9 01:00:18.637994 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Oct 9 01:00:18.638018 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 9 01:00:18.638039 systemd[1]: Reached target machines.target - Containers.
Oct 9 01:00:18.638073 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Oct 9 01:00:18.638097 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 9 01:00:18.638120 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 9 01:00:18.638146 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Oct 9 01:00:18.638168 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 9 01:00:18.638190 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 9 01:00:18.638213 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 9 01:00:18.638237 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Oct 9 01:00:18.638259 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 9 01:00:18.638281 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 9 01:00:18.638302 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 9 01:00:18.638328 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Oct 9 01:00:18.638351 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Oct 9 01:00:18.638372 systemd[1]: Stopped systemd-fsck-usr.service.
Oct 9 01:00:18.638394 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 9 01:00:18.638415 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 9 01:00:18.643516 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 9 01:00:18.643568 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Oct 9 01:00:18.643593 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 9 01:00:18.643617 systemd[1]: verity-setup.service: Deactivated successfully.
Oct 9 01:00:18.643648 systemd[1]: Stopped verity-setup.service.
Oct 9 01:00:18.643670 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 01:00:18.643693 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Oct 9 01:00:18.643715 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Oct 9 01:00:18.643737 systemd[1]: Mounted media.mount - External Media Directory.
Oct 9 01:00:18.643758 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Oct 9 01:00:18.643783 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Oct 9 01:00:18.643817 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Oct 9 01:00:18.643839 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 9 01:00:18.643865 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 9 01:00:18.643891 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Oct 9 01:00:18.643913 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 9 01:00:18.643934 kernel: fuse: init (API version 7.39)
Oct 9 01:00:18.643955 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 9 01:00:18.643977 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 9 01:00:18.644043 systemd-journald[1107]: Collecting audit messages is disabled.
Oct 9 01:00:18.644084 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 9 01:00:18.644108 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 9 01:00:18.644135 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Oct 9 01:00:18.644163 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 9 01:00:18.644185 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 9 01:00:18.644207 kernel: loop: module loaded
Oct 9 01:00:18.644228 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Oct 9 01:00:18.644250 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 9 01:00:18.644273 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 9 01:00:18.644291 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 9 01:00:18.644314 systemd-journald[1107]: Journal started
Oct 9 01:00:18.644369 systemd-journald[1107]: Runtime Journal (/run/log/journal/d9e99ab95ae04f5a997256ace3dacd89) is 4.9M, max 39.3M, 34.4M free.
Oct 9 01:00:18.654532 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Oct 9 01:00:18.251404 systemd[1]: Queued start job for default target multi-user.target.
Oct 9 01:00:18.273342 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Oct 9 01:00:18.274013 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 9 01:00:18.665088 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Oct 9 01:00:18.665173 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 9 01:00:18.675531 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 9 01:00:18.675644 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Oct 9 01:00:18.693593 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Oct 9 01:00:18.697982 kernel: ACPI: bus type drm_connector registered
Oct 9 01:00:18.705725 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Oct 9 01:00:18.707418 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 9 01:00:18.719526 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Oct 9 01:00:18.725282 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 9 01:00:18.733146 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Oct 9 01:00:18.738473 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 9 01:00:18.747494 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 9 01:00:18.759527 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Oct 9 01:00:18.774197 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 9 01:00:18.780555 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 9 01:00:18.784573 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Oct 9 01:00:18.785968 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 9 01:00:18.787543 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 9 01:00:18.788508 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Oct 9 01:00:18.789366 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Oct 9 01:00:18.797950 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Oct 9 01:00:18.863713 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Oct 9 01:00:18.880384 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Oct 9 01:00:18.894727 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Oct 9 01:00:18.898912 kernel: loop0: detected capacity change from 0 to 205544
Oct 9 01:00:18.903671 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Oct 9 01:00:18.944470 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Oct 9 01:00:18.974139 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 9 01:00:18.990278 systemd-journald[1107]: Time spent on flushing to /var/log/journal/d9e99ab95ae04f5a997256ace3dacd89 is 64.938ms for 994 entries.
Oct 9 01:00:18.990278 systemd-journald[1107]: System Journal (/var/log/journal/d9e99ab95ae04f5a997256ace3dacd89) is 8.0M, max 195.6M, 187.6M free.
Oct 9 01:00:19.074730 systemd-journald[1107]: Received client request to flush runtime journal.
Oct 9 01:00:19.074830 kernel: loop1: detected capacity change from 0 to 8
Oct 9 01:00:19.025363 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Oct 9 01:00:19.027346 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Oct 9 01:00:19.081469 kernel: loop2: detected capacity change from 0 to 138192
Oct 9 01:00:19.084558 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Oct 9 01:00:19.094973 systemd-tmpfiles[1137]: ACLs are not supported, ignoring.
Oct 9 01:00:19.095004 systemd-tmpfiles[1137]: ACLs are not supported, ignoring.
Oct 9 01:00:19.108371 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 9 01:00:19.117657 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Oct 9 01:00:19.137792 udevadm[1173]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Oct 9 01:00:19.143761 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 9 01:00:19.160194 kernel: loop3: detected capacity change from 0 to 140992
Oct 9 01:00:19.153710 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Oct 9 01:00:19.212317 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Oct 9 01:00:19.215484 kernel: loop4: detected capacity change from 0 to 205544
Oct 9 01:00:19.228687 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 9 01:00:19.243565 kernel: loop5: detected capacity change from 0 to 8
Oct 9 01:00:19.257689 kernel: loop6: detected capacity change from 0 to 138192
Oct 9 01:00:19.297118 systemd-tmpfiles[1181]: ACLs are not supported, ignoring.
Oct 9 01:00:19.297141 systemd-tmpfiles[1181]: ACLs are not supported, ignoring.
Oct 9 01:00:19.310470 kernel: loop7: detected capacity change from 0 to 140992
Oct 9 01:00:19.328488 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 9 01:00:19.352513 (sd-merge)[1180]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
Oct 9 01:00:19.358977 (sd-merge)[1180]: Merged extensions into '/usr'.
Oct 9 01:00:19.373747 systemd[1]: Reloading requested from client PID 1136 ('systemd-sysext') (unit systemd-sysext.service)...
Oct 9 01:00:19.373768 systemd[1]: Reloading...
Oct 9 01:00:19.524464 zram_generator::config[1209]: No configuration found.
Oct 9 01:00:19.542374 ldconfig[1132]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Oct 9 01:00:19.711367 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 9 01:00:19.771873 systemd[1]: Reloading finished in 397 ms.
Oct 9 01:00:19.796668 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Oct 9 01:00:19.800922 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Oct 9 01:00:19.815659 systemd[1]: Starting ensure-sysext.service...
Oct 9 01:00:19.818634 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 9 01:00:19.828466 systemd[1]: Reloading requested from client PID 1252 ('systemctl') (unit ensure-sysext.service)...
Oct 9 01:00:19.828482 systemd[1]: Reloading...
Oct 9 01:00:19.874791 systemd-tmpfiles[1253]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Oct 9 01:00:19.875139 systemd-tmpfiles[1253]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Oct 9 01:00:19.876274 systemd-tmpfiles[1253]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Oct 9 01:00:19.878770 systemd-tmpfiles[1253]: ACLs are not supported, ignoring.
Oct 9 01:00:19.880707 systemd-tmpfiles[1253]: ACLs are not supported, ignoring.
Oct 9 01:00:19.887161 systemd-tmpfiles[1253]: Detected autofs mount point /boot during canonicalization of boot.
Oct 9 01:00:19.887331 systemd-tmpfiles[1253]: Skipping /boot
Oct 9 01:00:19.916776 systemd-tmpfiles[1253]: Detected autofs mount point /boot during canonicalization of boot.
Oct 9 01:00:19.916952 systemd-tmpfiles[1253]: Skipping /boot
Oct 9 01:00:19.936460 zram_generator::config[1275]: No configuration found.
Oct 9 01:00:20.121972 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 9 01:00:20.183712 systemd[1]: Reloading finished in 354 ms.
Oct 9 01:00:20.204576 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Oct 9 01:00:20.205980 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 9 01:00:20.231737 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Oct 9 01:00:20.247841 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Oct 9 01:00:20.251677 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Oct 9 01:00:20.255729 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 9 01:00:20.268657 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 9 01:00:20.279708 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Oct 9 01:00:20.293784 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Oct 9 01:00:20.297919 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 01:00:20.298253 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 9 01:00:20.305932 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 9 01:00:20.311852 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 9 01:00:20.315480 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 9 01:00:20.316067 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 9 01:00:20.316197 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 01:00:20.320659 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 01:00:20.320862 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 9 01:00:20.321031 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 9 01:00:20.321150 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 01:00:20.325111 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 01:00:20.325500 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 9 01:00:20.334019 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 9 01:00:20.334783 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 9 01:00:20.335012 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 01:00:20.340544 systemd[1]: Finished ensure-sysext.service.
Oct 9 01:00:20.352662 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Oct 9 01:00:20.366294 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Oct 9 01:00:20.374535 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Oct 9 01:00:20.376426 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 9 01:00:20.379285 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 9 01:00:20.382582 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 9 01:00:20.386251 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 9 01:00:20.386441 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 9 01:00:20.387959 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 9 01:00:20.397891 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 9 01:00:20.398194 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 9 01:00:20.399599 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 9 01:00:20.400386 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 9 01:00:20.401072 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Oct 9 01:00:20.403386 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 9 01:00:20.422070 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Oct 9 01:00:20.422811 augenrules[1366]: No rules
Oct 9 01:00:20.426602 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 9 01:00:20.427538 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Oct 9 01:00:20.434798 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Oct 9 01:00:20.444340 systemd-udevd[1328]: Using default interface naming scheme 'v255'.
Oct 9 01:00:20.486592 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Oct 9 01:00:20.505581 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 9 01:00:20.515691 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 9 01:00:20.536785 systemd-resolved[1327]: Positive Trust Anchors:
Oct 9 01:00:20.541537 systemd-resolved[1327]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 9 01:00:20.541584 systemd-resolved[1327]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 9 01:00:20.542542 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Oct 9 01:00:20.543172 systemd[1]: Reached target time-set.target - System Time Set.
Oct 9 01:00:20.555163 systemd-resolved[1327]: Using system hostname 'ci-4116.0.0-d-2a8a4ec573'.
Oct 9 01:00:20.560343 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 9 01:00:20.560984 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 9 01:00:20.643169 systemd-networkd[1378]: lo: Link UP
Oct 9 01:00:20.643183 systemd-networkd[1378]: lo: Gained carrier
Oct 9 01:00:20.645502 systemd-networkd[1378]: Enumeration completed
Oct 9 01:00:20.645631 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 9 01:00:20.646599 systemd[1]: Reached target network.target - Network.
Oct 9 01:00:20.653742 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Oct 9 01:00:20.664733 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Oct 9 01:00:20.688647 systemd[1]: Mounting media-configdrive.mount - /media/configdrive...
Oct 9 01:00:20.689116 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 01:00:20.689875 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1393)
Oct 9 01:00:20.689270 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 9 01:00:20.694638 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 9 01:00:20.697697 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 9 01:00:20.701671 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 9 01:00:20.702334 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 9 01:00:20.702391 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 9 01:00:20.702420 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 01:00:20.711485 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1393)
Oct 9 01:00:20.714128 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 9 01:00:20.714378 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 9 01:00:20.732639 kernel: ISO 9660 Extensions: RRIP_1991A
Oct 9 01:00:20.736870 systemd[1]: Mounted media-configdrive.mount - /media/configdrive.
Oct 9 01:00:20.737731 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 9 01:00:20.738528 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 9 01:00:20.741374 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 9 01:00:20.741573 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 9 01:00:20.746366 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 9 01:00:20.746528 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 9 01:00:20.751474 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1381)
Oct 9 01:00:20.805809 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Oct 9 01:00:20.807453 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Oct 9 01:00:20.813253 systemd-networkd[1378]: eth0: Configuring with /run/systemd/network/10-6a:1b:f9:1d:fb:b4.network.
Oct 9 01:00:20.815335 systemd-networkd[1378]: eth0: Link UP
Oct 9 01:00:20.815346 systemd-networkd[1378]: eth0: Gained carrier
Oct 9 01:00:20.822188 systemd-networkd[1378]: eth1: Configuring with /run/systemd/network/10-06:cd:0b:cc:70:c4.network.
Oct 9 01:00:20.824084 systemd-networkd[1378]: eth1: Link UP
Oct 9 01:00:20.824096 systemd-networkd[1378]: eth1: Gained carrier
Oct 9 01:00:20.828357 systemd-timesyncd[1344]: Network configuration changed, trying to establish connection.
Oct 9 01:00:20.830609 systemd-timesyncd[1344]: Network configuration changed, trying to establish connection.
Oct 9 01:00:20.845484 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Oct 9 01:00:20.845572 kernel: ACPI: button: Power Button [PWRF]
Oct 9 01:00:20.881312 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 9 01:00:20.891667 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Oct 9 01:00:20.920382 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Oct 9 01:00:20.933952 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 9 01:00:20.939473 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Oct 9 01:00:20.946479 kernel: mousedev: PS/2 mouse device common for all mice
Oct 9 01:00:20.950505 kernel: Console: switching to colour dummy device 80x25
Oct 9 01:00:20.957876 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Oct 9 01:00:20.958102 kernel: [drm] features: -context_init
Oct 9 01:00:20.960882 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Oct 9 01:00:20.970609 kernel: [drm] number of scanouts: 1
Oct 9 01:00:20.970695 kernel: [drm] number of cap sets: 0
Oct 9 01:00:20.980467 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Oct 9 01:00:21.003498 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Oct 9 01:00:21.003579 kernel: Console: switching to colour frame buffer device 128x48
Oct 9 01:00:21.011465 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Oct 9 01:00:21.015608 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 9 01:00:21.015924 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 01:00:21.034817 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 9 01:00:21.051203 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 9 01:00:21.061677 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 01:00:21.111470 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 9 01:00:21.168164 kernel: EDAC MC: Ver: 3.0.0
Oct 9 01:00:21.196117 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Oct 9 01:00:21.202819 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Oct 9 01:00:21.228846 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 01:00:21.238309 lvm[1436]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 9 01:00:21.263303 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Oct 9 01:00:21.265711 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 9 01:00:21.265888 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 9 01:00:21.266298 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Oct 9 01:00:21.266519 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Oct 9 01:00:21.266922 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Oct 9 01:00:21.267227 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Oct 9 01:00:21.267359 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Oct 9 01:00:21.267747 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Oct 9 01:00:21.267807 systemd[1]: Reached target paths.target - Path Units.
Oct 9 01:00:21.268550 systemd[1]: Reached target timers.target - Timer Units.
Oct 9 01:00:21.271218 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Oct 9 01:00:21.273719 systemd[1]: Starting docker.socket - Docker Socket for the API...
Oct 9 01:00:21.282470 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Oct 9 01:00:21.292766 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Oct 9 01:00:21.296959 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Oct 9 01:00:21.300049 systemd[1]: Reached target sockets.target - Socket Units.
Oct 9 01:00:21.300729 lvm[1443]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 9 01:00:21.300741 systemd[1]: Reached target basic.target - Basic System.
Oct 9 01:00:21.301309 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Oct 9 01:00:21.301356 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Oct 9 01:00:21.309756 systemd[1]: Starting containerd.service - containerd container runtime...
Oct 9 01:00:21.324636 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Oct 9 01:00:21.329859 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Oct 9 01:00:21.333425 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Oct 9 01:00:21.338680 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Oct 9 01:00:21.342302 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Oct 9 01:00:21.350777 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Oct 9 01:00:21.354363 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Oct 9 01:00:21.363701 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Oct 9 01:00:21.374543 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Oct 9 01:00:21.388634 jq[1449]: false
Oct 9 01:00:21.390178 systemd[1]: Starting systemd-logind.service - User Login Management...
Oct 9 01:00:21.389694 dbus-daemon[1446]: [system] SELinux support is enabled
Oct 9 01:00:21.391306 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Oct 9 01:00:21.393610 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Oct 9 01:00:21.416009 extend-filesystems[1450]: Found loop4
Oct 9 01:00:21.416009 extend-filesystems[1450]: Found loop5
Oct 9 01:00:21.416009 extend-filesystems[1450]: Found loop6
Oct 9 01:00:21.416009 extend-filesystems[1450]: Found loop7
Oct 9 01:00:21.416009 extend-filesystems[1450]: Found vda
Oct 9 01:00:21.416009 extend-filesystems[1450]: Found vda1
Oct 9 01:00:21.416009 extend-filesystems[1450]: Found vda2
Oct 9 01:00:21.416009 extend-filesystems[1450]: Found vda3
Oct 9 01:00:21.416009 extend-filesystems[1450]: Found usr
Oct 9 01:00:21.416009 extend-filesystems[1450]: Found vda4
Oct 9 01:00:21.416009 extend-filesystems[1450]: Found vda6
Oct 9 01:00:21.416009 extend-filesystems[1450]: Found vda7
Oct 9 01:00:21.416009 extend-filesystems[1450]: Found vda9
Oct 9 01:00:21.416009 extend-filesystems[1450]: Checking size of /dev/vda9
Oct 9 01:00:21.396671 systemd[1]: Starting update-engine.service - Update Engine...
Oct 9 01:00:21.475617 coreos-metadata[1445]: Oct 09 01:00:21.427 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Oct 9 01:00:21.475617 coreos-metadata[1445]: Oct 09 01:00:21.445 INFO Fetch successful
Oct 9 01:00:21.402700 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Oct 9 01:00:21.407733 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Oct 9 01:00:21.412854 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Oct 9 01:00:21.423286 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Oct 9 01:00:21.423993 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Oct 9 01:00:21.427272 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Oct 9 01:00:21.428383 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Oct 9 01:00:21.474469 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Oct 9 01:00:21.476813 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Oct 9 01:00:21.476858 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Oct 9 01:00:21.483541 extend-filesystems[1450]: Resized partition /dev/vda9
Oct 9 01:00:21.478002 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Oct 9 01:00:21.478968 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean).
Oct 9 01:00:21.479014 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Oct 9 01:00:21.503471 extend-filesystems[1470]: resize2fs 1.47.1 (20-May-2024)
Oct 9 01:00:21.529789 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks
Oct 9 01:00:21.529863 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1391)
Oct 9 01:00:21.529931 jq[1458]: true
Oct 9 01:00:21.548474 tar[1461]: linux-amd64/helm
Oct 9 01:00:21.554597 (ntainerd)[1474]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Oct 9 01:00:21.564995 update_engine[1457]: I20241009 01:00:21.557401 1457 main.cc:92] Flatcar Update Engine starting
Oct 9 01:00:21.568155 systemd[1]: Started update-engine.service - Update Engine.
Oct 9 01:00:21.575642 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Oct 9 01:00:21.579509 update_engine[1457]: I20241009 01:00:21.577572 1457 update_check_scheduler.cc:74] Next update check in 2m55s
Oct 9 01:00:21.609234 systemd[1]: motdgen.service: Deactivated successfully.
Oct 9 01:00:21.611594 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Oct 9 01:00:21.623323 jq[1480]: true
Oct 9 01:00:21.661595 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Oct 9 01:00:21.666636 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Oct 9 01:00:21.728342 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Oct 9 01:00:21.749700 extend-filesystems[1470]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Oct 9 01:00:21.749700 extend-filesystems[1470]: old_desc_blocks = 1, new_desc_blocks = 8
Oct 9 01:00:21.749700 extend-filesystems[1470]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Oct 9 01:00:21.762866 extend-filesystems[1450]: Resized filesystem in /dev/vda9
Oct 9 01:00:21.762866 extend-filesystems[1450]: Found vdb
Oct 9 01:00:21.772823 bash[1508]: Updated "/home/core/.ssh/authorized_keys"
Oct 9 01:00:21.784192 systemd[1]: extend-filesystems.service: Deactivated successfully.
Oct 9 01:00:21.784881 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Oct 9 01:00:21.787448 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Oct 9 01:00:21.803877 systemd-logind[1456]: New seat seat0.
Oct 9 01:00:21.814722 systemd[1]: Starting sshkeys.service...
Oct 9 01:00:21.831890 systemd-logind[1456]: Watching system buttons on /dev/input/event1 (Power Button)
Oct 9 01:00:21.832963 systemd-logind[1456]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Oct 9 01:00:21.833484 systemd[1]: Started systemd-logind.service - User Login Management.
Oct 9 01:00:21.890941 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Oct 9 01:00:21.908136 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Oct 9 01:00:21.934536 systemd-networkd[1378]: eth1: Gained IPv6LL
Oct 9 01:00:21.934995 systemd-timesyncd[1344]: Network configuration changed, trying to establish connection.
Oct 9 01:00:21.940814 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Oct 9 01:00:21.952030 systemd[1]: Reached target network-online.target - Network is Online.
Oct 9 01:00:21.967862 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 01:00:21.980320 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Oct 9 01:00:22.081283 coreos-metadata[1520]: Oct 09 01:00:22.081 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Oct 9 01:00:22.083502 locksmithd[1488]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Oct 9 01:00:22.099895 coreos-metadata[1520]: Oct 09 01:00:22.099 INFO Fetch successful
Oct 9 01:00:22.109601 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Oct 9 01:00:22.129630 unknown[1520]: wrote ssh authorized keys file for user: core
Oct 9 01:00:22.193870 update-ssh-keys[1537]: Updated "/home/core/.ssh/authorized_keys"
Oct 9 01:00:22.196607 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Oct 9 01:00:22.205460 containerd[1474]: time="2024-10-09T01:00:22.203631815Z" level=info msg="starting containerd" revision=b2ce781edcbd6cb758f172ecab61c79d607cc41d version=v1.7.22
Oct 9 01:00:22.209954 systemd[1]: Finished sshkeys.service.
Oct 9 01:00:22.254586 systemd-networkd[1378]: eth0: Gained IPv6LL
Oct 9 01:00:22.255039 systemd-timesyncd[1344]: Network configuration changed, trying to establish connection.
Oct 9 01:00:22.267590 sshd_keygen[1487]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Oct 9 01:00:22.269532 containerd[1474]: time="2024-10-09T01:00:22.269026220Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Oct 9 01:00:22.273581 containerd[1474]: time="2024-10-09T01:00:22.273504773Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.54-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Oct 9 01:00:22.273581 containerd[1474]: time="2024-10-09T01:00:22.273553803Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Oct 9 01:00:22.273581 containerd[1474]: time="2024-10-09T01:00:22.273580253Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Oct 9 01:00:22.274311 containerd[1474]: time="2024-10-09T01:00:22.273805148Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Oct 9 01:00:22.274311 containerd[1474]: time="2024-10-09T01:00:22.273830939Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Oct 9 01:00:22.274311 containerd[1474]: time="2024-10-09T01:00:22.273890889Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Oct 9 01:00:22.274311 containerd[1474]: time="2024-10-09T01:00:22.273903711Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Oct 9 01:00:22.274311 containerd[1474]: time="2024-10-09T01:00:22.274223340Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct 9 01:00:22.274311 containerd[1474]: time="2024-10-09T01:00:22.274250463Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Oct 9 01:00:22.274311 containerd[1474]: time="2024-10-09T01:00:22.274271178Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Oct 9 01:00:22.274311 containerd[1474]: time="2024-10-09T01:00:22.274284787Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Oct 9 01:00:22.278680 containerd[1474]: time="2024-10-09T01:00:22.274428390Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Oct 9 01:00:22.278680 containerd[1474]: time="2024-10-09T01:00:22.277498204Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Oct 9 01:00:22.278680 containerd[1474]: time="2024-10-09T01:00:22.277698836Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct 9 01:00:22.278680 containerd[1474]: time="2024-10-09T01:00:22.277717266Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Oct 9 01:00:22.278680 containerd[1474]: time="2024-10-09T01:00:22.277838986Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Oct 9 01:00:22.278680 containerd[1474]: time="2024-10-09T01:00:22.277896147Z" level=info msg="metadata content store policy set" policy=shared
Oct 9 01:00:22.304469 containerd[1474]: time="2024-10-09T01:00:22.302909042Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Oct 9 01:00:22.304469 containerd[1474]: time="2024-10-09T01:00:22.303015804Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Oct 9 01:00:22.304469 containerd[1474]: time="2024-10-09T01:00:22.303048996Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Oct 9 01:00:22.304469 containerd[1474]: time="2024-10-09T01:00:22.303075761Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Oct 9 01:00:22.304469 containerd[1474]: time="2024-10-09T01:00:22.303097264Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Oct 9 01:00:22.304469 containerd[1474]: time="2024-10-09T01:00:22.303343433Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Oct 9 01:00:22.304469 containerd[1474]: time="2024-10-09T01:00:22.303687467Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Oct 9 01:00:22.304469 containerd[1474]: time="2024-10-09T01:00:22.303879016Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Oct 9 01:00:22.304469 containerd[1474]: time="2024-10-09T01:00:22.303907001Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Oct 9 01:00:22.304469 containerd[1474]: time="2024-10-09T01:00:22.303934254Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Oct 9 01:00:22.304469 containerd[1474]: time="2024-10-09T01:00:22.303969546Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Oct 9 01:00:22.304469 containerd[1474]: time="2024-10-09T01:00:22.303993346Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Oct 9 01:00:22.304469 containerd[1474]: time="2024-10-09T01:00:22.304012843Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Oct 9 01:00:22.304469 containerd[1474]: time="2024-10-09T01:00:22.304033991Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Oct 9 01:00:22.305013 containerd[1474]: time="2024-10-09T01:00:22.304055635Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Oct 9 01:00:22.305013 containerd[1474]: time="2024-10-09T01:00:22.304074021Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Oct 9 01:00:22.305013 containerd[1474]: time="2024-10-09T01:00:22.304092935Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Oct 9 01:00:22.305013 containerd[1474]: time="2024-10-09T01:00:22.304111437Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Oct 9 01:00:22.305013 containerd[1474]: time="2024-10-09T01:00:22.304140776Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Oct 9 01:00:22.305013 containerd[1474]: time="2024-10-09T01:00:22.304160650Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Oct 9 01:00:22.305013 containerd[1474]: time="2024-10-09T01:00:22.304200570Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Oct 9 01:00:22.305013 containerd[1474]: time="2024-10-09T01:00:22.304221093Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Oct 9 01:00:22.305013 containerd[1474]: time="2024-10-09T01:00:22.304238092Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Oct 9 01:00:22.305013 containerd[1474]: time="2024-10-09T01:00:22.304255963Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Oct 9 01:00:22.305013 containerd[1474]: time="2024-10-09T01:00:22.304274890Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Oct 9 01:00:22.305013 containerd[1474]: time="2024-10-09T01:00:22.304302485Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Oct 9 01:00:22.305013 containerd[1474]: time="2024-10-09T01:00:22.304333728Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Oct 9 01:00:22.305013 containerd[1474]: time="2024-10-09T01:00:22.304363837Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Oct 9 01:00:22.305494 containerd[1474]: time="2024-10-09T01:00:22.304381919Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Oct 9 01:00:22.305494 containerd[1474]: time="2024-10-09T01:00:22.304402005Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Oct 9 01:00:22.305494 containerd[1474]: time="2024-10-09T01:00:22.304420612Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Oct 9 01:00:22.306685 containerd[1474]: time="2024-10-09T01:00:22.305688571Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Oct 9 01:00:22.306685 containerd[1474]: time="2024-10-09T01:00:22.305743464Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Oct 9 01:00:22.306685 containerd[1474]: time="2024-10-09T01:00:22.305767059Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Oct 9 01:00:22.306685 containerd[1474]: time="2024-10-09T01:00:22.305788147Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Oct 9 01:00:22.306685 containerd[1474]: time="2024-10-09T01:00:22.305872969Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..."
type=io.containerd.tracing.processor.v1 Oct 9 01:00:22.306685 containerd[1474]: time="2024-10-09T01:00:22.305901534Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Oct 9 01:00:22.306685 containerd[1474]: time="2024-10-09T01:00:22.305986520Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 9 01:00:22.306685 containerd[1474]: time="2024-10-09T01:00:22.306009146Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Oct 9 01:00:22.306685 containerd[1474]: time="2024-10-09T01:00:22.306023960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 9 01:00:22.306685 containerd[1474]: time="2024-10-09T01:00:22.306060170Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Oct 9 01:00:22.306685 containerd[1474]: time="2024-10-09T01:00:22.306077026Z" level=info msg="NRI interface is disabled by configuration." Oct 9 01:00:22.306685 containerd[1474]: time="2024-10-09T01:00:22.306102913Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Oct 9 01:00:22.319406 containerd[1474]: time="2024-10-09T01:00:22.317644126Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 9 01:00:22.319406 containerd[1474]: time="2024-10-09T01:00:22.317755225Z" level=info msg="Connect containerd service" Oct 9 01:00:22.319406 containerd[1474]: time="2024-10-09T01:00:22.317831587Z" level=info msg="using legacy CRI server" Oct 9 01:00:22.319406 containerd[1474]: time="2024-10-09T01:00:22.317845017Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 9 01:00:22.319406 containerd[1474]: time="2024-10-09T01:00:22.318018710Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 9 01:00:22.319406 containerd[1474]: time="2024-10-09T01:00:22.319066206Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 9 01:00:22.326171 containerd[1474]: time="2024-10-09T01:00:22.323880660Z" level=info msg="Start subscribing containerd event" Oct 9 01:00:22.326171 containerd[1474]: time="2024-10-09T01:00:22.323977734Z" level=info msg="Start recovering state" Oct 9 01:00:22.326171 containerd[1474]: time="2024-10-09T01:00:22.324093009Z" level=info msg="Start event monitor" Oct 9 01:00:22.326171 containerd[1474]: time="2024-10-09T01:00:22.324128473Z" level=info msg="Start snapshots 
syncer" Oct 9 01:00:22.326171 containerd[1474]: time="2024-10-09T01:00:22.324139984Z" level=info msg="Start cni network conf syncer for default" Oct 9 01:00:22.326171 containerd[1474]: time="2024-10-09T01:00:22.324146847Z" level=info msg="Start streaming server" Oct 9 01:00:22.326171 containerd[1474]: time="2024-10-09T01:00:22.324539392Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 9 01:00:22.326171 containerd[1474]: time="2024-10-09T01:00:22.324604685Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 9 01:00:22.326171 containerd[1474]: time="2024-10-09T01:00:22.324668003Z" level=info msg="containerd successfully booted in 0.122054s" Oct 9 01:00:22.325133 systemd[1]: Started containerd.service - containerd container runtime. Oct 9 01:00:22.353038 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 9 01:00:22.369117 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 9 01:00:22.375830 systemd[1]: Started sshd@0-165.232.149.110:22-139.178.68.195:52564.service - OpenSSH per-connection server daemon (139.178.68.195:52564). Oct 9 01:00:22.400056 systemd[1]: issuegen.service: Deactivated successfully. Oct 9 01:00:22.401155 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 9 01:00:22.416317 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 9 01:00:22.461912 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 9 01:00:22.471829 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 9 01:00:22.483867 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Oct 9 01:00:22.485554 systemd[1]: Reached target getty.target - Login Prompts. 
Oct 9 01:00:22.535699 sshd[1553]: Accepted publickey for core from 139.178.68.195 port 52564 ssh2: RSA SHA256:rAraWF6dAhtbVQzAuCRwvYKxEoENakeAe95MuXIlOkk Oct 9 01:00:22.539574 sshd[1553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:00:22.557449 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 9 01:00:22.567780 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 9 01:00:22.573955 systemd-logind[1456]: New session 1 of user core. Oct 9 01:00:22.612941 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 9 01:00:22.628044 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 9 01:00:22.651563 (systemd)[1565]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 9 01:00:22.865637 systemd[1565]: Queued start job for default target default.target. Oct 9 01:00:22.872403 systemd[1565]: Created slice app.slice - User Application Slice. Oct 9 01:00:22.872450 systemd[1565]: Reached target paths.target - Paths. Oct 9 01:00:22.872466 systemd[1565]: Reached target timers.target - Timers. Oct 9 01:00:22.887596 systemd[1565]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 9 01:00:22.900719 systemd[1565]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 9 01:00:22.901371 systemd[1565]: Reached target sockets.target - Sockets. Oct 9 01:00:22.901389 systemd[1565]: Reached target basic.target - Basic System. Oct 9 01:00:22.901561 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 9 01:00:22.904326 systemd[1565]: Reached target default.target - Main User Target. Oct 9 01:00:22.904418 systemd[1565]: Startup finished in 235ms. Oct 9 01:00:22.911726 systemd[1]: Started session-1.scope - Session 1 of User core. 
Oct 9 01:00:22.978259 tar[1461]: linux-amd64/LICENSE Oct 9 01:00:22.978259 tar[1461]: linux-amd64/README.md Oct 9 01:00:23.009845 systemd[1]: Started sshd@1-165.232.149.110:22-139.178.68.195:52578.service - OpenSSH per-connection server daemon (139.178.68.195:52578). Oct 9 01:00:23.028670 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 9 01:00:23.091095 sshd[1578]: Accepted publickey for core from 139.178.68.195 port 52578 ssh2: RSA SHA256:rAraWF6dAhtbVQzAuCRwvYKxEoENakeAe95MuXIlOkk Oct 9 01:00:23.092846 sshd[1578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:00:23.099100 systemd-logind[1456]: New session 2 of user core. Oct 9 01:00:23.102661 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 9 01:00:23.171819 sshd[1578]: pam_unix(sshd:session): session closed for user core Oct 9 01:00:23.179672 systemd[1]: sshd@1-165.232.149.110:22-139.178.68.195:52578.service: Deactivated successfully. Oct 9 01:00:23.182019 systemd[1]: session-2.scope: Deactivated successfully. Oct 9 01:00:23.185647 systemd-logind[1456]: Session 2 logged out. Waiting for processes to exit. Oct 9 01:00:23.191873 systemd[1]: Started sshd@2-165.232.149.110:22-139.178.68.195:52594.service - OpenSSH per-connection server daemon (139.178.68.195:52594). Oct 9 01:00:23.197898 systemd-logind[1456]: Removed session 2. Oct 9 01:00:23.243465 sshd[1586]: Accepted publickey for core from 139.178.68.195 port 52594 ssh2: RSA SHA256:rAraWF6dAhtbVQzAuCRwvYKxEoENakeAe95MuXIlOkk Oct 9 01:00:23.245313 sshd[1586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:00:23.253773 systemd-logind[1456]: New session 3 of user core. Oct 9 01:00:23.263708 systemd[1]: Started session-3.scope - Session 3 of User core. 
Oct 9 01:00:23.332034 sshd[1586]: pam_unix(sshd:session): session closed for user core Oct 9 01:00:23.336485 systemd[1]: sshd@2-165.232.149.110:22-139.178.68.195:52594.service: Deactivated successfully. Oct 9 01:00:23.339395 systemd[1]: session-3.scope: Deactivated successfully. Oct 9 01:00:23.342192 systemd-logind[1456]: Session 3 logged out. Waiting for processes to exit. Oct 9 01:00:23.343924 systemd-logind[1456]: Removed session 3. Oct 9 01:00:23.609645 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 01:00:23.613040 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 9 01:00:23.615983 systemd[1]: Startup finished in 1.372s (kernel) + 5.761s (initrd) + 6.157s (userspace) = 13.291s. Oct 9 01:00:23.622185 (kubelet)[1597]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 9 01:00:24.383956 kubelet[1597]: E1009 01:00:24.383790 1597 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 9 01:00:24.385831 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 9 01:00:24.386016 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 9 01:00:24.386576 systemd[1]: kubelet.service: Consumed 1.267s CPU time. Oct 9 01:00:33.359012 systemd[1]: Started sshd@3-165.232.149.110:22-139.178.68.195:35628.service - OpenSSH per-connection server daemon (139.178.68.195:35628). 
Oct 9 01:00:33.402532 sshd[1609]: Accepted publickey for core from 139.178.68.195 port 35628 ssh2: RSA SHA256:rAraWF6dAhtbVQzAuCRwvYKxEoENakeAe95MuXIlOkk Oct 9 01:00:33.404095 sshd[1609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:00:33.410888 systemd-logind[1456]: New session 4 of user core. Oct 9 01:00:33.419751 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 9 01:00:33.482560 sshd[1609]: pam_unix(sshd:session): session closed for user core Oct 9 01:00:33.494356 systemd[1]: sshd@3-165.232.149.110:22-139.178.68.195:35628.service: Deactivated successfully. Oct 9 01:00:33.496886 systemd[1]: session-4.scope: Deactivated successfully. Oct 9 01:00:33.499677 systemd-logind[1456]: Session 4 logged out. Waiting for processes to exit. Oct 9 01:00:33.505778 systemd[1]: Started sshd@4-165.232.149.110:22-139.178.68.195:35644.service - OpenSSH per-connection server daemon (139.178.68.195:35644). Oct 9 01:00:33.507319 systemd-logind[1456]: Removed session 4. Oct 9 01:00:33.548777 sshd[1616]: Accepted publickey for core from 139.178.68.195 port 35644 ssh2: RSA SHA256:rAraWF6dAhtbVQzAuCRwvYKxEoENakeAe95MuXIlOkk Oct 9 01:00:33.550853 sshd[1616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:00:33.556585 systemd-logind[1456]: New session 5 of user core. Oct 9 01:00:33.567725 systemd[1]: Started session-5.scope - Session 5 of User core. Oct 9 01:00:33.624829 sshd[1616]: pam_unix(sshd:session): session closed for user core Oct 9 01:00:33.637402 systemd[1]: sshd@4-165.232.149.110:22-139.178.68.195:35644.service: Deactivated successfully. Oct 9 01:00:33.639341 systemd[1]: session-5.scope: Deactivated successfully. Oct 9 01:00:33.640740 systemd-logind[1456]: Session 5 logged out. Waiting for processes to exit. Oct 9 01:00:33.645204 systemd[1]: Started sshd@5-165.232.149.110:22-139.178.68.195:35658.service - OpenSSH per-connection server daemon (139.178.68.195:35658). 
Oct 9 01:00:33.647797 systemd-logind[1456]: Removed session 5. Oct 9 01:00:33.693644 sshd[1623]: Accepted publickey for core from 139.178.68.195 port 35658 ssh2: RSA SHA256:rAraWF6dAhtbVQzAuCRwvYKxEoENakeAe95MuXIlOkk Oct 9 01:00:33.695916 sshd[1623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:00:33.703277 systemd-logind[1456]: New session 6 of user core. Oct 9 01:00:33.708798 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 9 01:00:33.774959 sshd[1623]: pam_unix(sshd:session): session closed for user core Oct 9 01:00:33.786623 systemd[1]: sshd@5-165.232.149.110:22-139.178.68.195:35658.service: Deactivated successfully. Oct 9 01:00:33.788611 systemd[1]: session-6.scope: Deactivated successfully. Oct 9 01:00:33.790402 systemd-logind[1456]: Session 6 logged out. Waiting for processes to exit. Oct 9 01:00:33.795894 systemd[1]: Started sshd@6-165.232.149.110:22-139.178.68.195:35674.service - OpenSSH per-connection server daemon (139.178.68.195:35674). Oct 9 01:00:33.798348 systemd-logind[1456]: Removed session 6. Oct 9 01:00:33.842064 sshd[1631]: Accepted publickey for core from 139.178.68.195 port 35674 ssh2: RSA SHA256:rAraWF6dAhtbVQzAuCRwvYKxEoENakeAe95MuXIlOkk Oct 9 01:00:33.843802 sshd[1631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:00:33.848969 systemd-logind[1456]: New session 7 of user core. Oct 9 01:00:33.858767 systemd[1]: Started session-7.scope - Session 7 of User core. 
Oct 9 01:00:33.935464 sudo[1634]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 9 01:00:33.936876 sudo[1634]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 9 01:00:33.954424 sudo[1634]: pam_unix(sudo:session): session closed for user root Oct 9 01:00:33.959246 sshd[1631]: pam_unix(sshd:session): session closed for user core Oct 9 01:00:33.969806 systemd[1]: sshd@6-165.232.149.110:22-139.178.68.195:35674.service: Deactivated successfully. Oct 9 01:00:33.971950 systemd[1]: session-7.scope: Deactivated successfully. Oct 9 01:00:33.974667 systemd-logind[1456]: Session 7 logged out. Waiting for processes to exit. Oct 9 01:00:33.982942 systemd[1]: Started sshd@7-165.232.149.110:22-139.178.68.195:35690.service - OpenSSH per-connection server daemon (139.178.68.195:35690). Oct 9 01:00:33.984988 systemd-logind[1456]: Removed session 7. Oct 9 01:00:34.025687 sshd[1639]: Accepted publickey for core from 139.178.68.195 port 35690 ssh2: RSA SHA256:rAraWF6dAhtbVQzAuCRwvYKxEoENakeAe95MuXIlOkk Oct 9 01:00:34.027955 sshd[1639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:00:34.035560 systemd-logind[1456]: New session 8 of user core. Oct 9 01:00:34.040778 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 9 01:00:34.102818 sudo[1643]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 9 01:00:34.103325 sudo[1643]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 9 01:00:34.108593 sudo[1643]: pam_unix(sudo:session): session closed for user root Oct 9 01:00:34.117403 sudo[1642]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Oct 9 01:00:34.118372 sudo[1642]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 9 01:00:34.138077 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
Oct 9 01:00:34.176118 augenrules[1665]: No rules Oct 9 01:00:34.177752 systemd[1]: audit-rules.service: Deactivated successfully. Oct 9 01:00:34.178070 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 9 01:00:34.179671 sudo[1642]: pam_unix(sudo:session): session closed for user root Oct 9 01:00:34.183706 sshd[1639]: pam_unix(sshd:session): session closed for user core Oct 9 01:00:34.194068 systemd[1]: sshd@7-165.232.149.110:22-139.178.68.195:35690.service: Deactivated successfully. Oct 9 01:00:34.196649 systemd[1]: session-8.scope: Deactivated successfully. Oct 9 01:00:34.198756 systemd-logind[1456]: Session 8 logged out. Waiting for processes to exit. Oct 9 01:00:34.203845 systemd[1]: Started sshd@8-165.232.149.110:22-139.178.68.195:35700.service - OpenSSH per-connection server daemon (139.178.68.195:35700). Oct 9 01:00:34.206415 systemd-logind[1456]: Removed session 8. Oct 9 01:00:34.249528 sshd[1673]: Accepted publickey for core from 139.178.68.195 port 35700 ssh2: RSA SHA256:rAraWF6dAhtbVQzAuCRwvYKxEoENakeAe95MuXIlOkk Oct 9 01:00:34.251697 sshd[1673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:00:34.258471 systemd-logind[1456]: New session 9 of user core. Oct 9 01:00:34.275799 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 9 01:00:34.336642 sudo[1676]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 9 01:00:34.336973 sudo[1676]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 9 01:00:34.636463 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 9 01:00:34.646567 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 01:00:34.809566 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 9 01:00:34.811736 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 9 01:00:34.815829 (dockerd)[1702]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 9 01:00:34.823014 (kubelet)[1703]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 9 01:00:34.927165 kubelet[1703]: E1009 01:00:34.922036 1703 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 9 01:00:34.929241 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 9 01:00:34.929509 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 9 01:00:35.247401 dockerd[1702]: time="2024-10-09T01:00:35.244768915Z" level=info msg="Starting up" Oct 9 01:00:35.408601 dockerd[1702]: time="2024-10-09T01:00:35.408522439Z" level=info msg="Loading containers: start." Oct 9 01:00:35.621471 kernel: Initializing XFRM netlink socket Oct 9 01:00:35.651644 systemd-timesyncd[1344]: Network configuration changed, trying to establish connection. Oct 9 01:00:35.713259 systemd-timesyncd[1344]: Contacted time server 45.79.214.107:123 (2.flatcar.pool.ntp.org). Oct 9 01:00:35.713319 systemd-timesyncd[1344]: Initial clock synchronization to Wed 2024-10-09 01:00:35.774039 UTC. Oct 9 01:00:35.728575 systemd-networkd[1378]: docker0: Link UP Oct 9 01:00:35.765888 dockerd[1702]: time="2024-10-09T01:00:35.765719572Z" level=info msg="Loading containers: done." 
Oct 9 01:00:35.789621 dockerd[1702]: time="2024-10-09T01:00:35.789065731Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 9 01:00:35.789621 dockerd[1702]: time="2024-10-09T01:00:35.789196382Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Oct 9 01:00:35.789621 dockerd[1702]: time="2024-10-09T01:00:35.789339450Z" level=info msg="Daemon has completed initialization" Oct 9 01:00:35.844356 dockerd[1702]: time="2024-10-09T01:00:35.844119618Z" level=info msg="API listen on /run/docker.sock" Oct 9 01:00:35.844787 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 9 01:00:36.527027 containerd[1474]: time="2024-10-09T01:00:36.526972185Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.0\"" Oct 9 01:00:37.143936 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3167475063.mount: Deactivated successfully. 
Oct 9 01:00:38.456411 containerd[1474]: time="2024-10-09T01:00:38.456340669Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:00:38.458199 containerd[1474]: time="2024-10-09T01:00:38.458105127Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.0: active requests=0, bytes read=28066621" Oct 9 01:00:38.460216 containerd[1474]: time="2024-10-09T01:00:38.460168453Z" level=info msg="ImageCreate event name:\"sha256:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:00:38.465477 containerd[1474]: time="2024-10-09T01:00:38.465291029Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:00:38.467306 containerd[1474]: time="2024-10-09T01:00:38.466309043Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.0\" with image id \"sha256:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.0\", repo digest \"registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf\", size \"28063421\" in 1.939288898s" Oct 9 01:00:38.467306 containerd[1474]: time="2024-10-09T01:00:38.466363633Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.0\" returns image reference \"sha256:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3\"" Oct 9 01:00:38.470077 containerd[1474]: time="2024-10-09T01:00:38.469761194Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.0\"" Oct 9 01:00:40.056455 containerd[1474]: time="2024-10-09T01:00:40.055070850Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.0\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:00:40.056455 containerd[1474]: time="2024-10-09T01:00:40.056354876Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.0: active requests=0, bytes read=24690922" Oct 9 01:00:40.057154 containerd[1474]: time="2024-10-09T01:00:40.057115616Z" level=info msg="ImageCreate event name:\"sha256:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:00:40.061005 containerd[1474]: time="2024-10-09T01:00:40.060953606Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:00:40.062239 containerd[1474]: time="2024-10-09T01:00:40.062198594Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.0\" with image id \"sha256:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.0\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d\", size \"26240868\" in 1.592389817s" Oct 9 01:00:40.062376 containerd[1474]: time="2024-10-09T01:00:40.062361650Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.0\" returns image reference \"sha256:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1\"" Oct 9 01:00:40.064018 containerd[1474]: time="2024-10-09T01:00:40.063930864Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.0\"" Oct 9 01:00:41.342552 containerd[1474]: time="2024-10-09T01:00:41.342469358Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:00:41.343916 containerd[1474]: time="2024-10-09T01:00:41.343817112Z" level=info 
msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.0: active requests=0, bytes read=18646758" Oct 9 01:00:41.346182 containerd[1474]: time="2024-10-09T01:00:41.344952954Z" level=info msg="ImageCreate event name:\"sha256:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:00:41.348469 containerd[1474]: time="2024-10-09T01:00:41.348377172Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:00:41.350077 containerd[1474]: time="2024-10-09T01:00:41.349858153Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.0\" with image id \"sha256:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.0\", repo digest \"registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808\", size \"20196722\" in 1.285693132s" Oct 9 01:00:41.350077 containerd[1474]: time="2024-10-09T01:00:41.349897891Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.0\" returns image reference \"sha256:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94\"" Oct 9 01:00:41.350927 containerd[1474]: time="2024-10-09T01:00:41.350785631Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.0\"" Oct 9 01:00:41.809072 systemd-resolved[1327]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Oct 9 01:00:42.661561 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2911875277.mount: Deactivated successfully. 
Oct 9 01:00:43.207571 containerd[1474]: time="2024-10-09T01:00:43.207366388Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:00:43.208675 containerd[1474]: time="2024-10-09T01:00:43.208461613Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.0: active requests=0, bytes read=30208881" Oct 9 01:00:43.209385 containerd[1474]: time="2024-10-09T01:00:43.209321856Z" level=info msg="ImageCreate event name:\"sha256:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:00:43.211841 containerd[1474]: time="2024-10-09T01:00:43.211781463Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:00:43.212614 containerd[1474]: time="2024-10-09T01:00:43.212326904Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.0\" with image id \"sha256:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494\", repo tag \"registry.k8s.io/kube-proxy:v1.31.0\", repo digest \"registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe\", size \"30207900\" in 1.861489354s" Oct 9 01:00:43.212614 containerd[1474]: time="2024-10-09T01:00:43.212367483Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.0\" returns image reference \"sha256:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494\"" Oct 9 01:00:43.213309 containerd[1474]: time="2024-10-09T01:00:43.213290740Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Oct 9 01:00:43.749266 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2118338350.mount: Deactivated successfully. 
Oct 9 01:00:44.663297 containerd[1474]: time="2024-10-09T01:00:44.663221464Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:00:44.665266 containerd[1474]: time="2024-10-09T01:00:44.664927549Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Oct 9 01:00:44.666323 containerd[1474]: time="2024-10-09T01:00:44.666226465Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:00:44.672533 containerd[1474]: time="2024-10-09T01:00:44.670965680Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:00:44.672533 containerd[1474]: time="2024-10-09T01:00:44.672234820Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.458870566s" Oct 9 01:00:44.672533 containerd[1474]: time="2024-10-09T01:00:44.672277102Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Oct 9 01:00:44.673046 containerd[1474]: time="2024-10-09T01:00:44.673016814Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Oct 9 01:00:44.909776 systemd-resolved[1327]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. 
Oct 9 01:00:45.179785 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Oct 9 01:00:45.185722 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 01:00:45.373549 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount767433470.mount: Deactivated successfully. Oct 9 01:00:45.383745 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 01:00:45.384183 (kubelet)[2031]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 9 01:00:45.390358 containerd[1474]: time="2024-10-09T01:00:45.389023049Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:00:45.391504 containerd[1474]: time="2024-10-09T01:00:45.391447170Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Oct 9 01:00:45.392795 containerd[1474]: time="2024-10-09T01:00:45.392733273Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:00:45.396130 containerd[1474]: time="2024-10-09T01:00:45.396083190Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:00:45.396985 containerd[1474]: time="2024-10-09T01:00:45.396941005Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 723.801386ms" Oct 9 01:00:45.396985 containerd[1474]: time="2024-10-09T01:00:45.396986996Z" 
level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Oct 9 01:00:45.398347 containerd[1474]: time="2024-10-09T01:00:45.398323480Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Oct 9 01:00:45.457994 kubelet[2031]: E1009 01:00:45.457823 2031 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 9 01:00:45.460994 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 9 01:00:45.461146 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 9 01:00:45.921710 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2143172299.mount: Deactivated successfully. Oct 9 01:00:47.739832 containerd[1474]: time="2024-10-09T01:00:47.739770519Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:00:47.741445 containerd[1474]: time="2024-10-09T01:00:47.741150563Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56241740" Oct 9 01:00:47.742226 containerd[1474]: time="2024-10-09T01:00:47.742189409Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:00:47.745557 containerd[1474]: time="2024-10-09T01:00:47.745518101Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:00:47.747328 containerd[1474]: time="2024-10-09T01:00:47.747275539Z" 
level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.348695055s" Oct 9 01:00:47.747328 containerd[1474]: time="2024-10-09T01:00:47.747330530Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Oct 9 01:00:50.092348 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 01:00:50.108955 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 01:00:50.159946 systemd[1]: Reloading requested from client PID 2119 ('systemctl') (unit session-9.scope)... Oct 9 01:00:50.159971 systemd[1]: Reloading... Oct 9 01:00:50.303477 zram_generator::config[2158]: No configuration found. Oct 9 01:00:50.486219 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 9 01:00:50.592497 systemd[1]: Reloading finished in 432 ms. Oct 9 01:00:50.644663 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 9 01:00:50.644795 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 9 01:00:50.645265 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 01:00:50.653927 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 01:00:50.815722 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 9 01:00:50.824969 (kubelet)[2211]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 9 01:00:50.899563 kubelet[2211]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 9 01:00:50.899563 kubelet[2211]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 9 01:00:50.899563 kubelet[2211]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 9 01:00:50.901085 kubelet[2211]: I1009 01:00:50.900985 2211 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 9 01:00:51.279463 kubelet[2211]: I1009 01:00:51.278828 2211 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Oct 9 01:00:51.279463 kubelet[2211]: I1009 01:00:51.278884 2211 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 9 01:00:51.280110 kubelet[2211]: I1009 01:00:51.280081 2211 server.go:929] "Client rotation is on, will bootstrap in background" Oct 9 01:00:51.308873 kubelet[2211]: I1009 01:00:51.308835 2211 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 9 01:00:51.309855 kubelet[2211]: E1009 01:00:51.309808 2211 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
\"https://165.232.149.110:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 165.232.149.110:6443: connect: connection refused" logger="UnhandledError" Oct 9 01:00:51.319508 kubelet[2211]: E1009 01:00:51.319468 2211 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Oct 9 01:00:51.319728 kubelet[2211]: I1009 01:00:51.319716 2211 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Oct 9 01:00:51.326282 kubelet[2211]: I1009 01:00:51.326238 2211 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 9 01:00:51.328296 kubelet[2211]: I1009 01:00:51.328189 2211 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Oct 9 01:00:51.329508 kubelet[2211]: I1009 01:00:51.328780 2211 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 9 01:00:51.329508 kubelet[2211]: I1009 01:00:51.328865 2211 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4116.0.0-d-2a8a4ec573","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 9 01:00:51.329508 kubelet[2211]: I1009 01:00:51.329121 2211 topology_manager.go:138] "Creating topology manager with none policy" Oct 9 01:00:51.329508 kubelet[2211]: I1009 01:00:51.329135 2211 container_manager_linux.go:300] "Creating device plugin manager" Oct 9 01:00:51.329862 kubelet[2211]: I1009 01:00:51.329284 2211 state_mem.go:36] "Initialized new in-memory state store" Oct 9 01:00:51.337237 kubelet[2211]: I1009 01:00:51.336938 2211 kubelet.go:408] 
"Attempting to sync node with API server" Oct 9 01:00:51.337237 kubelet[2211]: I1009 01:00:51.337028 2211 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 9 01:00:51.337237 kubelet[2211]: I1009 01:00:51.337087 2211 kubelet.go:314] "Adding apiserver pod source" Oct 9 01:00:51.337237 kubelet[2211]: I1009 01:00:51.337110 2211 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 9 01:00:51.345499 kubelet[2211]: W1009 01:00:51.345252 2211 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://165.232.149.110:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4116.0.0-d-2a8a4ec573&limit=500&resourceVersion=0": dial tcp 165.232.149.110:6443: connect: connection refused Oct 9 01:00:51.345499 kubelet[2211]: E1009 01:00:51.345344 2211 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://165.232.149.110:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4116.0.0-d-2a8a4ec573&limit=500&resourceVersion=0\": dial tcp 165.232.149.110:6443: connect: connection refused" logger="UnhandledError" Oct 9 01:00:51.346014 kubelet[2211]: W1009 01:00:51.345964 2211 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://165.232.149.110:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 165.232.149.110:6443: connect: connection refused Oct 9 01:00:51.346053 kubelet[2211]: E1009 01:00:51.346036 2211 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://165.232.149.110:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 165.232.149.110:6443: connect: connection refused" logger="UnhandledError" Oct 9 01:00:51.346490 kubelet[2211]: I1009 01:00:51.346166 2211 
kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.22" apiVersion="v1" Oct 9 01:00:51.348381 kubelet[2211]: I1009 01:00:51.348216 2211 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 9 01:00:51.352455 kubelet[2211]: W1009 01:00:51.351676 2211 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 9 01:00:51.357687 kubelet[2211]: I1009 01:00:51.357644 2211 server.go:1269] "Started kubelet" Oct 9 01:00:51.361455 kubelet[2211]: I1009 01:00:51.360795 2211 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 9 01:00:51.374527 kubelet[2211]: I1009 01:00:51.373059 2211 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 9 01:00:51.374527 kubelet[2211]: I1009 01:00:51.373219 2211 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Oct 9 01:00:51.375212 kubelet[2211]: I1009 01:00:51.375164 2211 server.go:460] "Adding debug handlers to kubelet server" Oct 9 01:00:51.377026 kubelet[2211]: I1009 01:00:51.376980 2211 volume_manager.go:289] "Starting Kubelet Volume Manager" Oct 9 01:00:51.377373 kubelet[2211]: E1009 01:00:51.377315 2211 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4116.0.0-d-2a8a4ec573\" not found" Oct 9 01:00:51.377373 kubelet[2211]: I1009 01:00:51.377284 2211 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 9 01:00:51.378115 kubelet[2211]: I1009 01:00:51.378085 2211 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 9 01:00:51.379061 kubelet[2211]: I1009 01:00:51.379027 2211 factory.go:221] Registration of the systemd container factory successfully Oct 9 01:00:51.379329 kubelet[2211]: I1009 
01:00:51.379296 2211 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 9 01:00:51.384870 kubelet[2211]: I1009 01:00:51.384492 2211 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 9 01:00:51.384870 kubelet[2211]: I1009 01:00:51.384603 2211 reconciler.go:26] "Reconciler: start to sync state" Oct 9 01:00:51.385554 kubelet[2211]: E1009 01:00:51.379714 2211 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://165.232.149.110:6443/api/v1/namespaces/default/events\": dial tcp 165.232.149.110:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4116.0.0-d-2a8a4ec573.17fca311384a4f7b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4116.0.0-d-2a8a4ec573,UID:ci-4116.0.0-d-2a8a4ec573,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4116.0.0-d-2a8a4ec573,},FirstTimestamp:2024-10-09 01:00:51.357593467 +0000 UTC m=+0.526874764,LastTimestamp:2024-10-09 01:00:51.357593467 +0000 UTC m=+0.526874764,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4116.0.0-d-2a8a4ec573,}" Oct 9 01:00:51.388326 kubelet[2211]: E1009 01:00:51.386536 2211 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://165.232.149.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4116.0.0-d-2a8a4ec573?timeout=10s\": dial tcp 165.232.149.110:6443: connect: connection refused" interval="200ms" Oct 9 01:00:51.388326 kubelet[2211]: W1009 01:00:51.387370 2211 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://165.232.149.110:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 165.232.149.110:6443: connect: connection refused Oct 9 01:00:51.388326 kubelet[2211]: E1009 01:00:51.387475 2211 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://165.232.149.110:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 165.232.149.110:6443: connect: connection refused" logger="UnhandledError" Oct 9 01:00:51.389427 kubelet[2211]: I1009 01:00:51.389377 2211 factory.go:221] Registration of the containerd container factory successfully Oct 9 01:00:51.399055 kubelet[2211]: I1009 01:00:51.399001 2211 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 9 01:00:51.402645 kubelet[2211]: I1009 01:00:51.402606 2211 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 9 01:00:51.402960 kubelet[2211]: I1009 01:00:51.402850 2211 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 9 01:00:51.403134 kubelet[2211]: I1009 01:00:51.403121 2211 kubelet.go:2321] "Starting kubelet main sync loop" Oct 9 01:00:51.403288 kubelet[2211]: E1009 01:00:51.403257 2211 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 9 01:00:51.414062 kubelet[2211]: W1009 01:00:51.413981 2211 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://165.232.149.110:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 165.232.149.110:6443: connect: connection refused Oct 9 01:00:51.414271 kubelet[2211]: E1009 01:00:51.414249 2211 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://165.232.149.110:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 165.232.149.110:6443: connect: connection refused" logger="UnhandledError" Oct 9 01:00:51.415405 kubelet[2211]: E1009 01:00:51.415319 2211 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 9 01:00:51.425614 kubelet[2211]: I1009 01:00:51.425578 2211 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 9 01:00:51.425614 kubelet[2211]: I1009 01:00:51.425603 2211 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 9 01:00:51.425614 kubelet[2211]: I1009 01:00:51.425633 2211 state_mem.go:36] "Initialized new in-memory state store" Oct 9 01:00:51.430768 kubelet[2211]: I1009 01:00:51.430388 2211 policy_none.go:49] "None policy: Start" Oct 9 01:00:51.431539 kubelet[2211]: I1009 01:00:51.431470 2211 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 9 01:00:51.431645 kubelet[2211]: I1009 01:00:51.431574 2211 state_mem.go:35] "Initializing new in-memory state store" Oct 9 01:00:51.443051 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 9 01:00:51.456344 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 9 01:00:51.462617 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Oct 9 01:00:51.473492 kubelet[2211]: I1009 01:00:51.473170 2211 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 9 01:00:51.474542 kubelet[2211]: I1009 01:00:51.474497 2211 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 9 01:00:51.474667 kubelet[2211]: I1009 01:00:51.474529 2211 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 9 01:00:51.477060 kubelet[2211]: I1009 01:00:51.476200 2211 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 9 01:00:51.484670 kubelet[2211]: E1009 01:00:51.483402 2211 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4116.0.0-d-2a8a4ec573\" not found" Oct 9 01:00:51.518630 systemd[1]: Created slice kubepods-burstable-pod3e557e90d72879715115acb3d2a3d14c.slice - libcontainer container kubepods-burstable-pod3e557e90d72879715115acb3d2a3d14c.slice. Oct 9 01:00:51.532807 systemd[1]: Created slice kubepods-burstable-poda4faaa7ec49b71c380fbb7489ff04682.slice - libcontainer container kubepods-burstable-poda4faaa7ec49b71c380fbb7489ff04682.slice. Oct 9 01:00:51.551411 systemd[1]: Created slice kubepods-burstable-pod7e5556cc87d8184eaa3b3251c3a01738.slice - libcontainer container kubepods-burstable-pod7e5556cc87d8184eaa3b3251c3a01738.slice. 
Oct 9 01:00:51.577152 kubelet[2211]: I1009 01:00:51.576718 2211 kubelet_node_status.go:72] "Attempting to register node" node="ci-4116.0.0-d-2a8a4ec573" Oct 9 01:00:51.577152 kubelet[2211]: E1009 01:00:51.577089 2211 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://165.232.149.110:6443/api/v1/nodes\": dial tcp 165.232.149.110:6443: connect: connection refused" node="ci-4116.0.0-d-2a8a4ec573" Oct 9 01:00:51.588151 kubelet[2211]: E1009 01:00:51.588077 2211 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://165.232.149.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4116.0.0-d-2a8a4ec573?timeout=10s\": dial tcp 165.232.149.110:6443: connect: connection refused" interval="400ms" Oct 9 01:00:51.685696 kubelet[2211]: I1009 01:00:51.685635 2211 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3e557e90d72879715115acb3d2a3d14c-k8s-certs\") pod \"kube-apiserver-ci-4116.0.0-d-2a8a4ec573\" (UID: \"3e557e90d72879715115acb3d2a3d14c\") " pod="kube-system/kube-apiserver-ci-4116.0.0-d-2a8a4ec573" Oct 9 01:00:51.685696 kubelet[2211]: I1009 01:00:51.685686 2211 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3e557e90d72879715115acb3d2a3d14c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4116.0.0-d-2a8a4ec573\" (UID: \"3e557e90d72879715115acb3d2a3d14c\") " pod="kube-system/kube-apiserver-ci-4116.0.0-d-2a8a4ec573" Oct 9 01:00:51.685696 kubelet[2211]: I1009 01:00:51.685711 2211 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a4faaa7ec49b71c380fbb7489ff04682-k8s-certs\") pod \"kube-controller-manager-ci-4116.0.0-d-2a8a4ec573\" (UID: \"a4faaa7ec49b71c380fbb7489ff04682\") " 
pod="kube-system/kube-controller-manager-ci-4116.0.0-d-2a8a4ec573" Oct 9 01:00:51.685696 kubelet[2211]: I1009 01:00:51.685729 2211 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a4faaa7ec49b71c380fbb7489ff04682-kubeconfig\") pod \"kube-controller-manager-ci-4116.0.0-d-2a8a4ec573\" (UID: \"a4faaa7ec49b71c380fbb7489ff04682\") " pod="kube-system/kube-controller-manager-ci-4116.0.0-d-2a8a4ec573" Oct 9 01:00:51.686133 kubelet[2211]: I1009 01:00:51.685746 2211 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7e5556cc87d8184eaa3b3251c3a01738-kubeconfig\") pod \"kube-scheduler-ci-4116.0.0-d-2a8a4ec573\" (UID: \"7e5556cc87d8184eaa3b3251c3a01738\") " pod="kube-system/kube-scheduler-ci-4116.0.0-d-2a8a4ec573" Oct 9 01:00:51.686133 kubelet[2211]: I1009 01:00:51.685761 2211 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3e557e90d72879715115acb3d2a3d14c-ca-certs\") pod \"kube-apiserver-ci-4116.0.0-d-2a8a4ec573\" (UID: \"3e557e90d72879715115acb3d2a3d14c\") " pod="kube-system/kube-apiserver-ci-4116.0.0-d-2a8a4ec573" Oct 9 01:00:51.686133 kubelet[2211]: I1009 01:00:51.685776 2211 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a4faaa7ec49b71c380fbb7489ff04682-ca-certs\") pod \"kube-controller-manager-ci-4116.0.0-d-2a8a4ec573\" (UID: \"a4faaa7ec49b71c380fbb7489ff04682\") " pod="kube-system/kube-controller-manager-ci-4116.0.0-d-2a8a4ec573" Oct 9 01:00:51.686133 kubelet[2211]: I1009 01:00:51.685805 2211 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/a4faaa7ec49b71c380fbb7489ff04682-flexvolume-dir\") pod \"kube-controller-manager-ci-4116.0.0-d-2a8a4ec573\" (UID: \"a4faaa7ec49b71c380fbb7489ff04682\") " pod="kube-system/kube-controller-manager-ci-4116.0.0-d-2a8a4ec573" Oct 9 01:00:51.686133 kubelet[2211]: I1009 01:00:51.685822 2211 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a4faaa7ec49b71c380fbb7489ff04682-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4116.0.0-d-2a8a4ec573\" (UID: \"a4faaa7ec49b71c380fbb7489ff04682\") " pod="kube-system/kube-controller-manager-ci-4116.0.0-d-2a8a4ec573" Oct 9 01:00:51.779497 kubelet[2211]: I1009 01:00:51.779405 2211 kubelet_node_status.go:72] "Attempting to register node" node="ci-4116.0.0-d-2a8a4ec573" Oct 9 01:00:51.780096 kubelet[2211]: E1009 01:00:51.780043 2211 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://165.232.149.110:6443/api/v1/nodes\": dial tcp 165.232.149.110:6443: connect: connection refused" node="ci-4116.0.0-d-2a8a4ec573" Oct 9 01:00:51.830577 kubelet[2211]: E1009 01:00:51.830390 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 01:00:51.831787 containerd[1474]: time="2024-10-09T01:00:51.831428564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4116.0.0-d-2a8a4ec573,Uid:3e557e90d72879715115acb3d2a3d14c,Namespace:kube-system,Attempt:0,}" Oct 9 01:00:51.834020 systemd-resolved[1327]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3. 
Oct 9 01:00:51.838793 kubelet[2211]: E1009 01:00:51.838682 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 01:00:51.839810 containerd[1474]: time="2024-10-09T01:00:51.839656037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4116.0.0-d-2a8a4ec573,Uid:a4faaa7ec49b71c380fbb7489ff04682,Namespace:kube-system,Attempt:0,}" Oct 9 01:00:51.855401 kubelet[2211]: E1009 01:00:51.854837 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 01:00:51.855770 containerd[1474]: time="2024-10-09T01:00:51.855695368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4116.0.0-d-2a8a4ec573,Uid:7e5556cc87d8184eaa3b3251c3a01738,Namespace:kube-system,Attempt:0,}" Oct 9 01:00:51.989728 kubelet[2211]: E1009 01:00:51.989667 2211 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://165.232.149.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4116.0.0-d-2a8a4ec573?timeout=10s\": dial tcp 165.232.149.110:6443: connect: connection refused" interval="800ms" Oct 9 01:00:52.181795 kubelet[2211]: I1009 01:00:52.181729 2211 kubelet_node_status.go:72] "Attempting to register node" node="ci-4116.0.0-d-2a8a4ec573" Oct 9 01:00:52.184583 kubelet[2211]: E1009 01:00:52.184525 2211 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://165.232.149.110:6443/api/v1/nodes\": dial tcp 165.232.149.110:6443: connect: connection refused" node="ci-4116.0.0-d-2a8a4ec573" Oct 9 01:00:52.217254 kubelet[2211]: W1009 01:00:52.217147 2211 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://165.232.149.110:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 165.232.149.110:6443: connect: connection refused Oct 9 01:00:52.217254 kubelet[2211]: E1009 01:00:52.217246 2211 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://165.232.149.110:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 165.232.149.110:6443: connect: connection refused" logger="UnhandledError" Oct 9 01:00:52.538159 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3748815827.mount: Deactivated successfully. Oct 9 01:00:52.556772 containerd[1474]: time="2024-10-09T01:00:52.556709201Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 01:00:52.558315 containerd[1474]: time="2024-10-09T01:00:52.558110314Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 01:00:52.558680 containerd[1474]: time="2024-10-09T01:00:52.558627217Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Oct 9 01:00:52.560358 containerd[1474]: time="2024-10-09T01:00:52.559758361Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 01:00:52.560358 containerd[1474]: time="2024-10-09T01:00:52.560300888Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 9 01:00:52.565558 containerd[1474]: time="2024-10-09T01:00:52.565502572Z" level=info msg="ImageUpdate event 
name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 01:00:52.567100 containerd[1474]: time="2024-10-09T01:00:52.567035172Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 9 01:00:52.568303 containerd[1474]: time="2024-10-09T01:00:52.568259192Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 01:00:52.569453 containerd[1474]: time="2024-10-09T01:00:52.569307086Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 737.716329ms" Oct 9 01:00:52.574117 containerd[1474]: time="2024-10-09T01:00:52.574068539Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 734.265471ms" Oct 9 01:00:52.577061 containerd[1474]: time="2024-10-09T01:00:52.576877712Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 721.083288ms" Oct 9 01:00:52.716789 kubelet[2211]: W1009 01:00:52.716562 2211 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://165.232.149.110:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 165.232.149.110:6443: connect: connection refused Oct 9 01:00:52.716789 kubelet[2211]: E1009 01:00:52.716730 2211 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://165.232.149.110:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 165.232.149.110:6443: connect: connection refused" logger="UnhandledError" Oct 9 01:00:52.736992 kubelet[2211]: W1009 01:00:52.736690 2211 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://165.232.149.110:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4116.0.0-d-2a8a4ec573&limit=500&resourceVersion=0": dial tcp 165.232.149.110:6443: connect: connection refused Oct 9 01:00:52.736992 kubelet[2211]: E1009 01:00:52.736777 2211 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://165.232.149.110:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4116.0.0-d-2a8a4ec573&limit=500&resourceVersion=0\": dial tcp 165.232.149.110:6443: connect: connection refused" logger="UnhandledError" Oct 9 01:00:52.740494 kubelet[2211]: E1009 01:00:52.739541 2211 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://165.232.149.110:6443/api/v1/namespaces/default/events\": dial tcp 165.232.149.110:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4116.0.0-d-2a8a4ec573.17fca311384a4f7b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4116.0.0-d-2a8a4ec573,UID:ci-4116.0.0-d-2a8a4ec573,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:ci-4116.0.0-d-2a8a4ec573,},FirstTimestamp:2024-10-09 01:00:51.357593467 +0000 UTC m=+0.526874764,LastTimestamp:2024-10-09 01:00:51.357593467 +0000 UTC m=+0.526874764,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4116.0.0-d-2a8a4ec573,}" Oct 9 01:00:52.762869 containerd[1474]: time="2024-10-09T01:00:52.753953442Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:00:52.762869 containerd[1474]: time="2024-10-09T01:00:52.762629135Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:00:52.762869 containerd[1474]: time="2024-10-09T01:00:52.762646169Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:00:52.762869 containerd[1474]: time="2024-10-09T01:00:52.762781989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:00:52.766136 containerd[1474]: time="2024-10-09T01:00:52.766029168Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:00:52.766295 containerd[1474]: time="2024-10-09T01:00:52.766249877Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:00:52.766343 containerd[1474]: time="2024-10-09T01:00:52.766307737Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:00:52.766521 containerd[1474]: time="2024-10-09T01:00:52.766487083Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:00:52.777705 containerd[1474]: time="2024-10-09T01:00:52.777483546Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:00:52.777705 containerd[1474]: time="2024-10-09T01:00:52.777577479Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:00:52.777705 containerd[1474]: time="2024-10-09T01:00:52.777600502Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:00:52.778260 containerd[1474]: time="2024-10-09T01:00:52.778095545Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:00:52.792575 kubelet[2211]: E1009 01:00:52.791116 2211 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://165.232.149.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4116.0.0-d-2a8a4ec573?timeout=10s\": dial tcp 165.232.149.110:6443: connect: connection refused" interval="1.6s" Oct 9 01:00:52.803674 systemd[1]: Started cri-containerd-1bd1c0a6d8d59cfcc0a2e01494f676197a1c188274d46eff97aeae2a3aec10ac.scope - libcontainer container 1bd1c0a6d8d59cfcc0a2e01494f676197a1c188274d46eff97aeae2a3aec10ac. Oct 9 01:00:52.829709 systemd[1]: Started cri-containerd-3c15b76ab11a156e7d64174371950cbe5655d805c8f56a91231fc9eb2b069b6b.scope - libcontainer container 3c15b76ab11a156e7d64174371950cbe5655d805c8f56a91231fc9eb2b069b6b. Oct 9 01:00:52.844735 systemd[1]: Started cri-containerd-46bfbfb050307008b4c868fb9fdaff6a7ea70e10ccb3609c7d0fe190787d3d19.scope - libcontainer container 46bfbfb050307008b4c868fb9fdaff6a7ea70e10ccb3609c7d0fe190787d3d19. 
Oct 9 01:00:52.856647 kubelet[2211]: W1009 01:00:52.856464 2211 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://165.232.149.110:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 165.232.149.110:6443: connect: connection refused Oct 9 01:00:52.857704 kubelet[2211]: E1009 01:00:52.857196 2211 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://165.232.149.110:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 165.232.149.110:6443: connect: connection refused" logger="UnhandledError" Oct 9 01:00:52.926927 containerd[1474]: time="2024-10-09T01:00:52.926757194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4116.0.0-d-2a8a4ec573,Uid:7e5556cc87d8184eaa3b3251c3a01738,Namespace:kube-system,Attempt:0,} returns sandbox id \"3c15b76ab11a156e7d64174371950cbe5655d805c8f56a91231fc9eb2b069b6b\"" Oct 9 01:00:52.930067 kubelet[2211]: E1009 01:00:52.929540 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 01:00:52.937644 containerd[1474]: time="2024-10-09T01:00:52.937241054Z" level=info msg="CreateContainer within sandbox \"3c15b76ab11a156e7d64174371950cbe5655d805c8f56a91231fc9eb2b069b6b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 9 01:00:52.938052 containerd[1474]: time="2024-10-09T01:00:52.938015116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4116.0.0-d-2a8a4ec573,Uid:a4faaa7ec49b71c380fbb7489ff04682,Namespace:kube-system,Attempt:0,} returns sandbox id \"1bd1c0a6d8d59cfcc0a2e01494f676197a1c188274d46eff97aeae2a3aec10ac\"" Oct 9 01:00:52.939603 kubelet[2211]: E1009 01:00:52.939298 2211 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 01:00:52.944643 containerd[1474]: time="2024-10-09T01:00:52.944593746Z" level=info msg="CreateContainer within sandbox \"1bd1c0a6d8d59cfcc0a2e01494f676197a1c188274d46eff97aeae2a3aec10ac\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 9 01:00:52.953279 containerd[1474]: time="2024-10-09T01:00:52.953236574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4116.0.0-d-2a8a4ec573,Uid:3e557e90d72879715115acb3d2a3d14c,Namespace:kube-system,Attempt:0,} returns sandbox id \"46bfbfb050307008b4c868fb9fdaff6a7ea70e10ccb3609c7d0fe190787d3d19\"" Oct 9 01:00:52.954052 kubelet[2211]: E1009 01:00:52.954021 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 01:00:52.956262 containerd[1474]: time="2024-10-09T01:00:52.956200050Z" level=info msg="CreateContainer within sandbox \"46bfbfb050307008b4c868fb9fdaff6a7ea70e10ccb3609c7d0fe190787d3d19\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 9 01:00:52.976332 containerd[1474]: time="2024-10-09T01:00:52.975895627Z" level=info msg="CreateContainer within sandbox \"3c15b76ab11a156e7d64174371950cbe5655d805c8f56a91231fc9eb2b069b6b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f0d6bc49afad0cf4f902c15285f99eae40d5489e5b28221a9824f3c134eb9448\"" Oct 9 01:00:52.977114 containerd[1474]: time="2024-10-09T01:00:52.977065576Z" level=info msg="StartContainer for \"f0d6bc49afad0cf4f902c15285f99eae40d5489e5b28221a9824f3c134eb9448\"" Oct 9 01:00:52.987813 kubelet[2211]: I1009 01:00:52.987766 2211 kubelet_node_status.go:72] "Attempting to register node" node="ci-4116.0.0-d-2a8a4ec573" Oct 9 01:00:52.989124 
kubelet[2211]: E1009 01:00:52.988964 2211 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://165.232.149.110:6443/api/v1/nodes\": dial tcp 165.232.149.110:6443: connect: connection refused" node="ci-4116.0.0-d-2a8a4ec573" Oct 9 01:00:52.989253 containerd[1474]: time="2024-10-09T01:00:52.988277129Z" level=info msg="CreateContainer within sandbox \"1bd1c0a6d8d59cfcc0a2e01494f676197a1c188274d46eff97aeae2a3aec10ac\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3508c94105f2e0b5d536d4c48cb35a478f814386e2a578de7484d202fe8c428c\"" Oct 9 01:00:52.989253 containerd[1474]: time="2024-10-09T01:00:52.988897945Z" level=info msg="StartContainer for \"3508c94105f2e0b5d536d4c48cb35a478f814386e2a578de7484d202fe8c428c\"" Oct 9 01:00:52.994095 containerd[1474]: time="2024-10-09T01:00:52.994016330Z" level=info msg="CreateContainer within sandbox \"46bfbfb050307008b4c868fb9fdaff6a7ea70e10ccb3609c7d0fe190787d3d19\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5e5a4a378a756e5d9e7b9e7686a6f4c389b4661ef6fe19d9dc6150658f3d3974\"" Oct 9 01:00:52.995649 containerd[1474]: time="2024-10-09T01:00:52.995541027Z" level=info msg="StartContainer for \"5e5a4a378a756e5d9e7b9e7686a6f4c389b4661ef6fe19d9dc6150658f3d3974\"" Oct 9 01:00:53.026694 systemd[1]: Started cri-containerd-f0d6bc49afad0cf4f902c15285f99eae40d5489e5b28221a9824f3c134eb9448.scope - libcontainer container f0d6bc49afad0cf4f902c15285f99eae40d5489e5b28221a9824f3c134eb9448. Oct 9 01:00:53.045804 systemd[1]: Started cri-containerd-3508c94105f2e0b5d536d4c48cb35a478f814386e2a578de7484d202fe8c428c.scope - libcontainer container 3508c94105f2e0b5d536d4c48cb35a478f814386e2a578de7484d202fe8c428c. Oct 9 01:00:53.066829 systemd[1]: Started cri-containerd-5e5a4a378a756e5d9e7b9e7686a6f4c389b4661ef6fe19d9dc6150658f3d3974.scope - libcontainer container 5e5a4a378a756e5d9e7b9e7686a6f4c389b4661ef6fe19d9dc6150658f3d3974. 
Oct 9 01:00:53.150848 containerd[1474]: time="2024-10-09T01:00:53.150574986Z" level=info msg="StartContainer for \"5e5a4a378a756e5d9e7b9e7686a6f4c389b4661ef6fe19d9dc6150658f3d3974\" returns successfully" Oct 9 01:00:53.163663 containerd[1474]: time="2024-10-09T01:00:53.163000037Z" level=info msg="StartContainer for \"3508c94105f2e0b5d536d4c48cb35a478f814386e2a578de7484d202fe8c428c\" returns successfully" Oct 9 01:00:53.165167 containerd[1474]: time="2024-10-09T01:00:53.165116243Z" level=info msg="StartContainer for \"f0d6bc49afad0cf4f902c15285f99eae40d5489e5b28221a9824f3c134eb9448\" returns successfully" Oct 9 01:00:53.422505 kubelet[2211]: E1009 01:00:53.422390 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 01:00:53.427157 kubelet[2211]: E1009 01:00:53.427097 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 01:00:53.430149 kubelet[2211]: E1009 01:00:53.430071 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 01:00:54.431705 kubelet[2211]: E1009 01:00:54.431658 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 01:00:54.590936 kubelet[2211]: I1009 01:00:54.590892 2211 kubelet_node_status.go:72] "Attempting to register node" node="ci-4116.0.0-d-2a8a4ec573" Oct 9 01:00:55.247462 kubelet[2211]: E1009 01:00:55.247362 2211 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4116.0.0-d-2a8a4ec573\" not found" node="ci-4116.0.0-d-2a8a4ec573" Oct 9 
01:00:55.348192 kubelet[2211]: I1009 01:00:55.347879 2211 apiserver.go:52] "Watching apiserver" Oct 9 01:00:55.384862 kubelet[2211]: I1009 01:00:55.384762 2211 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 9 01:00:55.418840 kubelet[2211]: I1009 01:00:55.418782 2211 kubelet_node_status.go:75] "Successfully registered node" node="ci-4116.0.0-d-2a8a4ec573" Oct 9 01:00:55.448094 kubelet[2211]: E1009 01:00:55.448046 2211 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4116.0.0-d-2a8a4ec573\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4116.0.0-d-2a8a4ec573" Oct 9 01:00:55.448674 kubelet[2211]: E1009 01:00:55.448255 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 01:00:57.214087 kubelet[2211]: W1009 01:00:57.214012 2211 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Oct 9 01:00:57.214593 kubelet[2211]: E1009 01:00:57.214313 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 01:00:57.438333 kubelet[2211]: E1009 01:00:57.437597 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 01:00:57.791140 systemd[1]: Reloading requested from client PID 2488 ('systemctl') (unit session-9.scope)... Oct 9 01:00:57.791163 systemd[1]: Reloading... Oct 9 01:00:57.948501 zram_generator::config[2536]: No configuration found. 
Oct 9 01:00:58.092367 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 9 01:00:58.192036 systemd[1]: Reloading finished in 400 ms. Oct 9 01:00:58.239083 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 01:00:58.240216 kubelet[2211]: I1009 01:00:58.240066 2211 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 9 01:00:58.255105 systemd[1]: kubelet.service: Deactivated successfully. Oct 9 01:00:58.255482 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 01:00:58.266810 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 01:00:58.432727 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 01:00:58.444932 (kubelet)[2577]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 9 01:00:58.525858 kubelet[2577]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 9 01:00:58.526256 kubelet[2577]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 9 01:00:58.526323 kubelet[2577]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 9 01:00:58.526487 kubelet[2577]: I1009 01:00:58.526446 2577 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 9 01:00:58.536841 kubelet[2577]: I1009 01:00:58.536787 2577 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Oct 9 01:00:58.537058 kubelet[2577]: I1009 01:00:58.537041 2577 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 9 01:00:58.537415 kubelet[2577]: I1009 01:00:58.537395 2577 server.go:929] "Client rotation is on, will bootstrap in background" Oct 9 01:00:58.539114 kubelet[2577]: I1009 01:00:58.539083 2577 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Oct 9 01:00:58.541512 kubelet[2577]: I1009 01:00:58.541484 2577 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 9 01:00:58.545823 kubelet[2577]: E1009 01:00:58.545784 2577 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Oct 9 01:00:58.545823 kubelet[2577]: I1009 01:00:58.545822 2577 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Oct 9 01:00:58.549258 kubelet[2577]: I1009 01:00:58.549225 2577 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 9 01:00:58.549381 kubelet[2577]: I1009 01:00:58.549348 2577 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Oct 9 01:00:58.549512 kubelet[2577]: I1009 01:00:58.549465 2577 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 9 01:00:58.549724 kubelet[2577]: I1009 01:00:58.549509 2577 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4116.0.0-d-2a8a4ec573","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyMa
nagerPolicyOptions":null,"CgroupVersion":2} Oct 9 01:00:58.549807 kubelet[2577]: I1009 01:00:58.549730 2577 topology_manager.go:138] "Creating topology manager with none policy" Oct 9 01:00:58.549807 kubelet[2577]: I1009 01:00:58.549740 2577 container_manager_linux.go:300] "Creating device plugin manager" Oct 9 01:00:58.549807 kubelet[2577]: I1009 01:00:58.549779 2577 state_mem.go:36] "Initialized new in-memory state store" Oct 9 01:00:58.549943 kubelet[2577]: I1009 01:00:58.549930 2577 kubelet.go:408] "Attempting to sync node with API server" Oct 9 01:00:58.549980 kubelet[2577]: I1009 01:00:58.549949 2577 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 9 01:00:58.554048 kubelet[2577]: I1009 01:00:58.553951 2577 kubelet.go:314] "Adding apiserver pod source" Oct 9 01:00:58.554048 kubelet[2577]: I1009 01:00:58.553993 2577 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 9 01:00:58.569902 kubelet[2577]: I1009 01:00:58.569865 2577 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.22" apiVersion="v1" Oct 9 01:00:58.570925 kubelet[2577]: I1009 01:00:58.570828 2577 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 9 01:00:58.573460 kubelet[2577]: I1009 01:00:58.572852 2577 server.go:1269] "Started kubelet" Oct 9 01:00:58.574905 kubelet[2577]: I1009 01:00:58.574877 2577 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 9 01:00:58.580546 kubelet[2577]: I1009 01:00:58.580484 2577 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Oct 9 01:00:58.582400 kubelet[2577]: I1009 01:00:58.581627 2577 server.go:460] "Adding debug handlers to kubelet server" Oct 9 01:00:58.584304 kubelet[2577]: I1009 01:00:58.584228 2577 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 9 01:00:58.584627 kubelet[2577]: I1009 01:00:58.584611 2577 
volume_manager.go:289] "Starting Kubelet Volume Manager" Oct 9 01:00:58.584958 kubelet[2577]: I1009 01:00:58.584924 2577 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 9 01:00:58.586954 kubelet[2577]: I1009 01:00:58.586933 2577 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 9 01:00:58.587097 kubelet[2577]: I1009 01:00:58.584678 2577 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 9 01:00:58.588391 kubelet[2577]: I1009 01:00:58.588367 2577 reconciler.go:26] "Reconciler: start to sync state" Oct 9 01:00:58.589836 kubelet[2577]: I1009 01:00:58.589806 2577 factory.go:221] Registration of the systemd container factory successfully Oct 9 01:00:58.590056 kubelet[2577]: I1009 01:00:58.590035 2577 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 9 01:00:58.592503 kubelet[2577]: E1009 01:00:58.592469 2577 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 9 01:00:58.593039 kubelet[2577]: I1009 01:00:58.593011 2577 factory.go:221] Registration of the containerd container factory successfully Oct 9 01:00:58.596606 kubelet[2577]: I1009 01:00:58.596564 2577 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 9 01:00:58.599860 kubelet[2577]: I1009 01:00:58.599814 2577 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6"
Oct 9 01:00:58.599860 kubelet[2577]: I1009 01:00:58.599864 2577 status_manager.go:217] "Starting to sync pod status with apiserver"
Oct 9 01:00:58.600060 kubelet[2577]: I1009 01:00:58.599884 2577 kubelet.go:2321] "Starting kubelet main sync loop"
Oct 9 01:00:58.600060 kubelet[2577]: E1009 01:00:58.599936 2577 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Oct 9 01:00:58.661702 kubelet[2577]: I1009 01:00:58.661276 2577 cpu_manager.go:214] "Starting CPU manager" policy="none"
Oct 9 01:00:58.661702 kubelet[2577]: I1009 01:00:58.661406 2577 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Oct 9 01:00:58.661702 kubelet[2577]: I1009 01:00:58.661429 2577 state_mem.go:36] "Initialized new in-memory state store"
Oct 9 01:00:58.661702 kubelet[2577]: I1009 01:00:58.661607 2577 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Oct 9 01:00:58.661702 kubelet[2577]: I1009 01:00:58.661617 2577 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Oct 9 01:00:58.661702 kubelet[2577]: I1009 01:00:58.661638 2577 policy_none.go:49] "None policy: Start"
Oct 9 01:00:58.663067 kubelet[2577]: I1009 01:00:58.663019 2577 memory_manager.go:170] "Starting memorymanager" policy="None"
Oct 9 01:00:58.663067 kubelet[2577]: I1009 01:00:58.663068 2577 state_mem.go:35] "Initializing new in-memory state store"
Oct 9 01:00:58.663306 kubelet[2577]: I1009 01:00:58.663286 2577 state_mem.go:75] "Updated machine memory state"
Oct 9 01:00:58.669019 kubelet[2577]: I1009 01:00:58.668952 2577 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Oct 9 01:00:58.669471 kubelet[2577]: I1009 01:00:58.669175 2577 eviction_manager.go:189] "Eviction manager: starting control loop"
Oct 9 01:00:58.669471 kubelet[2577]: I1009 01:00:58.669192 2577 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Oct 9 01:00:58.672642 kubelet[2577]: I1009 01:00:58.670585 2577 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Oct 9 01:00:58.719466 kubelet[2577]: W1009 01:00:58.717568 2577 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Oct 9 01:00:58.737441 kubelet[2577]: W1009 01:00:58.737246 2577 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Oct 9 01:00:58.738834 kubelet[2577]: W1009 01:00:58.738446 2577 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Oct 9 01:00:58.738834 kubelet[2577]: E1009 01:00:58.738507 2577 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4116.0.0-d-2a8a4ec573\" already exists" pod="kube-system/kube-controller-manager-ci-4116.0.0-d-2a8a4ec573"
Oct 9 01:00:58.778564 kubelet[2577]: I1009 01:00:58.776997 2577 kubelet_node_status.go:72] "Attempting to register node" node="ci-4116.0.0-d-2a8a4ec573"
Oct 9 01:00:58.789996 kubelet[2577]: I1009 01:00:58.789599 2577 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a4faaa7ec49b71c380fbb7489ff04682-kubeconfig\") pod \"kube-controller-manager-ci-4116.0.0-d-2a8a4ec573\" (UID: \"a4faaa7ec49b71c380fbb7489ff04682\") " pod="kube-system/kube-controller-manager-ci-4116.0.0-d-2a8a4ec573"
Oct 9 01:00:58.789996 kubelet[2577]: I1009 01:00:58.789647 2577 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7e5556cc87d8184eaa3b3251c3a01738-kubeconfig\") pod \"kube-scheduler-ci-4116.0.0-d-2a8a4ec573\" (UID: \"7e5556cc87d8184eaa3b3251c3a01738\") " pod="kube-system/kube-scheduler-ci-4116.0.0-d-2a8a4ec573"
Oct 9 01:00:58.789996 kubelet[2577]: I1009 01:00:58.789675 2577 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3e557e90d72879715115acb3d2a3d14c-ca-certs\") pod \"kube-apiserver-ci-4116.0.0-d-2a8a4ec573\" (UID: \"3e557e90d72879715115acb3d2a3d14c\") " pod="kube-system/kube-apiserver-ci-4116.0.0-d-2a8a4ec573"
Oct 9 01:00:58.789996 kubelet[2577]: I1009 01:00:58.789702 2577 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3e557e90d72879715115acb3d2a3d14c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4116.0.0-d-2a8a4ec573\" (UID: \"3e557e90d72879715115acb3d2a3d14c\") " pod="kube-system/kube-apiserver-ci-4116.0.0-d-2a8a4ec573"
Oct 9 01:00:58.789996 kubelet[2577]: I1009 01:00:58.789728 2577 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a4faaa7ec49b71c380fbb7489ff04682-ca-certs\") pod \"kube-controller-manager-ci-4116.0.0-d-2a8a4ec573\" (UID: \"a4faaa7ec49b71c380fbb7489ff04682\") " pod="kube-system/kube-controller-manager-ci-4116.0.0-d-2a8a4ec573"
Oct 9 01:00:58.790272 kubelet[2577]: I1009 01:00:58.789749 2577 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a4faaa7ec49b71c380fbb7489ff04682-flexvolume-dir\") pod \"kube-controller-manager-ci-4116.0.0-d-2a8a4ec573\" (UID: \"a4faaa7ec49b71c380fbb7489ff04682\") " pod="kube-system/kube-controller-manager-ci-4116.0.0-d-2a8a4ec573"
Oct 9 01:00:58.790272 kubelet[2577]: I1009 01:00:58.789789 2577 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3e557e90d72879715115acb3d2a3d14c-k8s-certs\") pod \"kube-apiserver-ci-4116.0.0-d-2a8a4ec573\" (UID: \"3e557e90d72879715115acb3d2a3d14c\") " pod="kube-system/kube-apiserver-ci-4116.0.0-d-2a8a4ec573"
Oct 9 01:00:58.790272 kubelet[2577]: I1009 01:00:58.789810 2577 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a4faaa7ec49b71c380fbb7489ff04682-k8s-certs\") pod \"kube-controller-manager-ci-4116.0.0-d-2a8a4ec573\" (UID: \"a4faaa7ec49b71c380fbb7489ff04682\") " pod="kube-system/kube-controller-manager-ci-4116.0.0-d-2a8a4ec573"
Oct 9 01:00:58.790272 kubelet[2577]: I1009 01:00:58.789831 2577 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a4faaa7ec49b71c380fbb7489ff04682-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4116.0.0-d-2a8a4ec573\" (UID: \"a4faaa7ec49b71c380fbb7489ff04682\") " pod="kube-system/kube-controller-manager-ci-4116.0.0-d-2a8a4ec573"
Oct 9 01:00:58.793458 kubelet[2577]: I1009 01:00:58.792484 2577 kubelet_node_status.go:111] "Node was previously registered" node="ci-4116.0.0-d-2a8a4ec573"
Oct 9 01:00:58.793458 kubelet[2577]: I1009 01:00:58.792589 2577 kubelet_node_status.go:75] "Successfully registered node" node="ci-4116.0.0-d-2a8a4ec573"
Oct 9 01:00:59.022597 kubelet[2577]: E1009 01:00:59.021644 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Oct 9 01:00:59.038889 kubelet[2577]: E1009 01:00:59.038807 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Oct 9 01:00:59.039738 kubelet[2577]: E1009 01:00:59.039700 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Oct 9 01:00:59.557368 kubelet[2577]: I1009 01:00:59.557330 2577 apiserver.go:52] "Watching apiserver"
Oct 9 01:00:59.588747 kubelet[2577]: I1009 01:00:59.588664 2577 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Oct 9 01:00:59.636597 kubelet[2577]: E1009 01:00:59.636246 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Oct 9 01:00:59.637664 kubelet[2577]: E1009 01:00:59.637629 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Oct 9 01:00:59.683021 kubelet[2577]: W1009 01:00:59.682989 2577 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Oct 9 01:00:59.684460 kubelet[2577]: E1009 01:00:59.683277 2577 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4116.0.0-d-2a8a4ec573\" already exists" pod="kube-system/kube-apiserver-ci-4116.0.0-d-2a8a4ec573"
Oct 9 01:00:59.684919 kubelet[2577]: E1009 01:00:59.684815 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Oct 9 01:00:59.726363 kubelet[2577]: I1009 01:00:59.726271 2577 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4116.0.0-d-2a8a4ec573" podStartSLOduration=1.726231375 podStartE2EDuration="1.726231375s" podCreationTimestamp="2024-10-09 01:00:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 01:00:59.690674559 +0000 UTC m=+1.224508071" watchObservedRunningTime="2024-10-09 01:00:59.726231375 +0000 UTC m=+1.260064904"
Oct 9 01:00:59.726588 kubelet[2577]: I1009 01:00:59.726513 2577 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4116.0.0-d-2a8a4ec573" podStartSLOduration=1.726503192 podStartE2EDuration="1.726503192s" podCreationTimestamp="2024-10-09 01:00:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 01:00:59.72596668 +0000 UTC m=+1.259800224" watchObservedRunningTime="2024-10-09 01:00:59.726503192 +0000 UTC m=+1.260336734"
Oct 9 01:00:59.771504 kubelet[2577]: I1009 01:00:59.771332 2577 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4116.0.0-d-2a8a4ec573" podStartSLOduration=2.7713123790000003 podStartE2EDuration="2.771312379s" podCreationTimestamp="2024-10-09 01:00:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 01:00:59.754703072 +0000 UTC m=+1.288536599" watchObservedRunningTime="2024-10-09 01:00:59.771312379 +0000 UTC m=+1.305145907"
Oct 9 01:01:00.638638 kubelet[2577]: E1009 01:01:00.638593 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Oct 9 01:01:00.642111 kubelet[2577]: E1009 01:01:00.640934 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Oct 9 01:01:02.085270 kubelet[2577]: E1009 01:01:02.085219 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Oct 9 01:01:02.413936 kubelet[2577]: E1009 01:01:02.413775 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Oct 9 01:01:02.881961 kubelet[2577]: I1009 01:01:02.881893 2577 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Oct 9 01:01:02.883782 containerd[1474]: time="2024-10-09T01:01:02.882674410Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Oct 9 01:01:02.884281 kubelet[2577]: I1009 01:01:02.882960 2577 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Oct 9 01:01:03.709286 systemd[1]: Created slice kubepods-besteffort-pod8828d03f_1b6c_497c_b9bd_311218182d28.slice - libcontainer container kubepods-besteffort-pod8828d03f_1b6c_497c_b9bd_311218182d28.slice.
Oct 9 01:01:03.725991 kubelet[2577]: I1009 01:01:03.725676 2577 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8828d03f-1b6c-497c-b9bd-311218182d28-xtables-lock\") pod \"kube-proxy-lc7nq\" (UID: \"8828d03f-1b6c-497c-b9bd-311218182d28\") " pod="kube-system/kube-proxy-lc7nq"
Oct 9 01:01:03.725991 kubelet[2577]: I1009 01:01:03.725740 2577 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8828d03f-1b6c-497c-b9bd-311218182d28-lib-modules\") pod \"kube-proxy-lc7nq\" (UID: \"8828d03f-1b6c-497c-b9bd-311218182d28\") " pod="kube-system/kube-proxy-lc7nq"
Oct 9 01:01:03.725991 kubelet[2577]: I1009 01:01:03.725777 2577 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24mqs\" (UniqueName: \"kubernetes.io/projected/8828d03f-1b6c-497c-b9bd-311218182d28-kube-api-access-24mqs\") pod \"kube-proxy-lc7nq\" (UID: \"8828d03f-1b6c-497c-b9bd-311218182d28\") " pod="kube-system/kube-proxy-lc7nq"
Oct 9 01:01:03.725991 kubelet[2577]: I1009 01:01:03.725936 2577 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8828d03f-1b6c-497c-b9bd-311218182d28-kube-proxy\") pod \"kube-proxy-lc7nq\" (UID: \"8828d03f-1b6c-497c-b9bd-311218182d28\") " pod="kube-system/kube-proxy-lc7nq"
Oct 9 01:01:03.855008 systemd[1]: Created slice kubepods-besteffort-pod738dc206_4df8_4dd9_abbf_6f92587c660f.slice - libcontainer container kubepods-besteffort-pod738dc206_4df8_4dd9_abbf_6f92587c660f.slice.
Oct 9 01:01:03.926781 kubelet[2577]: I1009 01:01:03.926617 2577 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/738dc206-4df8-4dd9-abbf-6f92587c660f-var-lib-calico\") pod \"tigera-operator-55748b469f-8t9zb\" (UID: \"738dc206-4df8-4dd9-abbf-6f92587c660f\") " pod="tigera-operator/tigera-operator-55748b469f-8t9zb"
Oct 9 01:01:03.927114 kubelet[2577]: I1009 01:01:03.927019 2577 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wg879\" (UniqueName: \"kubernetes.io/projected/738dc206-4df8-4dd9-abbf-6f92587c660f-kube-api-access-wg879\") pod \"tigera-operator-55748b469f-8t9zb\" (UID: \"738dc206-4df8-4dd9-abbf-6f92587c660f\") " pod="tigera-operator/tigera-operator-55748b469f-8t9zb"
Oct 9 01:01:04.022335 kubelet[2577]: E1009 01:01:04.021261 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Oct 9 01:01:04.025621 containerd[1474]: time="2024-10-09T01:01:04.025573706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lc7nq,Uid:8828d03f-1b6c-497c-b9bd-311218182d28,Namespace:kube-system,Attempt:0,}"
Oct 9 01:01:04.082677 containerd[1474]: time="2024-10-09T01:01:04.082316240Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 9 01:01:04.082677 containerd[1474]: time="2024-10-09T01:01:04.082388126Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 9 01:01:04.082677 containerd[1474]: time="2024-10-09T01:01:04.082400272Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 01:01:04.082677 containerd[1474]: time="2024-10-09T01:01:04.082526673Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 01:01:04.112081 systemd[1]: Started cri-containerd-c414ce92766766f3d1a31f2ea897a94f9b94ce6888feb3c1b3abe3a8333868aa.scope - libcontainer container c414ce92766766f3d1a31f2ea897a94f9b94ce6888feb3c1b3abe3a8333868aa.
Oct 9 01:01:04.149526 containerd[1474]: time="2024-10-09T01:01:04.149399607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lc7nq,Uid:8828d03f-1b6c-497c-b9bd-311218182d28,Namespace:kube-system,Attempt:0,} returns sandbox id \"c414ce92766766f3d1a31f2ea897a94f9b94ce6888feb3c1b3abe3a8333868aa\""
Oct 9 01:01:04.151231 kubelet[2577]: E1009 01:01:04.151176 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Oct 9 01:01:04.161066 containerd[1474]: time="2024-10-09T01:01:04.160195488Z" level=info msg="CreateContainer within sandbox \"c414ce92766766f3d1a31f2ea897a94f9b94ce6888feb3c1b3abe3a8333868aa\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Oct 9 01:01:04.162160 containerd[1474]: time="2024-10-09T01:01:04.162055697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-55748b469f-8t9zb,Uid:738dc206-4df8-4dd9-abbf-6f92587c660f,Namespace:tigera-operator,Attempt:0,}"
Oct 9 01:01:04.202512 containerd[1474]: time="2024-10-09T01:01:04.201585523Z" level=info msg="CreateContainer within sandbox \"c414ce92766766f3d1a31f2ea897a94f9b94ce6888feb3c1b3abe3a8333868aa\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1fc9a5ff4763378d1639bef3dc15f1db22035a9c75e956df1587e54588659529\""
Oct 9 01:01:04.204297 containerd[1474]: time="2024-10-09T01:01:04.204217712Z" level=info msg="StartContainer for \"1fc9a5ff4763378d1639bef3dc15f1db22035a9c75e956df1587e54588659529\""
Oct 9 01:01:04.225132 containerd[1474]: time="2024-10-09T01:01:04.224943253Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 9 01:01:04.225132 containerd[1474]: time="2024-10-09T01:01:04.225055562Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 9 01:01:04.225132 containerd[1474]: time="2024-10-09T01:01:04.225083607Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 01:01:04.226173 containerd[1474]: time="2024-10-09T01:01:04.225923966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 01:01:04.259770 systemd[1]: Started cri-containerd-babeeaaf79f1b603e2307c82532d11fd0777b5c86dae1e183fe99bd67f5ea26d.scope - libcontainer container babeeaaf79f1b603e2307c82532d11fd0777b5c86dae1e183fe99bd67f5ea26d.
Oct 9 01:01:04.266791 systemd[1]: Started cri-containerd-1fc9a5ff4763378d1639bef3dc15f1db22035a9c75e956df1587e54588659529.scope - libcontainer container 1fc9a5ff4763378d1639bef3dc15f1db22035a9c75e956df1587e54588659529.
Oct 9 01:01:04.331711 containerd[1474]: time="2024-10-09T01:01:04.330904879Z" level=info msg="StartContainer for \"1fc9a5ff4763378d1639bef3dc15f1db22035a9c75e956df1587e54588659529\" returns successfully"
Oct 9 01:01:04.354326 containerd[1474]: time="2024-10-09T01:01:04.354036321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-55748b469f-8t9zb,Uid:738dc206-4df8-4dd9-abbf-6f92587c660f,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"babeeaaf79f1b603e2307c82532d11fd0777b5c86dae1e183fe99bd67f5ea26d\""
Oct 9 01:01:04.357726 containerd[1474]: time="2024-10-09T01:01:04.357496943Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\""
Oct 9 01:01:04.659170 kubelet[2577]: E1009 01:01:04.659125 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Oct 9 01:01:04.754081 sudo[1676]: pam_unix(sudo:session): session closed for user root
Oct 9 01:01:04.760319 sshd[1673]: pam_unix(sshd:session): session closed for user core
Oct 9 01:01:04.765892 systemd[1]: sshd@8-165.232.149.110:22-139.178.68.195:35700.service: Deactivated successfully.
Oct 9 01:01:04.771629 systemd[1]: session-9.scope: Deactivated successfully.
Oct 9 01:01:04.771986 systemd[1]: session-9.scope: Consumed 4.999s CPU time, 147.2M memory peak, 0B memory swap peak.
Oct 9 01:01:04.773138 systemd-logind[1456]: Session 9 logged out. Waiting for processes to exit.
Oct 9 01:01:04.776827 systemd-logind[1456]: Removed session 9.
Oct 9 01:01:07.328177 update_engine[1457]: I20241009 01:01:07.328003 1457 update_attempter.cc:509] Updating boot flags...
Oct 9 01:01:07.367592 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2908)
Oct 9 01:01:07.443565 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2907)
Oct 9 01:01:07.507596 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2907)
Oct 9 01:01:09.625645 kubelet[2577]: E1009 01:01:09.625595 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Oct 9 01:01:09.638411 kubelet[2577]: I1009 01:01:09.638341 2577 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lc7nq" podStartSLOduration=6.638319751 podStartE2EDuration="6.638319751s" podCreationTimestamp="2024-10-09 01:01:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 01:01:04.680823352 +0000 UTC m=+6.214656879" watchObservedRunningTime="2024-10-09 01:01:09.638319751 +0000 UTC m=+11.172153280"
Oct 9 01:01:12.083404 kubelet[2577]: E1009 01:01:12.081925 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Oct 9 01:01:12.417601 kubelet[2577]: E1009 01:01:12.417548 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Oct 9 01:01:12.661026 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1151885947.mount: Deactivated successfully.
Oct 9 01:01:13.203586 containerd[1474]: time="2024-10-09T01:01:13.203512069Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:01:13.205224 containerd[1474]: time="2024-10-09T01:01:13.204798101Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.3: active requests=0, bytes read=22136557"
Oct 9 01:01:13.207476 containerd[1474]: time="2024-10-09T01:01:13.206198405Z" level=info msg="ImageCreate event name:\"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:01:13.209626 containerd[1474]: time="2024-10-09T01:01:13.209572476Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:01:13.211065 containerd[1474]: time="2024-10-09T01:01:13.211014247Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.3\" with image id \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\", repo tag \"quay.io/tigera/operator:v1.34.3\", repo digest \"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\", size \"22130728\" in 8.853435523s"
Oct 9 01:01:13.211065 containerd[1474]: time="2024-10-09T01:01:13.211056603Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\" returns image reference \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\""
Oct 9 01:01:13.218747 containerd[1474]: time="2024-10-09T01:01:13.218703688Z" level=info msg="CreateContainer within sandbox \"babeeaaf79f1b603e2307c82532d11fd0777b5c86dae1e183fe99bd67f5ea26d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Oct 9 01:01:13.243380 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1810939105.mount: Deactivated successfully.
Oct 9 01:01:13.244970 containerd[1474]: time="2024-10-09T01:01:13.244694613Z" level=info msg="CreateContainer within sandbox \"babeeaaf79f1b603e2307c82532d11fd0777b5c86dae1e183fe99bd67f5ea26d\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"9f0a82885ab7a87155729becbff21d88f5c0041e717534aa64212e00ee4e968b\""
Oct 9 01:01:13.247110 containerd[1474]: time="2024-10-09T01:01:13.245603961Z" level=info msg="StartContainer for \"9f0a82885ab7a87155729becbff21d88f5c0041e717534aa64212e00ee4e968b\""
Oct 9 01:01:13.286695 systemd[1]: Started cri-containerd-9f0a82885ab7a87155729becbff21d88f5c0041e717534aa64212e00ee4e968b.scope - libcontainer container 9f0a82885ab7a87155729becbff21d88f5c0041e717534aa64212e00ee4e968b.
Oct 9 01:01:13.329811 containerd[1474]: time="2024-10-09T01:01:13.329647376Z" level=info msg="StartContainer for \"9f0a82885ab7a87155729becbff21d88f5c0041e717534aa64212e00ee4e968b\" returns successfully"
Oct 9 01:01:16.774402 kubelet[2577]: I1009 01:01:16.774223 2577 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-55748b469f-8t9zb" podStartSLOduration=4.915328492 podStartE2EDuration="13.77419989s" podCreationTimestamp="2024-10-09 01:01:03 +0000 UTC" firstStartedPulling="2024-10-09 01:01:04.356213656 +0000 UTC m=+5.890047171" lastFinishedPulling="2024-10-09 01:01:13.215085044 +0000 UTC m=+14.748918569" observedRunningTime="2024-10-09 01:01:13.696543242 +0000 UTC m=+15.230376769" watchObservedRunningTime="2024-10-09 01:01:16.77419989 +0000 UTC m=+18.308033418"
Oct 9 01:01:16.786355 systemd[1]: Created slice kubepods-besteffort-pod9ba05822_9ad4_43c5_a1c9_9ab376107e1b.slice - libcontainer container kubepods-besteffort-pod9ba05822_9ad4_43c5_a1c9_9ab376107e1b.slice.
Oct 9 01:01:16.916097 kubelet[2577]: I1009 01:01:16.915426 2577 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/9ba05822-9ad4-43c5-a1c9-9ab376107e1b-typha-certs\") pod \"calico-typha-5bc56654c7-7d7s5\" (UID: \"9ba05822-9ad4-43c5-a1c9-9ab376107e1b\") " pod="calico-system/calico-typha-5bc56654c7-7d7s5"
Oct 9 01:01:16.916097 kubelet[2577]: I1009 01:01:16.915582 2577 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3d2e3f5d-2bd4-46cb-97e5-c9bd8812c321-xtables-lock\") pod \"calico-node-xtblv\" (UID: \"3d2e3f5d-2bd4-46cb-97e5-c9bd8812c321\") " pod="calico-system/calico-node-xtblv"
Oct 9 01:01:16.916097 kubelet[2577]: I1009 01:01:16.915611 2577 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/3d2e3f5d-2bd4-46cb-97e5-c9bd8812c321-node-certs\") pod \"calico-node-xtblv\" (UID: \"3d2e3f5d-2bd4-46cb-97e5-c9bd8812c321\") " pod="calico-system/calico-node-xtblv"
Oct 9 01:01:16.916097 kubelet[2577]: I1009 01:01:16.915626 2577 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3d2e3f5d-2bd4-46cb-97e5-c9bd8812c321-var-lib-calico\") pod \"calico-node-xtblv\" (UID: \"3d2e3f5d-2bd4-46cb-97e5-c9bd8812c321\") " pod="calico-system/calico-node-xtblv"
Oct 9 01:01:16.916097 kubelet[2577]: I1009 01:01:16.915647 2577 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2lvl\" (UniqueName: \"kubernetes.io/projected/3d2e3f5d-2bd4-46cb-97e5-c9bd8812c321-kube-api-access-v2lvl\") pod \"calico-node-xtblv\" (UID: \"3d2e3f5d-2bd4-46cb-97e5-c9bd8812c321\") " pod="calico-system/calico-node-xtblv"
Oct 9 01:01:16.916658 kubelet[2577]: I1009 01:01:16.915668 2577 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d2e3f5d-2bd4-46cb-97e5-c9bd8812c321-tigera-ca-bundle\") pod \"calico-node-xtblv\" (UID: \"3d2e3f5d-2bd4-46cb-97e5-c9bd8812c321\") " pod="calico-system/calico-node-xtblv"
Oct 9 01:01:16.916658 kubelet[2577]: I1009 01:01:16.915843 2577 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/3d2e3f5d-2bd4-46cb-97e5-c9bd8812c321-cni-net-dir\") pod \"calico-node-xtblv\" (UID: \"3d2e3f5d-2bd4-46cb-97e5-c9bd8812c321\") " pod="calico-system/calico-node-xtblv"
Oct 9 01:01:16.916658 kubelet[2577]: I1009 01:01:16.915859 2577 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/3d2e3f5d-2bd4-46cb-97e5-c9bd8812c321-flexvol-driver-host\") pod \"calico-node-xtblv\" (UID: \"3d2e3f5d-2bd4-46cb-97e5-c9bd8812c321\") " pod="calico-system/calico-node-xtblv"
Oct 9 01:01:16.916658 kubelet[2577]: I1009 01:01:16.915876 2577 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/3d2e3f5d-2bd4-46cb-97e5-c9bd8812c321-cni-log-dir\") pod \"calico-node-xtblv\" (UID: \"3d2e3f5d-2bd4-46cb-97e5-c9bd8812c321\") " pod="calico-system/calico-node-xtblv"
Oct 9 01:01:16.916658 kubelet[2577]: I1009 01:01:16.916189 2577 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ba05822-9ad4-43c5-a1c9-9ab376107e1b-tigera-ca-bundle\") pod \"calico-typha-5bc56654c7-7d7s5\" (UID: \"9ba05822-9ad4-43c5-a1c9-9ab376107e1b\") " pod="calico-system/calico-typha-5bc56654c7-7d7s5"
Oct 9 01:01:16.916875 kubelet[2577]: I1009 01:01:16.916208 2577 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sqpk\" (UniqueName: \"kubernetes.io/projected/9ba05822-9ad4-43c5-a1c9-9ab376107e1b-kube-api-access-9sqpk\") pod \"calico-typha-5bc56654c7-7d7s5\" (UID: \"9ba05822-9ad4-43c5-a1c9-9ab376107e1b\") " pod="calico-system/calico-typha-5bc56654c7-7d7s5"
Oct 9 01:01:16.919478 kubelet[2577]: I1009 01:01:16.916229 2577 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/3d2e3f5d-2bd4-46cb-97e5-c9bd8812c321-cni-bin-dir\") pod \"calico-node-xtblv\" (UID: \"3d2e3f5d-2bd4-46cb-97e5-c9bd8812c321\") " pod="calico-system/calico-node-xtblv"
Oct 9 01:01:16.919478 kubelet[2577]: I1009 01:01:16.917343 2577 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3d2e3f5d-2bd4-46cb-97e5-c9bd8812c321-lib-modules\") pod \"calico-node-xtblv\" (UID: \"3d2e3f5d-2bd4-46cb-97e5-c9bd8812c321\") " pod="calico-system/calico-node-xtblv"
Oct 9 01:01:16.919478 kubelet[2577]: I1009 01:01:16.917637 2577 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/3d2e3f5d-2bd4-46cb-97e5-c9bd8812c321-var-run-calico\") pod \"calico-node-xtblv\" (UID: \"3d2e3f5d-2bd4-46cb-97e5-c9bd8812c321\") " pod="calico-system/calico-node-xtblv"
Oct 9 01:01:16.919478 kubelet[2577]: I1009 01:01:16.917658 2577 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/3d2e3f5d-2bd4-46cb-97e5-c9bd8812c321-policysync\") pod \"calico-node-xtblv\" (UID: \"3d2e3f5d-2bd4-46cb-97e5-c9bd8812c321\") " pod="calico-system/calico-node-xtblv"
Oct 9 01:01:16.930919 systemd[1]: Created slice kubepods-besteffort-pod3d2e3f5d_2bd4_46cb_97e5_c9bd8812c321.slice - libcontainer container kubepods-besteffort-pod3d2e3f5d_2bd4_46cb_97e5_c9bd8812c321.slice.
Oct 9 01:01:17.035335 kubelet[2577]: E1009 01:01:17.034566 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:01:17.035335 kubelet[2577]: W1009 01:01:17.035115 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:01:17.039955 kubelet[2577]: E1009 01:01:17.037832 2577 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 01:01:17.043250 kubelet[2577]: E1009 01:01:17.042841 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:01:17.043250 kubelet[2577]: W1009 01:01:17.042881 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:01:17.043250 kubelet[2577]: E1009 01:01:17.042919 2577 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 01:01:17.044625 kubelet[2577]: E1009 01:01:17.044198 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:01:17.044625 kubelet[2577]: W1009 01:01:17.044239 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:01:17.044625 kubelet[2577]: E1009 01:01:17.044562 2577 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 01:01:17.051325 kubelet[2577]: E1009 01:01:17.051046 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:01:17.051325 kubelet[2577]: W1009 01:01:17.051135 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:01:17.051325 kubelet[2577]: E1009 01:01:17.051258 2577 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 01:01:17.060835 kubelet[2577]: E1009 01:01:17.060775 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:01:17.062582 kubelet[2577]: W1009 01:01:17.062518 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:01:17.063009 kubelet[2577]: E1009 01:01:17.062951 2577 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 01:01:17.067530 kubelet[2577]: E1009 01:01:17.067392 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:01:17.067530 kubelet[2577]: W1009 01:01:17.067477 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:01:17.068131 kubelet[2577]: E1009 01:01:17.067898 2577 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 01:01:17.070615 kubelet[2577]: E1009 01:01:17.070465 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:01:17.070615 kubelet[2577]: W1009 01:01:17.070506 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:01:17.071105 kubelet[2577]: E1009 01:01:17.070812 2577 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 01:01:17.078069 kubelet[2577]: E1009 01:01:17.076834 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:01:17.078069 kubelet[2577]: W1009 01:01:17.076872 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:01:17.078687 kubelet[2577]: E1009 01:01:17.078484 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:01:17.078687 kubelet[2577]: W1009 01:01:17.078517 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:01:17.079312 kubelet[2577]: E1009 01:01:17.079234 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:01:17.079312 kubelet[2577]: W1009 01:01:17.079258 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init],
error: executable file not found in $PATH, output: "" Oct 9 01:01:17.082478 kubelet[2577]: E1009 01:01:17.082384 2577 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:01:17.082929 kubelet[2577]: E1009 01:01:17.082789 2577 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:01:17.082929 kubelet[2577]: E1009 01:01:17.082827 2577 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:01:17.082929 kubelet[2577]: E1009 01:01:17.082896 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p8pzw" podUID="e8f89e29-cfb4-4b5e-a7cb-66ed0f9162bb" Oct 9 01:01:17.083251 kubelet[2577]: E1009 01:01:17.082068 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:01:17.083251 kubelet[2577]: W1009 01:01:17.083106 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:01:17.084252 kubelet[2577]: E1009 01:01:17.084075 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:01:17.084252 kubelet[2577]: W1009 01:01:17.084093 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in 
$PATH, output: "" Oct 9 01:01:17.084252 kubelet[2577]: E1009 01:01:17.084116 2577 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:01:17.084589 kubelet[2577]: E1009 01:01:17.084570 2577 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:01:17.084820 kubelet[2577]: E1009 01:01:17.084732 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:01:17.084820 kubelet[2577]: W1009 01:01:17.084776 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:01:17.084820 kubelet[2577]: E1009 01:01:17.084788 2577 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:01:17.085647 kubelet[2577]: E1009 01:01:17.085474 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:01:17.085647 kubelet[2577]: W1009 01:01:17.085491 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:01:17.085647 kubelet[2577]: E1009 01:01:17.085505 2577 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:01:17.087204 kubelet[2577]: E1009 01:01:17.085886 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:01:17.087204 kubelet[2577]: W1009 01:01:17.085900 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:01:17.087204 kubelet[2577]: E1009 01:01:17.085913 2577 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:01:17.088462 kubelet[2577]: E1009 01:01:17.088416 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:01:17.088695 kubelet[2577]: W1009 01:01:17.088620 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:01:17.088695 kubelet[2577]: E1009 01:01:17.088658 2577 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:01:17.096514 kubelet[2577]: E1009 01:01:17.096030 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 01:01:17.096791 containerd[1474]: time="2024-10-09T01:01:17.096709768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5bc56654c7-7d7s5,Uid:9ba05822-9ad4-43c5-a1c9-9ab376107e1b,Namespace:calico-system,Attempt:0,}" Oct 9 01:01:17.110793 kubelet[2577]: E1009 01:01:17.110697 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:01:17.110793 kubelet[2577]: W1009 01:01:17.110724 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:01:17.110793 kubelet[2577]: E1009 01:01:17.110745 2577 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:01:17.117807 kubelet[2577]: E1009 01:01:17.117522 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:01:17.117807 kubelet[2577]: W1009 01:01:17.117547 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:01:17.117807 kubelet[2577]: E1009 01:01:17.117571 2577 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:01:17.119027 kubelet[2577]: E1009 01:01:17.118855 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:01:17.119027 kubelet[2577]: W1009 01:01:17.118878 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:01:17.119027 kubelet[2577]: E1009 01:01:17.118901 2577 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:01:17.120246 kubelet[2577]: E1009 01:01:17.120099 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:01:17.120246 kubelet[2577]: W1009 01:01:17.120125 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:01:17.120246 kubelet[2577]: E1009 01:01:17.120162 2577 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:01:17.123234 kubelet[2577]: E1009 01:01:17.123207 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:01:17.124102 kubelet[2577]: W1009 01:01:17.124024 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:01:17.124102 kubelet[2577]: E1009 01:01:17.124059 2577 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:01:17.127475 kubelet[2577]: E1009 01:01:17.127323 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:01:17.127475 kubelet[2577]: W1009 01:01:17.127430 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:01:17.128644 kubelet[2577]: E1009 01:01:17.128487 2577 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:01:17.128644 kubelet[2577]: I1009 01:01:17.128548 2577 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/e8f89e29-cfb4-4b5e-a7cb-66ed0f9162bb-varrun\") pod \"csi-node-driver-p8pzw\" (UID: \"e8f89e29-cfb4-4b5e-a7cb-66ed0f9162bb\") " pod="calico-system/csi-node-driver-p8pzw" Oct 9 01:01:17.130300 kubelet[2577]: E1009 01:01:17.130101 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:01:17.130300 kubelet[2577]: W1009 01:01:17.130135 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:01:17.130300 kubelet[2577]: E1009 01:01:17.130165 2577 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:01:17.130856 kubelet[2577]: E1009 01:01:17.130841 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:01:17.131018 kubelet[2577]: W1009 01:01:17.130893 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:01:17.131018 kubelet[2577]: E1009 01:01:17.130915 2577 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:01:17.131487 kubelet[2577]: E1009 01:01:17.131398 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:01:17.131487 kubelet[2577]: W1009 01:01:17.131416 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:01:17.131774 kubelet[2577]: E1009 01:01:17.131756 2577 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:01:17.132066 kubelet[2577]: E1009 01:01:17.132024 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:01:17.132066 kubelet[2577]: W1009 01:01:17.132039 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:01:17.132379 kubelet[2577]: E1009 01:01:17.132273 2577 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:01:17.132519 kubelet[2577]: E1009 01:01:17.132509 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:01:17.132596 kubelet[2577]: W1009 01:01:17.132573 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:01:17.133063 kubelet[2577]: E1009 01:01:17.132641 2577 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:01:17.133566 kubelet[2577]: E1009 01:01:17.133469 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:01:17.133566 kubelet[2577]: W1009 01:01:17.133484 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:01:17.133566 kubelet[2577]: E1009 01:01:17.133514 2577 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:01:17.134029 kubelet[2577]: E1009 01:01:17.133934 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:01:17.134029 kubelet[2577]: W1009 01:01:17.133947 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:01:17.134029 kubelet[2577]: E1009 01:01:17.133959 2577 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:01:17.134364 kubelet[2577]: E1009 01:01:17.134326 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:01:17.134552 kubelet[2577]: W1009 01:01:17.134534 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:01:17.134715 kubelet[2577]: E1009 01:01:17.134702 2577 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:01:17.135353 kubelet[2577]: E1009 01:01:17.135339 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:01:17.135528 kubelet[2577]: W1009 01:01:17.135410 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:01:17.135528 kubelet[2577]: E1009 01:01:17.135425 2577 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:01:17.136452 kubelet[2577]: E1009 01:01:17.136013 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:01:17.136452 kubelet[2577]: W1009 01:01:17.136026 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:01:17.136452 kubelet[2577]: E1009 01:01:17.136037 2577 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:01:17.136901 kubelet[2577]: E1009 01:01:17.136887 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:01:17.137088 kubelet[2577]: W1009 01:01:17.136988 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:01:17.137088 kubelet[2577]: E1009 01:01:17.137006 2577 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:01:17.137706 kubelet[2577]: E1009 01:01:17.137552 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:01:17.137706 kubelet[2577]: W1009 01:01:17.137563 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:01:17.138474 kubelet[2577]: E1009 01:01:17.138343 2577 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:01:17.139102 kubelet[2577]: E1009 01:01:17.138843 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:01:17.139102 kubelet[2577]: W1009 01:01:17.138857 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:01:17.139102 kubelet[2577]: E1009 01:01:17.138875 2577 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:01:17.142005 kubelet[2577]: E1009 01:01:17.139716 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:01:17.142005 kubelet[2577]: W1009 01:01:17.139730 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:01:17.142005 kubelet[2577]: E1009 01:01:17.139751 2577 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:01:17.144530 kubelet[2577]: E1009 01:01:17.143670 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:01:17.144530 kubelet[2577]: W1009 01:01:17.143695 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:01:17.144530 kubelet[2577]: E1009 01:01:17.143720 2577 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:01:17.147038 kubelet[2577]: E1009 01:01:17.145494 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:01:17.147038 kubelet[2577]: W1009 01:01:17.145521 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:01:17.147038 kubelet[2577]: E1009 01:01:17.145546 2577 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:01:17.148031 kubelet[2577]: E1009 01:01:17.147472 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:01:17.148031 kubelet[2577]: W1009 01:01:17.147494 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:01:17.148031 kubelet[2577]: E1009 01:01:17.147534 2577 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:01:17.150643 kubelet[2577]: E1009 01:01:17.150475 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:01:17.150643 kubelet[2577]: W1009 01:01:17.150517 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:01:17.150643 kubelet[2577]: E1009 01:01:17.150544 2577 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:01:17.164222 containerd[1474]: time="2024-10-09T01:01:17.163306969Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:01:17.164222 containerd[1474]: time="2024-10-09T01:01:17.163391353Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:01:17.164222 containerd[1474]: time="2024-10-09T01:01:17.163404225Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:01:17.164222 containerd[1474]: time="2024-10-09T01:01:17.163515428Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:01:17.210248 systemd[1]: Started cri-containerd-98b41e0e341b771416fd3589fed5bf75719a5f201cd90c6540444e0ea0ee5766.scope - libcontainer container 98b41e0e341b771416fd3589fed5bf75719a5f201cd90c6540444e0ea0ee5766. Oct 9 01:01:17.229414 kubelet[2577]: E1009 01:01:17.229260 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:01:17.229414 kubelet[2577]: W1009 01:01:17.229285 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:01:17.229414 kubelet[2577]: E1009 01:01:17.229334 2577 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:01:17.229414 kubelet[2577]: I1009 01:01:17.229390 2577 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e8f89e29-cfb4-4b5e-a7cb-66ed0f9162bb-kubelet-dir\") pod \"csi-node-driver-p8pzw\" (UID: \"e8f89e29-cfb4-4b5e-a7cb-66ed0f9162bb\") " pod="calico-system/csi-node-driver-p8pzw" Oct 9 01:01:17.232969 kubelet[2577]: E1009 01:01:17.232678 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:01:17.232969 kubelet[2577]: W1009 01:01:17.232740 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:01:17.232969 kubelet[2577]: E1009 01:01:17.232772 2577 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:01:17.233974 kubelet[2577]: E1009 01:01:17.233356 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:01:17.233974 kubelet[2577]: W1009 01:01:17.233509 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:01:17.237151 kubelet[2577]: E1009 01:01:17.234128 2577 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:01:17.238528 kubelet[2577]: E1009 01:01:17.237844 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 01:01:17.238871 containerd[1474]: time="2024-10-09T01:01:17.238843156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xtblv,Uid:3d2e3f5d-2bd4-46cb-97e5-c9bd8812c321,Namespace:calico-system,Attempt:0,}" Oct 9 01:01:17.239198 kubelet[2577]: E1009 01:01:17.239055 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:01:17.240109 kubelet[2577]: W1009 01:01:17.239352 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:01:17.240109 kubelet[2577]: E1009 01:01:17.239383 2577 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Oct 9 01:01:17.240109 kubelet[2577]: I1009 01:01:17.239441 2577 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e8f89e29-cfb4-4b5e-a7cb-66ed0f9162bb-registration-dir\") pod \"csi-node-driver-p8pzw\" (UID: \"e8f89e29-cfb4-4b5e-a7cb-66ed0f9162bb\") " pod="calico-system/csi-node-driver-p8pzw"
Oct 9 01:01:17.241920 kubelet[2577]: E1009 01:01:17.241795 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:01:17.241920 kubelet[2577]: W1009 01:01:17.241826 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:01:17.241920 kubelet[2577]: E1009 01:01:17.241867 2577 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 01:01:17.247940 kubelet[2577]: I1009 01:01:17.247801 2577 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e8f89e29-cfb4-4b5e-a7cb-66ed0f9162bb-socket-dir\") pod \"csi-node-driver-p8pzw\" (UID: \"e8f89e29-cfb4-4b5e-a7cb-66ed0f9162bb\") " pod="calico-system/csi-node-driver-p8pzw"
Oct 9 01:01:17.254516 kubelet[2577]: I1009 01:01:17.254369 2577 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gv5h2\" (UniqueName: \"kubernetes.io/projected/e8f89e29-cfb4-4b5e-a7cb-66ed0f9162bb-kube-api-access-gv5h2\") pod \"csi-node-driver-p8pzw\" (UID: \"e8f89e29-cfb4-4b5e-a7cb-66ed0f9162bb\") " pod="calico-system/csi-node-driver-p8pzw"
Oct 9 01:01:17.322750 containerd[1474]: time="2024-10-09T01:01:17.322563332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5bc56654c7-7d7s5,Uid:9ba05822-9ad4-43c5-a1c9-9ab376107e1b,Namespace:calico-system,Attempt:0,} returns sandbox id \"98b41e0e341b771416fd3589fed5bf75719a5f201cd90c6540444e0ea0ee5766\""
Oct 9 01:01:17.329351 kubelet[2577]: E1009 01:01:17.328874 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Oct 9 01:01:17.335175 containerd[1474]: time="2024-10-09T01:01:17.335120975Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\""
Oct 9 01:01:17.357961 containerd[1474]: time="2024-10-09T01:01:17.357672164Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 9 01:01:17.357961 containerd[1474]: time="2024-10-09T01:01:17.357852671Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 9 01:01:17.357961 containerd[1474]: time="2024-10-09T01:01:17.357935369Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 01:01:17.360741 containerd[1474]: time="2024-10-09T01:01:17.360538279Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 01:01:17.394840 systemd[1]: Started cri-containerd-0c470cdda0053f9bef9217c11f754eaabff7d13a1e0219f76dc7020e319feb11.scope - libcontainer container 0c470cdda0053f9bef9217c11f754eaabff7d13a1e0219f76dc7020e319feb11.
Oct 9 01:01:17.418511 kubelet[2577]: E1009 01:01:17.417359 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:01:17.418511 kubelet[2577]: W1009 01:01:17.417566 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:01:17.418511 kubelet[2577]: E1009 01:01:17.417603 2577 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:01:17.443086 containerd[1474]: time="2024-10-09T01:01:17.442998971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xtblv,Uid:3d2e3f5d-2bd4-46cb-97e5-c9bd8812c321,Namespace:calico-system,Attempt:0,} returns sandbox id \"0c470cdda0053f9bef9217c11f754eaabff7d13a1e0219f76dc7020e319feb11\"" Oct 9 01:01:17.449212 kubelet[2577]: E1009 01:01:17.447514 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 01:01:18.602486 kubelet[2577]: E1009 01:01:18.600912 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p8pzw" podUID="e8f89e29-cfb4-4b5e-a7cb-66ed0f9162bb" Oct 9 01:01:19.929848 containerd[1474]: time="2024-10-09T01:01:19.929790718Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:01:19.931123 containerd[1474]: time="2024-10-09T01:01:19.931042200Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.1: 
active requests=0, bytes read=29471335" Oct 9 01:01:19.935979 containerd[1474]: time="2024-10-09T01:01:19.935233822Z" level=info msg="ImageCreate event name:\"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:01:19.984991 containerd[1474]: time="2024-10-09T01:01:19.984062303Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:01:19.986677 containerd[1474]: time="2024-10-09T01:01:19.984963288Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.1\" with image id \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\", size \"30963728\" in 2.64835221s" Oct 9 01:01:19.986920 containerd[1474]: time="2024-10-09T01:01:19.986857756Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\" returns image reference \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\"" Oct 9 01:01:19.993062 containerd[1474]: time="2024-10-09T01:01:19.993016760Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\"" Oct 9 01:01:20.035869 containerd[1474]: time="2024-10-09T01:01:20.035643071Z" level=info msg="CreateContainer within sandbox \"98b41e0e341b771416fd3589fed5bf75719a5f201cd90c6540444e0ea0ee5766\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 9 01:01:20.110021 containerd[1474]: time="2024-10-09T01:01:20.109968732Z" level=info msg="CreateContainer within sandbox \"98b41e0e341b771416fd3589fed5bf75719a5f201cd90c6540444e0ea0ee5766\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id 
\"d4fb8bd1ef8e83c938acf480af8a1d1341712f9c7c6a790b3c89838623e09826\"" Oct 9 01:01:20.111328 containerd[1474]: time="2024-10-09T01:01:20.111273117Z" level=info msg="StartContainer for \"d4fb8bd1ef8e83c938acf480af8a1d1341712f9c7c6a790b3c89838623e09826\"" Oct 9 01:01:20.196683 systemd[1]: Started cri-containerd-d4fb8bd1ef8e83c938acf480af8a1d1341712f9c7c6a790b3c89838623e09826.scope - libcontainer container d4fb8bd1ef8e83c938acf480af8a1d1341712f9c7c6a790b3c89838623e09826. Oct 9 01:01:20.297270 containerd[1474]: time="2024-10-09T01:01:20.297206688Z" level=info msg="StartContainer for \"d4fb8bd1ef8e83c938acf480af8a1d1341712f9c7c6a790b3c89838623e09826\" returns successfully" Oct 9 01:01:20.605755 kubelet[2577]: E1009 01:01:20.605682 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p8pzw" podUID="e8f89e29-cfb4-4b5e-a7cb-66ed0f9162bb" Oct 9 01:01:20.731723 kubelet[2577]: E1009 01:01:20.731002 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 01:01:20.792929 kubelet[2577]: E1009 01:01:20.792856 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:01:20.792929 kubelet[2577]: W1009 01:01:20.792920 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:01:20.793270 kubelet[2577]: E1009 01:01:20.792956 2577 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:01:20.793417 kubelet[2577]: E1009 01:01:20.793366 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:01:20.793417 kubelet[2577]: W1009 01:01:20.793396 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:01:20.793575 kubelet[2577]: E1009 01:01:20.793470 2577 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:01:20.793766 kubelet[2577]: E1009 01:01:20.793747 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:01:20.793766 kubelet[2577]: W1009 01:01:20.793763 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:01:20.793852 kubelet[2577]: E1009 01:01:20.793777 2577 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:01:20.794037 kubelet[2577]: E1009 01:01:20.794020 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:01:20.794079 kubelet[2577]: W1009 01:01:20.794038 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:01:20.794079 kubelet[2577]: E1009 01:01:20.794052 2577 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:01:20.794318 kubelet[2577]: E1009 01:01:20.794302 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:01:20.794318 kubelet[2577]: W1009 01:01:20.794315 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:01:20.794509 kubelet[2577]: E1009 01:01:20.794329 2577 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:01:21.460200 containerd[1474]: time="2024-10-09T01:01:21.460108426Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:01:21.461659 containerd[1474]: time="2024-10-09T01:01:21.461608910Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1: active requests=0, bytes read=5141007" Oct 9 01:01:21.462839 containerd[1474]: time="2024-10-09T01:01:21.462804811Z" level=info msg="ImageCreate event name:\"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:01:21.465284 containerd[1474]: time="2024-10-09T01:01:21.465210295Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:01:21.466804 containerd[1474]: time="2024-10-09T01:01:21.466759552Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" with image id \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\", size \"6633368\" in 1.473408917s" Oct 9 01:01:21.466804 containerd[1474]: time="2024-10-09T01:01:21.466810815Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" returns image reference \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\"" Oct 9 01:01:21.470203 containerd[1474]: time="2024-10-09T01:01:21.470157342Z" level=info msg="CreateContainer within sandbox \"0c470cdda0053f9bef9217c11f754eaabff7d13a1e0219f76dc7020e319feb11\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 9 01:01:21.503087 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1921506662.mount: Deactivated successfully. Oct 9 01:01:21.512137 containerd[1474]: time="2024-10-09T01:01:21.511755616Z" level=info msg="CreateContainer within sandbox \"0c470cdda0053f9bef9217c11f754eaabff7d13a1e0219f76dc7020e319feb11\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"966e3c4766fc9a5daa46d6da79efe6137dd10a48085b86064faf4cb094421efd\"" Oct 9 01:01:21.513397 containerd[1474]: time="2024-10-09T01:01:21.513357894Z" level=info msg="StartContainer for \"966e3c4766fc9a5daa46d6da79efe6137dd10a48085b86064faf4cb094421efd\"" Oct 9 01:01:21.588749 systemd[1]: Started cri-containerd-966e3c4766fc9a5daa46d6da79efe6137dd10a48085b86064faf4cb094421efd.scope - libcontainer container 966e3c4766fc9a5daa46d6da79efe6137dd10a48085b86064faf4cb094421efd. Oct 9 01:01:21.657635 containerd[1474]: time="2024-10-09T01:01:21.656938353Z" level=info msg="StartContainer for \"966e3c4766fc9a5daa46d6da79efe6137dd10a48085b86064faf4cb094421efd\" returns successfully" Oct 9 01:01:21.673711 systemd[1]: cri-containerd-966e3c4766fc9a5daa46d6da79efe6137dd10a48085b86064faf4cb094421efd.scope: Deactivated successfully. 
Oct 9 01:01:21.743584 kubelet[2577]: I1009 01:01:21.743202 2577 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 9 01:01:21.744176 kubelet[2577]: E1009 01:01:21.743617 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 01:01:21.744267 containerd[1474]: time="2024-10-09T01:01:21.744158247Z" level=info msg="shim disconnected" id=966e3c4766fc9a5daa46d6da79efe6137dd10a48085b86064faf4cb094421efd namespace=k8s.io Oct 9 01:01:21.744267 containerd[1474]: time="2024-10-09T01:01:21.744212032Z" level=warning msg="cleaning up after shim disconnected" id=966e3c4766fc9a5daa46d6da79efe6137dd10a48085b86064faf4cb094421efd namespace=k8s.io Oct 9 01:01:21.744267 containerd[1474]: time="2024-10-09T01:01:21.744220776Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 01:01:21.747199 kubelet[2577]: E1009 01:01:21.746750 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 01:01:21.777093 kubelet[2577]: I1009 01:01:21.776761 2577 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5bc56654c7-7d7s5" podStartSLOduration=3.117936006 podStartE2EDuration="5.776743237s" podCreationTimestamp="2024-10-09 01:01:16 +0000 UTC" firstStartedPulling="2024-10-09 01:01:17.333639915 +0000 UTC m=+18.867473436" lastFinishedPulling="2024-10-09 01:01:19.992447148 +0000 UTC m=+21.526280667" observedRunningTime="2024-10-09 01:01:20.755221301 +0000 UTC m=+22.289054839" watchObservedRunningTime="2024-10-09 01:01:21.776743237 +0000 UTC m=+23.310576789" Oct 9 01:01:22.486419 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-966e3c4766fc9a5daa46d6da79efe6137dd10a48085b86064faf4cb094421efd-rootfs.mount: Deactivated successfully. 
Oct 9 01:01:22.602090 kubelet[2577]: E1009 01:01:22.600812 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p8pzw" podUID="e8f89e29-cfb4-4b5e-a7cb-66ed0f9162bb" Oct 9 01:01:22.748945 kubelet[2577]: E1009 01:01:22.748799 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 01:01:22.753986 containerd[1474]: time="2024-10-09T01:01:22.753838471Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\"" Oct 9 01:01:24.601499 kubelet[2577]: E1009 01:01:24.600929 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p8pzw" podUID="e8f89e29-cfb4-4b5e-a7cb-66ed0f9162bb" Oct 9 01:01:26.601337 kubelet[2577]: E1009 01:01:26.601269 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p8pzw" podUID="e8f89e29-cfb4-4b5e-a7cb-66ed0f9162bb" Oct 9 01:01:26.736098 containerd[1474]: time="2024-10-09T01:01:26.736017734Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:01:26.737652 containerd[1474]: time="2024-10-09T01:01:26.737366073Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.1: active requests=0, bytes read=93083736" Oct 9 01:01:26.739211 containerd[1474]: 
time="2024-10-09T01:01:26.738472493Z" level=info msg="ImageCreate event name:\"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:01:26.743459 containerd[1474]: time="2024-10-09T01:01:26.743385570Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:01:26.744683 containerd[1474]: time="2024-10-09T01:01:26.744626098Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.1\" with image id \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\", size \"94576137\" in 3.990015427s" Oct 9 01:01:26.744894 containerd[1474]: time="2024-10-09T01:01:26.744867707Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\" returns image reference \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\"" Oct 9 01:01:26.748774 containerd[1474]: time="2024-10-09T01:01:26.748718270Z" level=info msg="CreateContainer within sandbox \"0c470cdda0053f9bef9217c11f754eaabff7d13a1e0219f76dc7020e319feb11\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 9 01:01:26.797155 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1347157841.mount: Deactivated successfully. 
Oct 9 01:01:26.804752 containerd[1474]: time="2024-10-09T01:01:26.804634180Z" level=info msg="CreateContainer within sandbox \"0c470cdda0053f9bef9217c11f754eaabff7d13a1e0219f76dc7020e319feb11\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"5aa74b7807f2de0d50cd9fe886cbbe5969163188e9c330edd77da3947e38c65a\"" Oct 9 01:01:26.805125 containerd[1474]: time="2024-10-09T01:01:26.805086046Z" level=info msg="StartContainer for \"5aa74b7807f2de0d50cd9fe886cbbe5969163188e9c330edd77da3947e38c65a\"" Oct 9 01:01:26.926113 systemd[1]: Started cri-containerd-5aa74b7807f2de0d50cd9fe886cbbe5969163188e9c330edd77da3947e38c65a.scope - libcontainer container 5aa74b7807f2de0d50cd9fe886cbbe5969163188e9c330edd77da3947e38c65a. Oct 9 01:01:26.975176 containerd[1474]: time="2024-10-09T01:01:26.975062210Z" level=info msg="StartContainer for \"5aa74b7807f2de0d50cd9fe886cbbe5969163188e9c330edd77da3947e38c65a\" returns successfully" Oct 9 01:01:27.620664 systemd[1]: cri-containerd-5aa74b7807f2de0d50cd9fe886cbbe5969163188e9c330edd77da3947e38c65a.scope: Deactivated successfully. Oct 9 01:01:27.663075 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5aa74b7807f2de0d50cd9fe886cbbe5969163188e9c330edd77da3947e38c65a-rootfs.mount: Deactivated successfully. 
Oct 9 01:01:27.666357 containerd[1474]: time="2024-10-09T01:01:27.666211572Z" level=info msg="shim disconnected" id=5aa74b7807f2de0d50cd9fe886cbbe5969163188e9c330edd77da3947e38c65a namespace=k8s.io Oct 9 01:01:27.666357 containerd[1474]: time="2024-10-09T01:01:27.666349093Z" level=warning msg="cleaning up after shim disconnected" id=5aa74b7807f2de0d50cd9fe886cbbe5969163188e9c330edd77da3947e38c65a namespace=k8s.io Oct 9 01:01:27.666357 containerd[1474]: time="2024-10-09T01:01:27.666360149Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 01:01:27.732989 kubelet[2577]: I1009 01:01:27.732941 2577 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Oct 9 01:01:27.786000 kubelet[2577]: E1009 01:01:27.784468 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 01:01:27.786399 containerd[1474]: time="2024-10-09T01:01:27.786364976Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\"" Oct 9 01:01:27.796698 systemd[1]: Created slice kubepods-burstable-pod79b0b543_5f0c_4dfb_9dc8_bfadb1e6489f.slice - libcontainer container kubepods-burstable-pod79b0b543_5f0c_4dfb_9dc8_bfadb1e6489f.slice. Oct 9 01:01:27.811944 systemd[1]: Created slice kubepods-burstable-pod104dbbd7_e31c_46d2_8ae4_2ce3a9ced8ae.slice - libcontainer container kubepods-burstable-pod104dbbd7_e31c_46d2_8ae4_2ce3a9ced8ae.slice. Oct 9 01:01:27.833478 systemd[1]: Created slice kubepods-besteffort-podc458a646_0669_47f2_97fc_a34bf29c9bc5.slice - libcontainer container kubepods-besteffort-podc458a646_0669_47f2_97fc_a34bf29c9bc5.slice. 
Oct 9 01:01:27.858583 kubelet[2577]: I1009 01:01:27.858521 2577 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5d2rz\" (UniqueName: \"kubernetes.io/projected/c458a646-0669-47f2-97fc-a34bf29c9bc5-kube-api-access-5d2rz\") pod \"calico-kube-controllers-5d99d68f9d-swq27\" (UID: \"c458a646-0669-47f2-97fc-a34bf29c9bc5\") " pod="calico-system/calico-kube-controllers-5d99d68f9d-swq27" Oct 9 01:01:27.859471 kubelet[2577]: I1009 01:01:27.859011 2577 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c458a646-0669-47f2-97fc-a34bf29c9bc5-tigera-ca-bundle\") pod \"calico-kube-controllers-5d99d68f9d-swq27\" (UID: \"c458a646-0669-47f2-97fc-a34bf29c9bc5\") " pod="calico-system/calico-kube-controllers-5d99d68f9d-swq27" Oct 9 01:01:27.859471 kubelet[2577]: I1009 01:01:27.859065 2577 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5p7vv\" (UniqueName: \"kubernetes.io/projected/79b0b543-5f0c-4dfb-9dc8-bfadb1e6489f-kube-api-access-5p7vv\") pod \"coredns-6f6b679f8f-f6s92\" (UID: \"79b0b543-5f0c-4dfb-9dc8-bfadb1e6489f\") " pod="kube-system/coredns-6f6b679f8f-f6s92" Oct 9 01:01:27.859471 kubelet[2577]: I1009 01:01:27.859090 2577 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/104dbbd7-e31c-46d2-8ae4-2ce3a9ced8ae-config-volume\") pod \"coredns-6f6b679f8f-d95gg\" (UID: \"104dbbd7-e31c-46d2-8ae4-2ce3a9ced8ae\") " pod="kube-system/coredns-6f6b679f8f-d95gg" Oct 9 01:01:27.859471 kubelet[2577]: I1009 01:01:27.859121 2577 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/79b0b543-5f0c-4dfb-9dc8-bfadb1e6489f-config-volume\") pod \"coredns-6f6b679f8f-f6s92\" (UID: 
\"79b0b543-5f0c-4dfb-9dc8-bfadb1e6489f\") " pod="kube-system/coredns-6f6b679f8f-f6s92" Oct 9 01:01:27.859471 kubelet[2577]: I1009 01:01:27.859149 2577 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9nvn\" (UniqueName: \"kubernetes.io/projected/104dbbd7-e31c-46d2-8ae4-2ce3a9ced8ae-kube-api-access-v9nvn\") pod \"coredns-6f6b679f8f-d95gg\" (UID: \"104dbbd7-e31c-46d2-8ae4-2ce3a9ced8ae\") " pod="kube-system/coredns-6f6b679f8f-d95gg" Oct 9 01:01:28.105476 kubelet[2577]: E1009 01:01:28.105246 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 01:01:28.106737 containerd[1474]: time="2024-10-09T01:01:28.106392982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-f6s92,Uid:79b0b543-5f0c-4dfb-9dc8-bfadb1e6489f,Namespace:kube-system,Attempt:0,}" Oct 9 01:01:28.124547 kubelet[2577]: E1009 01:01:28.123774 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 01:01:28.125315 containerd[1474]: time="2024-10-09T01:01:28.124986354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-d95gg,Uid:104dbbd7-e31c-46d2-8ae4-2ce3a9ced8ae,Namespace:kube-system,Attempt:0,}" Oct 9 01:01:28.150896 containerd[1474]: time="2024-10-09T01:01:28.150828659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d99d68f9d-swq27,Uid:c458a646-0669-47f2-97fc-a34bf29c9bc5,Namespace:calico-system,Attempt:0,}" Oct 9 01:01:28.449726 containerd[1474]: time="2024-10-09T01:01:28.449567260Z" level=error msg="Failed to destroy network for sandbox \"98b08d0e294dc569debc0a079cdd08bc4658684d5776dc9a5c7210ad6ddf7f31\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:01:28.453695 containerd[1474]: time="2024-10-09T01:01:28.453621631Z" level=error msg="Failed to destroy network for sandbox \"878f1c5b627ec68e440fed59fff778ab9002a0799f2ad44383463214bcb08686\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:01:28.456852 containerd[1474]: time="2024-10-09T01:01:28.455806576Z" level=error msg="Failed to destroy network for sandbox \"6e7c3dec46c9feef948bdd48d45fb2c074d33970aa4adfa01666c39fcdf232b9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:01:28.458595 containerd[1474]: time="2024-10-09T01:01:28.458530789Z" level=error msg="encountered an error cleaning up failed sandbox \"6e7c3dec46c9feef948bdd48d45fb2c074d33970aa4adfa01666c39fcdf232b9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:01:28.458803 containerd[1474]: time="2024-10-09T01:01:28.458762245Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-d95gg,Uid:104dbbd7-e31c-46d2-8ae4-2ce3a9ced8ae,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6e7c3dec46c9feef948bdd48d45fb2c074d33970aa4adfa01666c39fcdf232b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:01:28.471574 containerd[1474]: time="2024-10-09T01:01:28.458966595Z" level=error 
msg="encountered an error cleaning up failed sandbox \"878f1c5b627ec68e440fed59fff778ab9002a0799f2ad44383463214bcb08686\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:01:28.471980 containerd[1474]: time="2024-10-09T01:01:28.471929485Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d99d68f9d-swq27,Uid:c458a646-0669-47f2-97fc-a34bf29c9bc5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"878f1c5b627ec68e440fed59fff778ab9002a0799f2ad44383463214bcb08686\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:01:28.472785 containerd[1474]: time="2024-10-09T01:01:28.458537727Z" level=error msg="encountered an error cleaning up failed sandbox \"98b08d0e294dc569debc0a079cdd08bc4658684d5776dc9a5c7210ad6ddf7f31\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:01:28.473362 kubelet[2577]: E1009 01:01:28.473097 2577 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"878f1c5b627ec68e440fed59fff778ab9002a0799f2ad44383463214bcb08686\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:01:28.473362 kubelet[2577]: E1009 01:01:28.473206 2577 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"878f1c5b627ec68e440fed59fff778ab9002a0799f2ad44383463214bcb08686\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5d99d68f9d-swq27" Oct 9 01:01:28.474232 containerd[1474]: time="2024-10-09T01:01:28.473727715Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-f6s92,Uid:79b0b543-5f0c-4dfb-9dc8-bfadb1e6489f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"98b08d0e294dc569debc0a079cdd08bc4658684d5776dc9a5c7210ad6ddf7f31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:01:28.474339 kubelet[2577]: E1009 01:01:28.473889 2577 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e7c3dec46c9feef948bdd48d45fb2c074d33970aa4adfa01666c39fcdf232b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:01:28.474339 kubelet[2577]: E1009 01:01:28.473962 2577 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e7c3dec46c9feef948bdd48d45fb2c074d33970aa4adfa01666c39fcdf232b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-d95gg" Oct 9 01:01:28.481889 kubelet[2577]: E1009 01:01:28.479710 2577 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"878f1c5b627ec68e440fed59fff778ab9002a0799f2ad44383463214bcb08686\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5d99d68f9d-swq27" Oct 9 01:01:28.481889 kubelet[2577]: E1009 01:01:28.479844 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5d99d68f9d-swq27_calico-system(c458a646-0669-47f2-97fc-a34bf29c9bc5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5d99d68f9d-swq27_calico-system(c458a646-0669-47f2-97fc-a34bf29c9bc5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"878f1c5b627ec68e440fed59fff778ab9002a0799f2ad44383463214bcb08686\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5d99d68f9d-swq27" podUID="c458a646-0669-47f2-97fc-a34bf29c9bc5" Oct 9 01:01:28.481889 kubelet[2577]: E1009 01:01:28.480182 2577 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98b08d0e294dc569debc0a079cdd08bc4658684d5776dc9a5c7210ad6ddf7f31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:01:28.482783 kubelet[2577]: E1009 01:01:28.480220 2577 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98b08d0e294dc569debc0a079cdd08bc4658684d5776dc9a5c7210ad6ddf7f31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-f6s92" Oct 9 01:01:28.482783 kubelet[2577]: E1009 01:01:28.480243 2577 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98b08d0e294dc569debc0a079cdd08bc4658684d5776dc9a5c7210ad6ddf7f31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-f6s92" Oct 9 01:01:28.482783 kubelet[2577]: E1009 01:01:28.480281 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-f6s92_kube-system(79b0b543-5f0c-4dfb-9dc8-bfadb1e6489f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-f6s92_kube-system(79b0b543-5f0c-4dfb-9dc8-bfadb1e6489f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"98b08d0e294dc569debc0a079cdd08bc4658684d5776dc9a5c7210ad6ddf7f31\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-f6s92" podUID="79b0b543-5f0c-4dfb-9dc8-bfadb1e6489f" Oct 9 01:01:28.483060 kubelet[2577]: E1009 01:01:28.482250 2577 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e7c3dec46c9feef948bdd48d45fb2c074d33970aa4adfa01666c39fcdf232b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-d95gg" Oct 9 01:01:28.483186 kubelet[2577]: E1009 01:01:28.482379 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"coredns-6f6b679f8f-d95gg_kube-system(104dbbd7-e31c-46d2-8ae4-2ce3a9ced8ae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-d95gg_kube-system(104dbbd7-e31c-46d2-8ae4-2ce3a9ced8ae)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6e7c3dec46c9feef948bdd48d45fb2c074d33970aa4adfa01666c39fcdf232b9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-d95gg" podUID="104dbbd7-e31c-46d2-8ae4-2ce3a9ced8ae" Oct 9 01:01:28.609403 systemd[1]: Created slice kubepods-besteffort-pode8f89e29_cfb4_4b5e_a7cb_66ed0f9162bb.slice - libcontainer container kubepods-besteffort-pode8f89e29_cfb4_4b5e_a7cb_66ed0f9162bb.slice. Oct 9 01:01:28.613875 containerd[1474]: time="2024-10-09T01:01:28.613823133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-p8pzw,Uid:e8f89e29-cfb4-4b5e-a7cb-66ed0f9162bb,Namespace:calico-system,Attempt:0,}" Oct 9 01:01:28.715394 containerd[1474]: time="2024-10-09T01:01:28.714318816Z" level=error msg="Failed to destroy network for sandbox \"fddcaff6240e61f23bf00c04f4fd4e016011a8807c84e804e6f11e18c3d4f74b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:01:28.715525 containerd[1474]: time="2024-10-09T01:01:28.715487891Z" level=error msg="encountered an error cleaning up failed sandbox \"fddcaff6240e61f23bf00c04f4fd4e016011a8807c84e804e6f11e18c3d4f74b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:01:28.715608 containerd[1474]: 
time="2024-10-09T01:01:28.715578812Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-p8pzw,Uid:e8f89e29-cfb4-4b5e-a7cb-66ed0f9162bb,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fddcaff6240e61f23bf00c04f4fd4e016011a8807c84e804e6f11e18c3d4f74b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:01:28.716000 kubelet[2577]: E1009 01:01:28.715949 2577 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fddcaff6240e61f23bf00c04f4fd4e016011a8807c84e804e6f11e18c3d4f74b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:01:28.716064 kubelet[2577]: E1009 01:01:28.716020 2577 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fddcaff6240e61f23bf00c04f4fd4e016011a8807c84e804e6f11e18c3d4f74b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-p8pzw" Oct 9 01:01:28.716064 kubelet[2577]: E1009 01:01:28.716043 2577 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fddcaff6240e61f23bf00c04f4fd4e016011a8807c84e804e6f11e18c3d4f74b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-p8pzw" Oct 9 01:01:28.716641 kubelet[2577]: E1009 01:01:28.716103 2577 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-p8pzw_calico-system(e8f89e29-cfb4-4b5e-a7cb-66ed0f9162bb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-p8pzw_calico-system(e8f89e29-cfb4-4b5e-a7cb-66ed0f9162bb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fddcaff6240e61f23bf00c04f4fd4e016011a8807c84e804e6f11e18c3d4f74b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-p8pzw" podUID="e8f89e29-cfb4-4b5e-a7cb-66ed0f9162bb" Oct 9 01:01:28.787607 kubelet[2577]: I1009 01:01:28.787555 2577 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fddcaff6240e61f23bf00c04f4fd4e016011a8807c84e804e6f11e18c3d4f74b" Oct 9 01:01:28.788672 containerd[1474]: time="2024-10-09T01:01:28.788504558Z" level=info msg="StopPodSandbox for \"fddcaff6240e61f23bf00c04f4fd4e016011a8807c84e804e6f11e18c3d4f74b\"" Oct 9 01:01:28.793042 kubelet[2577]: I1009 01:01:28.792110 2577 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e7c3dec46c9feef948bdd48d45fb2c074d33970aa4adfa01666c39fcdf232b9" Oct 9 01:01:28.794708 containerd[1474]: time="2024-10-09T01:01:28.794215589Z" level=info msg="StopPodSandbox for \"6e7c3dec46c9feef948bdd48d45fb2c074d33970aa4adfa01666c39fcdf232b9\"" Oct 9 01:01:28.804420 containerd[1474]: time="2024-10-09T01:01:28.804375516Z" level=info msg="Ensure that sandbox 6e7c3dec46c9feef948bdd48d45fb2c074d33970aa4adfa01666c39fcdf232b9 in task-service has been cleanup successfully" Oct 9 01:01:28.809124 containerd[1474]: time="2024-10-09T01:01:28.809069594Z" level=info msg="Ensure that sandbox fddcaff6240e61f23bf00c04f4fd4e016011a8807c84e804e6f11e18c3d4f74b in task-service has been cleanup successfully" Oct 9 01:01:28.812347 
kubelet[2577]: I1009 01:01:28.811026 2577 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98b08d0e294dc569debc0a079cdd08bc4658684d5776dc9a5c7210ad6ddf7f31" Oct 9 01:01:28.814248 containerd[1474]: time="2024-10-09T01:01:28.814156262Z" level=info msg="StopPodSandbox for \"98b08d0e294dc569debc0a079cdd08bc4658684d5776dc9a5c7210ad6ddf7f31\"" Oct 9 01:01:28.816546 containerd[1474]: time="2024-10-09T01:01:28.816505507Z" level=info msg="Ensure that sandbox 98b08d0e294dc569debc0a079cdd08bc4658684d5776dc9a5c7210ad6ddf7f31 in task-service has been cleanup successfully" Oct 9 01:01:28.820510 kubelet[2577]: I1009 01:01:28.820281 2577 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="878f1c5b627ec68e440fed59fff778ab9002a0799f2ad44383463214bcb08686" Oct 9 01:01:28.822524 containerd[1474]: time="2024-10-09T01:01:28.821996491Z" level=info msg="StopPodSandbox for \"878f1c5b627ec68e440fed59fff778ab9002a0799f2ad44383463214bcb08686\"" Oct 9 01:01:28.822524 containerd[1474]: time="2024-10-09T01:01:28.822258450Z" level=info msg="Ensure that sandbox 878f1c5b627ec68e440fed59fff778ab9002a0799f2ad44383463214bcb08686 in task-service has been cleanup successfully" Oct 9 01:01:28.911843 containerd[1474]: time="2024-10-09T01:01:28.911784214Z" level=error msg="StopPodSandbox for \"98b08d0e294dc569debc0a079cdd08bc4658684d5776dc9a5c7210ad6ddf7f31\" failed" error="failed to destroy network for sandbox \"98b08d0e294dc569debc0a079cdd08bc4658684d5776dc9a5c7210ad6ddf7f31\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:01:28.912112 kubelet[2577]: E1009 01:01:28.912041 2577 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"98b08d0e294dc569debc0a079cdd08bc4658684d5776dc9a5c7210ad6ddf7f31\": plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="98b08d0e294dc569debc0a079cdd08bc4658684d5776dc9a5c7210ad6ddf7f31" Oct 9 01:01:28.912234 kubelet[2577]: E1009 01:01:28.912105 2577 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"98b08d0e294dc569debc0a079cdd08bc4658684d5776dc9a5c7210ad6ddf7f31"} Oct 9 01:01:28.912234 kubelet[2577]: E1009 01:01:28.912169 2577 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"79b0b543-5f0c-4dfb-9dc8-bfadb1e6489f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"98b08d0e294dc569debc0a079cdd08bc4658684d5776dc9a5c7210ad6ddf7f31\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 9 01:01:28.912234 kubelet[2577]: E1009 01:01:28.912199 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"79b0b543-5f0c-4dfb-9dc8-bfadb1e6489f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"98b08d0e294dc569debc0a079cdd08bc4658684d5776dc9a5c7210ad6ddf7f31\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-f6s92" podUID="79b0b543-5f0c-4dfb-9dc8-bfadb1e6489f" Oct 9 01:01:28.915338 containerd[1474]: time="2024-10-09T01:01:28.915277784Z" level=error msg="StopPodSandbox for \"fddcaff6240e61f23bf00c04f4fd4e016011a8807c84e804e6f11e18c3d4f74b\" failed" error="failed to destroy network for sandbox \"fddcaff6240e61f23bf00c04f4fd4e016011a8807c84e804e6f11e18c3d4f74b\": plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:01:28.916067 kubelet[2577]: E1009 01:01:28.915799 2577 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fddcaff6240e61f23bf00c04f4fd4e016011a8807c84e804e6f11e18c3d4f74b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fddcaff6240e61f23bf00c04f4fd4e016011a8807c84e804e6f11e18c3d4f74b" Oct 9 01:01:28.916067 kubelet[2577]: E1009 01:01:28.915900 2577 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fddcaff6240e61f23bf00c04f4fd4e016011a8807c84e804e6f11e18c3d4f74b"} Oct 9 01:01:28.916067 kubelet[2577]: E1009 01:01:28.915987 2577 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e8f89e29-cfb4-4b5e-a7cb-66ed0f9162bb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fddcaff6240e61f23bf00c04f4fd4e016011a8807c84e804e6f11e18c3d4f74b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 9 01:01:28.916067 kubelet[2577]: E1009 01:01:28.916021 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e8f89e29-cfb4-4b5e-a7cb-66ed0f9162bb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fddcaff6240e61f23bf00c04f4fd4e016011a8807c84e804e6f11e18c3d4f74b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-p8pzw" podUID="e8f89e29-cfb4-4b5e-a7cb-66ed0f9162bb" Oct 9 01:01:28.922078 containerd[1474]: time="2024-10-09T01:01:28.922013098Z" level=error msg="StopPodSandbox for \"6e7c3dec46c9feef948bdd48d45fb2c074d33970aa4adfa01666c39fcdf232b9\" failed" error="failed to destroy network for sandbox \"6e7c3dec46c9feef948bdd48d45fb2c074d33970aa4adfa01666c39fcdf232b9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:01:28.922528 kubelet[2577]: E1009 01:01:28.922472 2577 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6e7c3dec46c9feef948bdd48d45fb2c074d33970aa4adfa01666c39fcdf232b9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6e7c3dec46c9feef948bdd48d45fb2c074d33970aa4adfa01666c39fcdf232b9" Oct 9 01:01:28.924021 kubelet[2577]: E1009 01:01:28.922935 2577 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6e7c3dec46c9feef948bdd48d45fb2c074d33970aa4adfa01666c39fcdf232b9"} Oct 9 01:01:28.924021 kubelet[2577]: E1009 01:01:28.922993 2577 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"104dbbd7-e31c-46d2-8ae4-2ce3a9ced8ae\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6e7c3dec46c9feef948bdd48d45fb2c074d33970aa4adfa01666c39fcdf232b9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 9 01:01:28.924021 kubelet[2577]: E1009 01:01:28.923024 2577 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"104dbbd7-e31c-46d2-8ae4-2ce3a9ced8ae\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6e7c3dec46c9feef948bdd48d45fb2c074d33970aa4adfa01666c39fcdf232b9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-d95gg" podUID="104dbbd7-e31c-46d2-8ae4-2ce3a9ced8ae" Oct 9 01:01:28.924927 containerd[1474]: time="2024-10-09T01:01:28.924845473Z" level=error msg="StopPodSandbox for \"878f1c5b627ec68e440fed59fff778ab9002a0799f2ad44383463214bcb08686\" failed" error="failed to destroy network for sandbox \"878f1c5b627ec68e440fed59fff778ab9002a0799f2ad44383463214bcb08686\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:01:28.925332 kubelet[2577]: E1009 01:01:28.925267 2577 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"878f1c5b627ec68e440fed59fff778ab9002a0799f2ad44383463214bcb08686\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="878f1c5b627ec68e440fed59fff778ab9002a0799f2ad44383463214bcb08686" Oct 9 01:01:28.925395 kubelet[2577]: E1009 01:01:28.925370 2577 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"878f1c5b627ec68e440fed59fff778ab9002a0799f2ad44383463214bcb08686"} Oct 9 01:01:28.925549 kubelet[2577]: E1009 01:01:28.925424 2577 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c458a646-0669-47f2-97fc-a34bf29c9bc5\" 
with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"878f1c5b627ec68e440fed59fff778ab9002a0799f2ad44383463214bcb08686\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 9 01:01:28.925618 kubelet[2577]: E1009 01:01:28.925568 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c458a646-0669-47f2-97fc-a34bf29c9bc5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"878f1c5b627ec68e440fed59fff778ab9002a0799f2ad44383463214bcb08686\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5d99d68f9d-swq27" podUID="c458a646-0669-47f2-97fc-a34bf29c9bc5" Oct 9 01:01:28.993930 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6e7c3dec46c9feef948bdd48d45fb2c074d33970aa4adfa01666c39fcdf232b9-shm.mount: Deactivated successfully. Oct 9 01:01:28.994061 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-98b08d0e294dc569debc0a079cdd08bc4658684d5776dc9a5c7210ad6ddf7f31-shm.mount: Deactivated successfully. Oct 9 01:01:33.490582 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4280559315.mount: Deactivated successfully. 
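Every failed sandbox operation above shares one root cause: `stat /var/lib/calico/nodename: no such file or directory`. The path comes verbatim from the log; a minimal node-side triage sketch follows (the claim that `calico/node` writes this file once it is up is an assumption based on the hint embedded in the error text, not on anything else in this log):

```shell
#!/bin/sh
# Triage sketch for the repeated CNI failure in the log above.
# /var/lib/calico/nodename is taken verbatim from the error messages;
# that calico/node creates it once running (with /var/lib/calico/ mounted)
# is an assumption based on the remediation hint in the error text.
NODENAME_FILE=/var/lib/calico/nodename

if [ -f "$NODENAME_FILE" ]; then
    # File present: the CNI plugin can resolve the node name, so the
    # "failed (add)" / "failed (delete)" errors should stop recurring.
    echo "present: $(cat "$NODENAME_FILE")"
else
    # File absent: every RunPodSandbox/StopPodSandbox keeps failing
    # exactly as logged until calico-node mounts /var/lib/calico/.
    echo "missing: $NODENAME_FILE"
fi
```

Consistent with this reading, the sandbox teardown at 01:01:40 (later in the log) succeeds only after the calico-node container is started at 01:01:33.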
Oct 9 01:01:33.533060 containerd[1474]: time="2024-10-09T01:01:33.532763329Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:01:33.537496 containerd[1474]: time="2024-10-09T01:01:33.535797139Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.1: active requests=0, bytes read=117873564" Oct 9 01:01:33.543096 containerd[1474]: time="2024-10-09T01:01:33.542699121Z" level=info msg="ImageCreate event name:\"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:01:33.547659 containerd[1474]: time="2024-10-09T01:01:33.547610452Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:01:33.547960 containerd[1474]: time="2024-10-09T01:01:33.547924698Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.1\" with image id \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\", size \"117873426\" in 5.76152593s" Oct 9 01:01:33.548047 containerd[1474]: time="2024-10-09T01:01:33.547964261Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\" returns image reference \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\"" Oct 9 01:01:33.608351 containerd[1474]: time="2024-10-09T01:01:33.608294481Z" level=info msg="CreateContainer within sandbox \"0c470cdda0053f9bef9217c11f754eaabff7d13a1e0219f76dc7020e319feb11\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 9 01:01:33.677935 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4102673263.mount: Deactivated 
successfully. Oct 9 01:01:33.679252 containerd[1474]: time="2024-10-09T01:01:33.679193381Z" level=info msg="CreateContainer within sandbox \"0c470cdda0053f9bef9217c11f754eaabff7d13a1e0219f76dc7020e319feb11\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"acf278550e19c496c7b63aa6c42c233c7f83d39995ccb69bee83ec8bb01824ce\"" Oct 9 01:01:33.682022 containerd[1474]: time="2024-10-09T01:01:33.681972001Z" level=info msg="StartContainer for \"acf278550e19c496c7b63aa6c42c233c7f83d39995ccb69bee83ec8bb01824ce\"" Oct 9 01:01:33.815760 systemd[1]: Started cri-containerd-acf278550e19c496c7b63aa6c42c233c7f83d39995ccb69bee83ec8bb01824ce.scope - libcontainer container acf278550e19c496c7b63aa6c42c233c7f83d39995ccb69bee83ec8bb01824ce. Oct 9 01:01:33.894534 containerd[1474]: time="2024-10-09T01:01:33.894416792Z" level=info msg="StartContainer for \"acf278550e19c496c7b63aa6c42c233c7f83d39995ccb69bee83ec8bb01824ce\" returns successfully" Oct 9 01:01:33.916276 kubelet[2577]: E1009 01:01:33.916192 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 01:01:33.958063 kubelet[2577]: I1009 01:01:33.957971 2577 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-xtblv" podStartSLOduration=1.860660855 podStartE2EDuration="17.957943921s" podCreationTimestamp="2024-10-09 01:01:16 +0000 UTC" firstStartedPulling="2024-10-09 01:01:17.45718999 +0000 UTC m=+18.991023513" lastFinishedPulling="2024-10-09 01:01:33.554473073 +0000 UTC m=+35.088306579" observedRunningTime="2024-10-09 01:01:33.954570202 +0000 UTC m=+35.488403729" watchObservedRunningTime="2024-10-09 01:01:33.957943921 +0000 UTC m=+35.491777451" Oct 9 01:01:34.033015 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 9 01:01:34.035041 kernel: wireguard: Copyright (C) 2015-2019 Jason A. 
Donenfeld . All Rights Reserved. Oct 9 01:01:34.913687 kubelet[2577]: E1009 01:01:34.912975 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 01:01:35.915761 kubelet[2577]: E1009 01:01:35.915132 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 01:01:40.600983 kubelet[2577]: I1009 01:01:40.599599 2577 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 9 01:01:40.600983 kubelet[2577]: E1009 01:01:40.600160 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 01:01:40.608520 containerd[1474]: time="2024-10-09T01:01:40.608427753Z" level=info msg="StopPodSandbox for \"fddcaff6240e61f23bf00c04f4fd4e016011a8807c84e804e6f11e18c3d4f74b\"" Oct 9 01:01:40.897246 containerd[1474]: 2024-10-09 01:01:40.709 [INFO][3907] k8s.go 608: Cleaning up netns ContainerID="fddcaff6240e61f23bf00c04f4fd4e016011a8807c84e804e6f11e18c3d4f74b" Oct 9 01:01:40.897246 containerd[1474]: 2024-10-09 01:01:40.710 [INFO][3907] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="fddcaff6240e61f23bf00c04f4fd4e016011a8807c84e804e6f11e18c3d4f74b" iface="eth0" netns="/var/run/netns/cni-bd0acecf-201b-f4a7-fcf9-1a30f2f35f99" Oct 9 01:01:40.897246 containerd[1474]: 2024-10-09 01:01:40.710 [INFO][3907] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="fddcaff6240e61f23bf00c04f4fd4e016011a8807c84e804e6f11e18c3d4f74b" iface="eth0" netns="/var/run/netns/cni-bd0acecf-201b-f4a7-fcf9-1a30f2f35f99" Oct 9 01:01:40.897246 containerd[1474]: 2024-10-09 01:01:40.711 [INFO][3907] dataplane_linux.go 568: Workload's veth was already gone. 
Nothing to do. ContainerID="fddcaff6240e61f23bf00c04f4fd4e016011a8807c84e804e6f11e18c3d4f74b" iface="eth0" netns="/var/run/netns/cni-bd0acecf-201b-f4a7-fcf9-1a30f2f35f99" Oct 9 01:01:40.897246 containerd[1474]: 2024-10-09 01:01:40.711 [INFO][3907] k8s.go 615: Releasing IP address(es) ContainerID="fddcaff6240e61f23bf00c04f4fd4e016011a8807c84e804e6f11e18c3d4f74b" Oct 9 01:01:40.897246 containerd[1474]: 2024-10-09 01:01:40.711 [INFO][3907] utils.go 188: Calico CNI releasing IP address ContainerID="fddcaff6240e61f23bf00c04f4fd4e016011a8807c84e804e6f11e18c3d4f74b" Oct 9 01:01:40.897246 containerd[1474]: 2024-10-09 01:01:40.876 [INFO][3915] ipam_plugin.go 417: Releasing address using handleID ContainerID="fddcaff6240e61f23bf00c04f4fd4e016011a8807c84e804e6f11e18c3d4f74b" HandleID="k8s-pod-network.fddcaff6240e61f23bf00c04f4fd4e016011a8807c84e804e6f11e18c3d4f74b" Workload="ci--4116.0.0--d--2a8a4ec573-k8s-csi--node--driver--p8pzw-eth0" Oct 9 01:01:40.897246 containerd[1474]: 2024-10-09 01:01:40.878 [INFO][3915] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:01:40.897246 containerd[1474]: 2024-10-09 01:01:40.878 [INFO][3915] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:01:40.897246 containerd[1474]: 2024-10-09 01:01:40.890 [WARNING][3915] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fddcaff6240e61f23bf00c04f4fd4e016011a8807c84e804e6f11e18c3d4f74b" HandleID="k8s-pod-network.fddcaff6240e61f23bf00c04f4fd4e016011a8807c84e804e6f11e18c3d4f74b" Workload="ci--4116.0.0--d--2a8a4ec573-k8s-csi--node--driver--p8pzw-eth0" Oct 9 01:01:40.897246 containerd[1474]: 2024-10-09 01:01:40.890 [INFO][3915] ipam_plugin.go 445: Releasing address using workloadID ContainerID="fddcaff6240e61f23bf00c04f4fd4e016011a8807c84e804e6f11e18c3d4f74b" HandleID="k8s-pod-network.fddcaff6240e61f23bf00c04f4fd4e016011a8807c84e804e6f11e18c3d4f74b" Workload="ci--4116.0.0--d--2a8a4ec573-k8s-csi--node--driver--p8pzw-eth0" Oct 9 01:01:40.897246 containerd[1474]: 2024-10-09 01:01:40.892 [INFO][3915] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:01:40.897246 containerd[1474]: 2024-10-09 01:01:40.894 [INFO][3907] k8s.go 621: Teardown processing complete. ContainerID="fddcaff6240e61f23bf00c04f4fd4e016011a8807c84e804e6f11e18c3d4f74b" Oct 9 01:01:40.901506 containerd[1474]: time="2024-10-09T01:01:40.900569210Z" level=info msg="TearDown network for sandbox \"fddcaff6240e61f23bf00c04f4fd4e016011a8807c84e804e6f11e18c3d4f74b\" successfully" Oct 9 01:01:40.901506 containerd[1474]: time="2024-10-09T01:01:40.900607220Z" level=info msg="StopPodSandbox for \"fddcaff6240e61f23bf00c04f4fd4e016011a8807c84e804e6f11e18c3d4f74b\" returns successfully" Oct 9 01:01:40.901313 systemd[1]: run-netns-cni\x2dbd0acecf\x2d201b\x2df4a7\x2dfcf9\x2d1a30f2f35f99.mount: Deactivated successfully. 
Oct 9 01:01:40.918321 containerd[1474]: time="2024-10-09T01:01:40.918218265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-p8pzw,Uid:e8f89e29-cfb4-4b5e-a7cb-66ed0f9162bb,Namespace:calico-system,Attempt:1,}" Oct 9 01:01:40.929007 kubelet[2577]: E1009 01:01:40.928581 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 01:01:41.203852 systemd-networkd[1378]: cali78ccef8953d: Link UP Oct 9 01:01:41.204173 systemd-networkd[1378]: cali78ccef8953d: Gained carrier Oct 9 01:01:41.228375 containerd[1474]: 2024-10-09 01:01:41.041 [INFO][3922] utils.go 100: File /var/lib/calico/mtu does not exist Oct 9 01:01:41.228375 containerd[1474]: 2024-10-09 01:01:41.056 [INFO][3922] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4116.0.0--d--2a8a4ec573-k8s-csi--node--driver--p8pzw-eth0 csi-node-driver- calico-system e8f89e29-cfb4-4b5e-a7cb-66ed0f9162bb 710 0 2024-10-09 01:01:17 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:779867c8f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ci-4116.0.0-d-2a8a4ec573 csi-node-driver-p8pzw eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali78ccef8953d [] []}} ContainerID="b55ce21a95139a92ec4a09f9708447497842afad790a81f73ddc92127596b15c" Namespace="calico-system" Pod="csi-node-driver-p8pzw" WorkloadEndpoint="ci--4116.0.0--d--2a8a4ec573-k8s-csi--node--driver--p8pzw-" Oct 9 01:01:41.228375 containerd[1474]: 2024-10-09 01:01:41.057 [INFO][3922] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b55ce21a95139a92ec4a09f9708447497842afad790a81f73ddc92127596b15c" Namespace="calico-system" Pod="csi-node-driver-p8pzw" 
WorkloadEndpoint="ci--4116.0.0--d--2a8a4ec573-k8s-csi--node--driver--p8pzw-eth0" Oct 9 01:01:41.228375 containerd[1474]: 2024-10-09 01:01:41.101 [INFO][3937] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b55ce21a95139a92ec4a09f9708447497842afad790a81f73ddc92127596b15c" HandleID="k8s-pod-network.b55ce21a95139a92ec4a09f9708447497842afad790a81f73ddc92127596b15c" Workload="ci--4116.0.0--d--2a8a4ec573-k8s-csi--node--driver--p8pzw-eth0" Oct 9 01:01:41.228375 containerd[1474]: 2024-10-09 01:01:41.113 [INFO][3937] ipam_plugin.go 270: Auto assigning IP ContainerID="b55ce21a95139a92ec4a09f9708447497842afad790a81f73ddc92127596b15c" HandleID="k8s-pod-network.b55ce21a95139a92ec4a09f9708447497842afad790a81f73ddc92127596b15c" Workload="ci--4116.0.0--d--2a8a4ec573-k8s-csi--node--driver--p8pzw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000116a10), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4116.0.0-d-2a8a4ec573", "pod":"csi-node-driver-p8pzw", "timestamp":"2024-10-09 01:01:41.101848457 +0000 UTC"}, Hostname:"ci-4116.0.0-d-2a8a4ec573", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 01:01:41.228375 containerd[1474]: 2024-10-09 01:01:41.113 [INFO][3937] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:01:41.228375 containerd[1474]: 2024-10-09 01:01:41.113 [INFO][3937] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 01:01:41.228375 containerd[1474]: 2024-10-09 01:01:41.113 [INFO][3937] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4116.0.0-d-2a8a4ec573' Oct 9 01:01:41.228375 containerd[1474]: 2024-10-09 01:01:41.118 [INFO][3937] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b55ce21a95139a92ec4a09f9708447497842afad790a81f73ddc92127596b15c" host="ci-4116.0.0-d-2a8a4ec573" Oct 9 01:01:41.228375 containerd[1474]: 2024-10-09 01:01:41.130 [INFO][3937] ipam.go 372: Looking up existing affinities for host host="ci-4116.0.0-d-2a8a4ec573" Oct 9 01:01:41.228375 containerd[1474]: 2024-10-09 01:01:41.143 [INFO][3937] ipam.go 489: Trying affinity for 192.168.13.64/26 host="ci-4116.0.0-d-2a8a4ec573" Oct 9 01:01:41.228375 containerd[1474]: 2024-10-09 01:01:41.146 [INFO][3937] ipam.go 155: Attempting to load block cidr=192.168.13.64/26 host="ci-4116.0.0-d-2a8a4ec573" Oct 9 01:01:41.228375 containerd[1474]: 2024-10-09 01:01:41.151 [INFO][3937] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.13.64/26 host="ci-4116.0.0-d-2a8a4ec573" Oct 9 01:01:41.228375 containerd[1474]: 2024-10-09 01:01:41.152 [INFO][3937] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.13.64/26 handle="k8s-pod-network.b55ce21a95139a92ec4a09f9708447497842afad790a81f73ddc92127596b15c" host="ci-4116.0.0-d-2a8a4ec573" Oct 9 01:01:41.228375 containerd[1474]: 2024-10-09 01:01:41.154 [INFO][3937] ipam.go 1685: Creating new handle: k8s-pod-network.b55ce21a95139a92ec4a09f9708447497842afad790a81f73ddc92127596b15c Oct 9 01:01:41.228375 containerd[1474]: 2024-10-09 01:01:41.161 [INFO][3937] ipam.go 1203: Writing block in order to claim IPs block=192.168.13.64/26 handle="k8s-pod-network.b55ce21a95139a92ec4a09f9708447497842afad790a81f73ddc92127596b15c" host="ci-4116.0.0-d-2a8a4ec573" Oct 9 01:01:41.228375 containerd[1474]: 2024-10-09 01:01:41.176 [INFO][3937] ipam.go 1216: Successfully claimed IPs: [192.168.13.65/26] block=192.168.13.64/26 
handle="k8s-pod-network.b55ce21a95139a92ec4a09f9708447497842afad790a81f73ddc92127596b15c" host="ci-4116.0.0-d-2a8a4ec573" Oct 9 01:01:41.228375 containerd[1474]: 2024-10-09 01:01:41.176 [INFO][3937] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.13.65/26] handle="k8s-pod-network.b55ce21a95139a92ec4a09f9708447497842afad790a81f73ddc92127596b15c" host="ci-4116.0.0-d-2a8a4ec573" Oct 9 01:01:41.228375 containerd[1474]: 2024-10-09 01:01:41.176 [INFO][3937] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:01:41.228375 containerd[1474]: 2024-10-09 01:01:41.176 [INFO][3937] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.13.65/26] IPv6=[] ContainerID="b55ce21a95139a92ec4a09f9708447497842afad790a81f73ddc92127596b15c" HandleID="k8s-pod-network.b55ce21a95139a92ec4a09f9708447497842afad790a81f73ddc92127596b15c" Workload="ci--4116.0.0--d--2a8a4ec573-k8s-csi--node--driver--p8pzw-eth0" Oct 9 01:01:41.233697 containerd[1474]: 2024-10-09 01:01:41.180 [INFO][3922] k8s.go 386: Populated endpoint ContainerID="b55ce21a95139a92ec4a09f9708447497842afad790a81f73ddc92127596b15c" Namespace="calico-system" Pod="csi-node-driver-p8pzw" WorkloadEndpoint="ci--4116.0.0--d--2a8a4ec573-k8s-csi--node--driver--p8pzw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116.0.0--d--2a8a4ec573-k8s-csi--node--driver--p8pzw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e8f89e29-cfb4-4b5e-a7cb-66ed0f9162bb", ResourceVersion:"710", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 1, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"779867c8f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116.0.0-d-2a8a4ec573", ContainerID:"", Pod:"csi-node-driver-p8pzw", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.13.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali78ccef8953d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:01:41.233697 containerd[1474]: 2024-10-09 01:01:41.180 [INFO][3922] k8s.go 387: Calico CNI using IPs: [192.168.13.65/32] ContainerID="b55ce21a95139a92ec4a09f9708447497842afad790a81f73ddc92127596b15c" Namespace="calico-system" Pod="csi-node-driver-p8pzw" WorkloadEndpoint="ci--4116.0.0--d--2a8a4ec573-k8s-csi--node--driver--p8pzw-eth0" Oct 9 01:01:41.233697 containerd[1474]: 2024-10-09 01:01:41.180 [INFO][3922] dataplane_linux.go 68: Setting the host side veth name to cali78ccef8953d ContainerID="b55ce21a95139a92ec4a09f9708447497842afad790a81f73ddc92127596b15c" Namespace="calico-system" Pod="csi-node-driver-p8pzw" WorkloadEndpoint="ci--4116.0.0--d--2a8a4ec573-k8s-csi--node--driver--p8pzw-eth0" Oct 9 01:01:41.233697 containerd[1474]: 2024-10-09 01:01:41.200 [INFO][3922] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="b55ce21a95139a92ec4a09f9708447497842afad790a81f73ddc92127596b15c" Namespace="calico-system" Pod="csi-node-driver-p8pzw" WorkloadEndpoint="ci--4116.0.0--d--2a8a4ec573-k8s-csi--node--driver--p8pzw-eth0" Oct 9 01:01:41.233697 containerd[1474]: 2024-10-09 01:01:41.202 [INFO][3922] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b55ce21a95139a92ec4a09f9708447497842afad790a81f73ddc92127596b15c" 
Namespace="calico-system" Pod="csi-node-driver-p8pzw" WorkloadEndpoint="ci--4116.0.0--d--2a8a4ec573-k8s-csi--node--driver--p8pzw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116.0.0--d--2a8a4ec573-k8s-csi--node--driver--p8pzw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e8f89e29-cfb4-4b5e-a7cb-66ed0f9162bb", ResourceVersion:"710", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 1, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"779867c8f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116.0.0-d-2a8a4ec573", ContainerID:"b55ce21a95139a92ec4a09f9708447497842afad790a81f73ddc92127596b15c", Pod:"csi-node-driver-p8pzw", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.13.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali78ccef8953d", MAC:"8e:a7:3c:a5:ba:b1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:01:41.233697 containerd[1474]: 2024-10-09 01:01:41.221 [INFO][3922] k8s.go 500: Wrote updated endpoint to datastore ContainerID="b55ce21a95139a92ec4a09f9708447497842afad790a81f73ddc92127596b15c" Namespace="calico-system" Pod="csi-node-driver-p8pzw" 
WorkloadEndpoint="ci--4116.0.0--d--2a8a4ec573-k8s-csi--node--driver--p8pzw-eth0" Oct 9 01:01:41.280481 kernel: bpftool[3979]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Oct 9 01:01:41.283188 containerd[1474]: time="2024-10-09T01:01:41.282768024Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:01:41.283610 containerd[1474]: time="2024-10-09T01:01:41.283122444Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:01:41.283610 containerd[1474]: time="2024-10-09T01:01:41.283141016Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:01:41.283764 containerd[1474]: time="2024-10-09T01:01:41.283511315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:01:41.323981 systemd[1]: run-containerd-runc-k8s.io-b55ce21a95139a92ec4a09f9708447497842afad790a81f73ddc92127596b15c-runc.l3COPH.mount: Deactivated successfully. Oct 9 01:01:41.335781 systemd[1]: Started cri-containerd-b55ce21a95139a92ec4a09f9708447497842afad790a81f73ddc92127596b15c.scope - libcontainer container b55ce21a95139a92ec4a09f9708447497842afad790a81f73ddc92127596b15c. 
Oct 9 01:01:41.411864 containerd[1474]: time="2024-10-09T01:01:41.411816783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-p8pzw,Uid:e8f89e29-cfb4-4b5e-a7cb-66ed0f9162bb,Namespace:calico-system,Attempt:1,} returns sandbox id \"b55ce21a95139a92ec4a09f9708447497842afad790a81f73ddc92127596b15c\"" Oct 9 01:01:41.428866 containerd[1474]: time="2024-10-09T01:01:41.428812812Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\"" Oct 9 01:01:41.602124 containerd[1474]: time="2024-10-09T01:01:41.601697492Z" level=info msg="StopPodSandbox for \"6e7c3dec46c9feef948bdd48d45fb2c074d33970aa4adfa01666c39fcdf232b9\"" Oct 9 01:01:41.739465 containerd[1474]: 2024-10-09 01:01:41.687 [INFO][4046] k8s.go 608: Cleaning up netns ContainerID="6e7c3dec46c9feef948bdd48d45fb2c074d33970aa4adfa01666c39fcdf232b9" Oct 9 01:01:41.739465 containerd[1474]: 2024-10-09 01:01:41.687 [INFO][4046] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="6e7c3dec46c9feef948bdd48d45fb2c074d33970aa4adfa01666c39fcdf232b9" iface="eth0" netns="/var/run/netns/cni-ce8aa204-648c-e08c-35ec-910dedb4f879" Oct 9 01:01:41.739465 containerd[1474]: 2024-10-09 01:01:41.688 [INFO][4046] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="6e7c3dec46c9feef948bdd48d45fb2c074d33970aa4adfa01666c39fcdf232b9" iface="eth0" netns="/var/run/netns/cni-ce8aa204-648c-e08c-35ec-910dedb4f879" Oct 9 01:01:41.739465 containerd[1474]: 2024-10-09 01:01:41.688 [INFO][4046] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="6e7c3dec46c9feef948bdd48d45fb2c074d33970aa4adfa01666c39fcdf232b9" iface="eth0" netns="/var/run/netns/cni-ce8aa204-648c-e08c-35ec-910dedb4f879" Oct 9 01:01:41.739465 containerd[1474]: 2024-10-09 01:01:41.688 [INFO][4046] k8s.go 615: Releasing IP address(es) ContainerID="6e7c3dec46c9feef948bdd48d45fb2c074d33970aa4adfa01666c39fcdf232b9" Oct 9 01:01:41.739465 containerd[1474]: 2024-10-09 01:01:41.688 [INFO][4046] utils.go 188: Calico CNI releasing IP address ContainerID="6e7c3dec46c9feef948bdd48d45fb2c074d33970aa4adfa01666c39fcdf232b9" Oct 9 01:01:41.739465 containerd[1474]: 2024-10-09 01:01:41.724 [INFO][4068] ipam_plugin.go 417: Releasing address using handleID ContainerID="6e7c3dec46c9feef948bdd48d45fb2c074d33970aa4adfa01666c39fcdf232b9" HandleID="k8s-pod-network.6e7c3dec46c9feef948bdd48d45fb2c074d33970aa4adfa01666c39fcdf232b9" Workload="ci--4116.0.0--d--2a8a4ec573-k8s-coredns--6f6b679f8f--d95gg-eth0" Oct 9 01:01:41.739465 containerd[1474]: 2024-10-09 01:01:41.724 [INFO][4068] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:01:41.739465 containerd[1474]: 2024-10-09 01:01:41.724 [INFO][4068] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:01:41.739465 containerd[1474]: 2024-10-09 01:01:41.733 [WARNING][4068] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6e7c3dec46c9feef948bdd48d45fb2c074d33970aa4adfa01666c39fcdf232b9" HandleID="k8s-pod-network.6e7c3dec46c9feef948bdd48d45fb2c074d33970aa4adfa01666c39fcdf232b9" Workload="ci--4116.0.0--d--2a8a4ec573-k8s-coredns--6f6b679f8f--d95gg-eth0" Oct 9 01:01:41.739465 containerd[1474]: 2024-10-09 01:01:41.733 [INFO][4068] ipam_plugin.go 445: Releasing address using workloadID ContainerID="6e7c3dec46c9feef948bdd48d45fb2c074d33970aa4adfa01666c39fcdf232b9" HandleID="k8s-pod-network.6e7c3dec46c9feef948bdd48d45fb2c074d33970aa4adfa01666c39fcdf232b9" Workload="ci--4116.0.0--d--2a8a4ec573-k8s-coredns--6f6b679f8f--d95gg-eth0" Oct 9 01:01:41.739465 containerd[1474]: 2024-10-09 01:01:41.735 [INFO][4068] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:01:41.739465 containerd[1474]: 2024-10-09 01:01:41.737 [INFO][4046] k8s.go 621: Teardown processing complete. ContainerID="6e7c3dec46c9feef948bdd48d45fb2c074d33970aa4adfa01666c39fcdf232b9" Oct 9 01:01:41.740688 containerd[1474]: time="2024-10-09T01:01:41.740379616Z" level=info msg="TearDown network for sandbox \"6e7c3dec46c9feef948bdd48d45fb2c074d33970aa4adfa01666c39fcdf232b9\" successfully" Oct 9 01:01:41.740688 containerd[1474]: time="2024-10-09T01:01:41.740418473Z" level=info msg="StopPodSandbox for \"6e7c3dec46c9feef948bdd48d45fb2c074d33970aa4adfa01666c39fcdf232b9\" returns successfully" Oct 9 01:01:41.741149 kubelet[2577]: E1009 01:01:41.741010 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 01:01:41.743024 containerd[1474]: time="2024-10-09T01:01:41.741424949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-d95gg,Uid:104dbbd7-e31c-46d2-8ae4-2ce3a9ced8ae,Namespace:kube-system,Attempt:1,}" Oct 9 01:01:41.904213 systemd[1]: run-netns-cni\x2dce8aa204\x2d648c\x2de08c\x2d35ec\x2d910dedb4f879.mount: Deactivated successfully. 
Oct 9 01:01:42.035594 systemd-networkd[1378]: calicf0d090d3d6: Link UP Oct 9 01:01:42.035986 systemd-networkd[1378]: calicf0d090d3d6: Gained carrier Oct 9 01:01:42.070741 containerd[1474]: 2024-10-09 01:01:41.870 [INFO][4080] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4116.0.0--d--2a8a4ec573-k8s-coredns--6f6b679f8f--d95gg-eth0 coredns-6f6b679f8f- kube-system 104dbbd7-e31c-46d2-8ae4-2ce3a9ced8ae 718 0 2024-10-09 01:01:03 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4116.0.0-d-2a8a4ec573 coredns-6f6b679f8f-d95gg eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calicf0d090d3d6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="7f9740f19e231205dc04574abcfd166a1429070af1fbc01c7d4032978d200ceb" Namespace="kube-system" Pod="coredns-6f6b679f8f-d95gg" WorkloadEndpoint="ci--4116.0.0--d--2a8a4ec573-k8s-coredns--6f6b679f8f--d95gg-" Oct 9 01:01:42.070741 containerd[1474]: 2024-10-09 01:01:41.870 [INFO][4080] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7f9740f19e231205dc04574abcfd166a1429070af1fbc01c7d4032978d200ceb" Namespace="kube-system" Pod="coredns-6f6b679f8f-d95gg" WorkloadEndpoint="ci--4116.0.0--d--2a8a4ec573-k8s-coredns--6f6b679f8f--d95gg-eth0" Oct 9 01:01:42.070741 containerd[1474]: 2024-10-09 01:01:41.941 [INFO][4090] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7f9740f19e231205dc04574abcfd166a1429070af1fbc01c7d4032978d200ceb" HandleID="k8s-pod-network.7f9740f19e231205dc04574abcfd166a1429070af1fbc01c7d4032978d200ceb" Workload="ci--4116.0.0--d--2a8a4ec573-k8s-coredns--6f6b679f8f--d95gg-eth0" Oct 9 01:01:42.070741 containerd[1474]: 2024-10-09 01:01:41.957 [INFO][4090] ipam_plugin.go 270: Auto assigning IP 
ContainerID="7f9740f19e231205dc04574abcfd166a1429070af1fbc01c7d4032978d200ceb" HandleID="k8s-pod-network.7f9740f19e231205dc04574abcfd166a1429070af1fbc01c7d4032978d200ceb" Workload="ci--4116.0.0--d--2a8a4ec573-k8s-coredns--6f6b679f8f--d95gg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00026cec0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4116.0.0-d-2a8a4ec573", "pod":"coredns-6f6b679f8f-d95gg", "timestamp":"2024-10-09 01:01:41.941467749 +0000 UTC"}, Hostname:"ci-4116.0.0-d-2a8a4ec573", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 01:01:42.070741 containerd[1474]: 2024-10-09 01:01:41.958 [INFO][4090] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:01:42.070741 containerd[1474]: 2024-10-09 01:01:41.958 [INFO][4090] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 01:01:42.070741 containerd[1474]: 2024-10-09 01:01:41.958 [INFO][4090] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4116.0.0-d-2a8a4ec573' Oct 9 01:01:42.070741 containerd[1474]: 2024-10-09 01:01:41.963 [INFO][4090] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7f9740f19e231205dc04574abcfd166a1429070af1fbc01c7d4032978d200ceb" host="ci-4116.0.0-d-2a8a4ec573" Oct 9 01:01:42.070741 containerd[1474]: 2024-10-09 01:01:41.971 [INFO][4090] ipam.go 372: Looking up existing affinities for host host="ci-4116.0.0-d-2a8a4ec573" Oct 9 01:01:42.070741 containerd[1474]: 2024-10-09 01:01:41.982 [INFO][4090] ipam.go 489: Trying affinity for 192.168.13.64/26 host="ci-4116.0.0-d-2a8a4ec573" Oct 9 01:01:42.070741 containerd[1474]: 2024-10-09 01:01:41.988 [INFO][4090] ipam.go 155: Attempting to load block cidr=192.168.13.64/26 host="ci-4116.0.0-d-2a8a4ec573" Oct 9 01:01:42.070741 containerd[1474]: 2024-10-09 01:01:41.994 [INFO][4090] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.13.64/26 host="ci-4116.0.0-d-2a8a4ec573" Oct 9 01:01:42.070741 containerd[1474]: 2024-10-09 01:01:41.994 [INFO][4090] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.13.64/26 handle="k8s-pod-network.7f9740f19e231205dc04574abcfd166a1429070af1fbc01c7d4032978d200ceb" host="ci-4116.0.0-d-2a8a4ec573" Oct 9 01:01:42.070741 containerd[1474]: 2024-10-09 01:01:41.998 [INFO][4090] ipam.go 1685: Creating new handle: k8s-pod-network.7f9740f19e231205dc04574abcfd166a1429070af1fbc01c7d4032978d200ceb Oct 9 01:01:42.070741 containerd[1474]: 2024-10-09 01:01:42.006 [INFO][4090] ipam.go 1203: Writing block in order to claim IPs block=192.168.13.64/26 handle="k8s-pod-network.7f9740f19e231205dc04574abcfd166a1429070af1fbc01c7d4032978d200ceb" host="ci-4116.0.0-d-2a8a4ec573" Oct 9 01:01:42.070741 containerd[1474]: 2024-10-09 01:01:42.019 [INFO][4090] ipam.go 1216: Successfully claimed IPs: [192.168.13.66/26] block=192.168.13.64/26 
handle="k8s-pod-network.7f9740f19e231205dc04574abcfd166a1429070af1fbc01c7d4032978d200ceb" host="ci-4116.0.0-d-2a8a4ec573" Oct 9 01:01:42.070741 containerd[1474]: 2024-10-09 01:01:42.019 [INFO][4090] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.13.66/26] handle="k8s-pod-network.7f9740f19e231205dc04574abcfd166a1429070af1fbc01c7d4032978d200ceb" host="ci-4116.0.0-d-2a8a4ec573" Oct 9 01:01:42.070741 containerd[1474]: 2024-10-09 01:01:42.019 [INFO][4090] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:01:42.070741 containerd[1474]: 2024-10-09 01:01:42.019 [INFO][4090] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.13.66/26] IPv6=[] ContainerID="7f9740f19e231205dc04574abcfd166a1429070af1fbc01c7d4032978d200ceb" HandleID="k8s-pod-network.7f9740f19e231205dc04574abcfd166a1429070af1fbc01c7d4032978d200ceb" Workload="ci--4116.0.0--d--2a8a4ec573-k8s-coredns--6f6b679f8f--d95gg-eth0" Oct 9 01:01:42.073116 containerd[1474]: 2024-10-09 01:01:42.024 [INFO][4080] k8s.go 386: Populated endpoint ContainerID="7f9740f19e231205dc04574abcfd166a1429070af1fbc01c7d4032978d200ceb" Namespace="kube-system" Pod="coredns-6f6b679f8f-d95gg" WorkloadEndpoint="ci--4116.0.0--d--2a8a4ec573-k8s-coredns--6f6b679f8f--d95gg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116.0.0--d--2a8a4ec573-k8s-coredns--6f6b679f8f--d95gg-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"104dbbd7-e31c-46d2-8ae4-2ce3a9ced8ae", ResourceVersion:"718", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 1, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116.0.0-d-2a8a4ec573", ContainerID:"", Pod:"coredns-6f6b679f8f-d95gg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.13.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicf0d090d3d6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:01:42.073116 containerd[1474]: 2024-10-09 01:01:42.024 [INFO][4080] k8s.go 387: Calico CNI using IPs: [192.168.13.66/32] ContainerID="7f9740f19e231205dc04574abcfd166a1429070af1fbc01c7d4032978d200ceb" Namespace="kube-system" Pod="coredns-6f6b679f8f-d95gg" WorkloadEndpoint="ci--4116.0.0--d--2a8a4ec573-k8s-coredns--6f6b679f8f--d95gg-eth0" Oct 9 01:01:42.073116 containerd[1474]: 2024-10-09 01:01:42.025 [INFO][4080] dataplane_linux.go 68: Setting the host side veth name to calicf0d090d3d6 ContainerID="7f9740f19e231205dc04574abcfd166a1429070af1fbc01c7d4032978d200ceb" Namespace="kube-system" Pod="coredns-6f6b679f8f-d95gg" WorkloadEndpoint="ci--4116.0.0--d--2a8a4ec573-k8s-coredns--6f6b679f8f--d95gg-eth0" Oct 9 01:01:42.073116 containerd[1474]: 2024-10-09 01:01:42.041 [INFO][4080] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="7f9740f19e231205dc04574abcfd166a1429070af1fbc01c7d4032978d200ceb" Namespace="kube-system" Pod="coredns-6f6b679f8f-d95gg" 
WorkloadEndpoint="ci--4116.0.0--d--2a8a4ec573-k8s-coredns--6f6b679f8f--d95gg-eth0" Oct 9 01:01:42.073116 containerd[1474]: 2024-10-09 01:01:42.043 [INFO][4080] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7f9740f19e231205dc04574abcfd166a1429070af1fbc01c7d4032978d200ceb" Namespace="kube-system" Pod="coredns-6f6b679f8f-d95gg" WorkloadEndpoint="ci--4116.0.0--d--2a8a4ec573-k8s-coredns--6f6b679f8f--d95gg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116.0.0--d--2a8a4ec573-k8s-coredns--6f6b679f8f--d95gg-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"104dbbd7-e31c-46d2-8ae4-2ce3a9ced8ae", ResourceVersion:"718", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 1, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116.0.0-d-2a8a4ec573", ContainerID:"7f9740f19e231205dc04574abcfd166a1429070af1fbc01c7d4032978d200ceb", Pod:"coredns-6f6b679f8f-d95gg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.13.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicf0d090d3d6", MAC:"2a:5f:83:dd:d2:bb", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:01:42.073116 containerd[1474]: 2024-10-09 01:01:42.065 [INFO][4080] k8s.go 500: Wrote updated endpoint to datastore ContainerID="7f9740f19e231205dc04574abcfd166a1429070af1fbc01c7d4032978d200ceb" Namespace="kube-system" Pod="coredns-6f6b679f8f-d95gg" WorkloadEndpoint="ci--4116.0.0--d--2a8a4ec573-k8s-coredns--6f6b679f8f--d95gg-eth0" Oct 9 01:01:42.153534 containerd[1474]: time="2024-10-09T01:01:42.152069470Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:01:42.153534 containerd[1474]: time="2024-10-09T01:01:42.152198595Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:01:42.153534 containerd[1474]: time="2024-10-09T01:01:42.152220559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:01:42.153534 containerd[1474]: time="2024-10-09T01:01:42.152410245Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:01:42.188721 systemd-networkd[1378]: vxlan.calico: Link UP Oct 9 01:01:42.188987 systemd-networkd[1378]: vxlan.calico: Gained carrier Oct 9 01:01:42.226465 systemd[1]: run-containerd-runc-k8s.io-7f9740f19e231205dc04574abcfd166a1429070af1fbc01c7d4032978d200ceb-runc.wVgSfo.mount: Deactivated successfully. Oct 9 01:01:42.248711 systemd[1]: Started cri-containerd-7f9740f19e231205dc04574abcfd166a1429070af1fbc01c7d4032978d200ceb.scope - libcontainer container 7f9740f19e231205dc04574abcfd166a1429070af1fbc01c7d4032978d200ceb. 
Oct 9 01:01:42.254727 systemd-networkd[1378]: cali78ccef8953d: Gained IPv6LL Oct 9 01:01:42.322864 containerd[1474]: time="2024-10-09T01:01:42.322671866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-d95gg,Uid:104dbbd7-e31c-46d2-8ae4-2ce3a9ced8ae,Namespace:kube-system,Attempt:1,} returns sandbox id \"7f9740f19e231205dc04574abcfd166a1429070af1fbc01c7d4032978d200ceb\"" Oct 9 01:01:42.325130 kubelet[2577]: E1009 01:01:42.324687 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 01:01:42.331620 containerd[1474]: time="2024-10-09T01:01:42.331495173Z" level=info msg="CreateContainer within sandbox \"7f9740f19e231205dc04574abcfd166a1429070af1fbc01c7d4032978d200ceb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 9 01:01:42.383702 containerd[1474]: time="2024-10-09T01:01:42.383576901Z" level=info msg="CreateContainer within sandbox \"7f9740f19e231205dc04574abcfd166a1429070af1fbc01c7d4032978d200ceb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cdfee33e4a9745c802ab62f7f1da5658c6b519d35251f79cdc52c87d571b5838\"" Oct 9 01:01:42.386819 containerd[1474]: time="2024-10-09T01:01:42.385239136Z" level=info msg="StartContainer for \"cdfee33e4a9745c802ab62f7f1da5658c6b519d35251f79cdc52c87d571b5838\"" Oct 9 01:01:42.440695 systemd[1]: Started cri-containerd-cdfee33e4a9745c802ab62f7f1da5658c6b519d35251f79cdc52c87d571b5838.scope - libcontainer container cdfee33e4a9745c802ab62f7f1da5658c6b519d35251f79cdc52c87d571b5838. Oct 9 01:01:42.490940 containerd[1474]: time="2024-10-09T01:01:42.490715882Z" level=info msg="StartContainer for \"cdfee33e4a9745c802ab62f7f1da5658c6b519d35251f79cdc52c87d571b5838\" returns successfully" Oct 9 01:01:42.905242 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4023647350.mount: Deactivated successfully. 
Oct 9 01:01:42.947378 kubelet[2577]: E1009 01:01:42.946931 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 01:01:42.975727 kubelet[2577]: I1009 01:01:42.974686 2577 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-d95gg" podStartSLOduration=39.965912387 podStartE2EDuration="39.965912387s" podCreationTimestamp="2024-10-09 01:01:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 01:01:42.965391933 +0000 UTC m=+44.499225461" watchObservedRunningTime="2024-10-09 01:01:42.965912387 +0000 UTC m=+44.499745927" Oct 9 01:01:43.003973 containerd[1474]: time="2024-10-09T01:01:43.003695229Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:01:43.007389 containerd[1474]: time="2024-10-09T01:01:43.007269604Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.1: active requests=0, bytes read=7642081" Oct 9 01:01:43.010737 containerd[1474]: time="2024-10-09T01:01:43.010642396Z" level=info msg="ImageCreate event name:\"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:01:43.014311 containerd[1474]: time="2024-10-09T01:01:43.014248234Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:01:43.017452 containerd[1474]: time="2024-10-09T01:01:43.015864084Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.1\" with image id \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\", repo tag 
\"ghcr.io/flatcar/calico/csi:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\", size \"9134482\" in 1.586997827s" Oct 9 01:01:43.017452 containerd[1474]: time="2024-10-09T01:01:43.015928335Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\" returns image reference \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\"" Oct 9 01:01:43.040074 containerd[1474]: time="2024-10-09T01:01:43.040005731Z" level=info msg="CreateContainer within sandbox \"b55ce21a95139a92ec4a09f9708447497842afad790a81f73ddc92127596b15c\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Oct 9 01:01:43.123674 containerd[1474]: time="2024-10-09T01:01:43.123594492Z" level=info msg="CreateContainer within sandbox \"b55ce21a95139a92ec4a09f9708447497842afad790a81f73ddc92127596b15c\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"c518a138c02e08221c68cc63ebf1388cf08cd453c611de45a8dd606769ab7d0e\"" Oct 9 01:01:43.124877 containerd[1474]: time="2024-10-09T01:01:43.124742351Z" level=info msg="StartContainer for \"c518a138c02e08221c68cc63ebf1388cf08cd453c611de45a8dd606769ab7d0e\"" Oct 9 01:01:43.188777 systemd[1]: Started cri-containerd-c518a138c02e08221c68cc63ebf1388cf08cd453c611de45a8dd606769ab7d0e.scope - libcontainer container c518a138c02e08221c68cc63ebf1388cf08cd453c611de45a8dd606769ab7d0e. 
Oct 9 01:01:43.246630 containerd[1474]: time="2024-10-09T01:01:43.246458415Z" level=info msg="StartContainer for \"c518a138c02e08221c68cc63ebf1388cf08cd453c611de45a8dd606769ab7d0e\" returns successfully" Oct 9 01:01:43.252547 containerd[1474]: time="2024-10-09T01:01:43.251035646Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\"" Oct 9 01:01:43.342695 systemd-networkd[1378]: vxlan.calico: Gained IPv6LL Oct 9 01:01:43.602565 containerd[1474]: time="2024-10-09T01:01:43.602040752Z" level=info msg="StopPodSandbox for \"878f1c5b627ec68e440fed59fff778ab9002a0799f2ad44383463214bcb08686\"" Oct 9 01:01:43.602565 containerd[1474]: time="2024-10-09T01:01:43.602154086Z" level=info msg="StopPodSandbox for \"98b08d0e294dc569debc0a079cdd08bc4658684d5776dc9a5c7210ad6ddf7f31\"" Oct 9 01:01:43.663143 systemd-networkd[1378]: calicf0d090d3d6: Gained IPv6LL Oct 9 01:01:43.784404 containerd[1474]: 2024-10-09 01:01:43.709 [INFO][4321] k8s.go 608: Cleaning up netns ContainerID="878f1c5b627ec68e440fed59fff778ab9002a0799f2ad44383463214bcb08686" Oct 9 01:01:43.784404 containerd[1474]: 2024-10-09 01:01:43.709 [INFO][4321] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="878f1c5b627ec68e440fed59fff778ab9002a0799f2ad44383463214bcb08686" iface="eth0" netns="/var/run/netns/cni-8043ba07-392c-b92f-c4b4-a6d1dd0d21cd" Oct 9 01:01:43.784404 containerd[1474]: 2024-10-09 01:01:43.710 [INFO][4321] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="878f1c5b627ec68e440fed59fff778ab9002a0799f2ad44383463214bcb08686" iface="eth0" netns="/var/run/netns/cni-8043ba07-392c-b92f-c4b4-a6d1dd0d21cd" Oct 9 01:01:43.784404 containerd[1474]: 2024-10-09 01:01:43.711 [INFO][4321] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="878f1c5b627ec68e440fed59fff778ab9002a0799f2ad44383463214bcb08686" iface="eth0" netns="/var/run/netns/cni-8043ba07-392c-b92f-c4b4-a6d1dd0d21cd" Oct 9 01:01:43.784404 containerd[1474]: 2024-10-09 01:01:43.711 [INFO][4321] k8s.go 615: Releasing IP address(es) ContainerID="878f1c5b627ec68e440fed59fff778ab9002a0799f2ad44383463214bcb08686" Oct 9 01:01:43.784404 containerd[1474]: 2024-10-09 01:01:43.711 [INFO][4321] utils.go 188: Calico CNI releasing IP address ContainerID="878f1c5b627ec68e440fed59fff778ab9002a0799f2ad44383463214bcb08686" Oct 9 01:01:43.784404 containerd[1474]: 2024-10-09 01:01:43.758 [INFO][4334] ipam_plugin.go 417: Releasing address using handleID ContainerID="878f1c5b627ec68e440fed59fff778ab9002a0799f2ad44383463214bcb08686" HandleID="k8s-pod-network.878f1c5b627ec68e440fed59fff778ab9002a0799f2ad44383463214bcb08686" Workload="ci--4116.0.0--d--2a8a4ec573-k8s-calico--kube--controllers--5d99d68f9d--swq27-eth0" Oct 9 01:01:43.784404 containerd[1474]: 2024-10-09 01:01:43.758 [INFO][4334] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:01:43.784404 containerd[1474]: 2024-10-09 01:01:43.758 [INFO][4334] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:01:43.784404 containerd[1474]: 2024-10-09 01:01:43.768 [WARNING][4334] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="878f1c5b627ec68e440fed59fff778ab9002a0799f2ad44383463214bcb08686" HandleID="k8s-pod-network.878f1c5b627ec68e440fed59fff778ab9002a0799f2ad44383463214bcb08686" Workload="ci--4116.0.0--d--2a8a4ec573-k8s-calico--kube--controllers--5d99d68f9d--swq27-eth0" Oct 9 01:01:43.784404 containerd[1474]: 2024-10-09 01:01:43.768 [INFO][4334] ipam_plugin.go 445: Releasing address using workloadID ContainerID="878f1c5b627ec68e440fed59fff778ab9002a0799f2ad44383463214bcb08686" HandleID="k8s-pod-network.878f1c5b627ec68e440fed59fff778ab9002a0799f2ad44383463214bcb08686" Workload="ci--4116.0.0--d--2a8a4ec573-k8s-calico--kube--controllers--5d99d68f9d--swq27-eth0" Oct 9 01:01:43.784404 containerd[1474]: 2024-10-09 01:01:43.777 [INFO][4334] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:01:43.784404 containerd[1474]: 2024-10-09 01:01:43.780 [INFO][4321] k8s.go 621: Teardown processing complete. ContainerID="878f1c5b627ec68e440fed59fff778ab9002a0799f2ad44383463214bcb08686" Oct 9 01:01:43.788023 containerd[1474]: time="2024-10-09T01:01:43.784714525Z" level=info msg="TearDown network for sandbox \"878f1c5b627ec68e440fed59fff778ab9002a0799f2ad44383463214bcb08686\" successfully" Oct 9 01:01:43.788023 containerd[1474]: time="2024-10-09T01:01:43.784744760Z" level=info msg="StopPodSandbox for \"878f1c5b627ec68e440fed59fff778ab9002a0799f2ad44383463214bcb08686\" returns successfully" Oct 9 01:01:43.788023 containerd[1474]: time="2024-10-09T01:01:43.787374685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d99d68f9d-swq27,Uid:c458a646-0669-47f2-97fc-a34bf29c9bc5,Namespace:calico-system,Attempt:1,}" Oct 9 01:01:43.807254 containerd[1474]: 2024-10-09 01:01:43.731 [INFO][4320] k8s.go 608: Cleaning up netns ContainerID="98b08d0e294dc569debc0a079cdd08bc4658684d5776dc9a5c7210ad6ddf7f31" Oct 9 01:01:43.807254 containerd[1474]: 2024-10-09 01:01:43.731 [INFO][4320] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="98b08d0e294dc569debc0a079cdd08bc4658684d5776dc9a5c7210ad6ddf7f31" iface="eth0" netns="/var/run/netns/cni-578838c6-f958-ce31-5657-5aa3afe688fe" Oct 9 01:01:43.807254 containerd[1474]: 2024-10-09 01:01:43.733 [INFO][4320] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="98b08d0e294dc569debc0a079cdd08bc4658684d5776dc9a5c7210ad6ddf7f31" iface="eth0" netns="/var/run/netns/cni-578838c6-f958-ce31-5657-5aa3afe688fe" Oct 9 01:01:43.807254 containerd[1474]: 2024-10-09 01:01:43.733 [INFO][4320] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="98b08d0e294dc569debc0a079cdd08bc4658684d5776dc9a5c7210ad6ddf7f31" iface="eth0" netns="/var/run/netns/cni-578838c6-f958-ce31-5657-5aa3afe688fe" Oct 9 01:01:43.807254 containerd[1474]: 2024-10-09 01:01:43.733 [INFO][4320] k8s.go 615: Releasing IP address(es) ContainerID="98b08d0e294dc569debc0a079cdd08bc4658684d5776dc9a5c7210ad6ddf7f31" Oct 9 01:01:43.807254 containerd[1474]: 2024-10-09 01:01:43.733 [INFO][4320] utils.go 188: Calico CNI releasing IP address ContainerID="98b08d0e294dc569debc0a079cdd08bc4658684d5776dc9a5c7210ad6ddf7f31" Oct 9 01:01:43.807254 containerd[1474]: 2024-10-09 01:01:43.779 [INFO][4339] ipam_plugin.go 417: Releasing address using handleID ContainerID="98b08d0e294dc569debc0a079cdd08bc4658684d5776dc9a5c7210ad6ddf7f31" HandleID="k8s-pod-network.98b08d0e294dc569debc0a079cdd08bc4658684d5776dc9a5c7210ad6ddf7f31" Workload="ci--4116.0.0--d--2a8a4ec573-k8s-coredns--6f6b679f8f--f6s92-eth0" Oct 9 01:01:43.807254 containerd[1474]: 2024-10-09 01:01:43.779 [INFO][4339] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:01:43.807254 containerd[1474]: 2024-10-09 01:01:43.779 [INFO][4339] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:01:43.807254 containerd[1474]: 2024-10-09 01:01:43.795 [WARNING][4339] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="98b08d0e294dc569debc0a079cdd08bc4658684d5776dc9a5c7210ad6ddf7f31" HandleID="k8s-pod-network.98b08d0e294dc569debc0a079cdd08bc4658684d5776dc9a5c7210ad6ddf7f31" Workload="ci--4116.0.0--d--2a8a4ec573-k8s-coredns--6f6b679f8f--f6s92-eth0" Oct 9 01:01:43.807254 containerd[1474]: 2024-10-09 01:01:43.795 [INFO][4339] ipam_plugin.go 445: Releasing address using workloadID ContainerID="98b08d0e294dc569debc0a079cdd08bc4658684d5776dc9a5c7210ad6ddf7f31" HandleID="k8s-pod-network.98b08d0e294dc569debc0a079cdd08bc4658684d5776dc9a5c7210ad6ddf7f31" Workload="ci--4116.0.0--d--2a8a4ec573-k8s-coredns--6f6b679f8f--f6s92-eth0" Oct 9 01:01:43.807254 containerd[1474]: 2024-10-09 01:01:43.800 [INFO][4339] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:01:43.807254 containerd[1474]: 2024-10-09 01:01:43.803 [INFO][4320] k8s.go 621: Teardown processing complete. ContainerID="98b08d0e294dc569debc0a079cdd08bc4658684d5776dc9a5c7210ad6ddf7f31" Oct 9 01:01:43.808828 containerd[1474]: time="2024-10-09T01:01:43.807500164Z" level=info msg="TearDown network for sandbox \"98b08d0e294dc569debc0a079cdd08bc4658684d5776dc9a5c7210ad6ddf7f31\" successfully" Oct 9 01:01:43.808828 containerd[1474]: time="2024-10-09T01:01:43.807543290Z" level=info msg="StopPodSandbox for \"98b08d0e294dc569debc0a079cdd08bc4658684d5776dc9a5c7210ad6ddf7f31\" returns successfully" Oct 9 01:01:43.808938 kubelet[2577]: E1009 01:01:43.808352 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 01:01:43.811489 containerd[1474]: time="2024-10-09T01:01:43.811420402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-f6s92,Uid:79b0b543-5f0c-4dfb-9dc8-bfadb1e6489f,Namespace:kube-system,Attempt:1,}" Oct 9 01:01:43.913540 systemd[1]: run-netns-cni\x2d8043ba07\x2d392c\x2db92f\x2dc4b4\x2da6d1dd0d21cd.mount: Deactivated successfully. 
Oct 9 01:01:43.913704 systemd[1]: run-netns-cni\x2d578838c6\x2df958\x2dce31\x2d5657\x2d5aa3afe688fe.mount: Deactivated successfully. Oct 9 01:01:43.953721 kubelet[2577]: E1009 01:01:43.953686 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 01:01:44.117815 systemd-networkd[1378]: cali5d57f287b15: Link UP Oct 9 01:01:44.119716 systemd-networkd[1378]: cali5d57f287b15: Gained carrier Oct 9 01:01:44.151603 containerd[1474]: 2024-10-09 01:01:43.881 [INFO][4346] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4116.0.0--d--2a8a4ec573-k8s-calico--kube--controllers--5d99d68f9d--swq27-eth0 calico-kube-controllers-5d99d68f9d- calico-system c458a646-0669-47f2-97fc-a34bf29c9bc5 742 0 2024-10-09 01:01:17 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5d99d68f9d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4116.0.0-d-2a8a4ec573 calico-kube-controllers-5d99d68f9d-swq27 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali5d57f287b15 [] []}} ContainerID="25561639e7d2bcca69641b20081cdfceb9970693f919094aaddbb443a311ecf1" Namespace="calico-system" Pod="calico-kube-controllers-5d99d68f9d-swq27" WorkloadEndpoint="ci--4116.0.0--d--2a8a4ec573-k8s-calico--kube--controllers--5d99d68f9d--swq27-" Oct 9 01:01:44.151603 containerd[1474]: 2024-10-09 01:01:43.881 [INFO][4346] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="25561639e7d2bcca69641b20081cdfceb9970693f919094aaddbb443a311ecf1" Namespace="calico-system" Pod="calico-kube-controllers-5d99d68f9d-swq27" WorkloadEndpoint="ci--4116.0.0--d--2a8a4ec573-k8s-calico--kube--controllers--5d99d68f9d--swq27-eth0" Oct 9 
01:01:44.151603 containerd[1474]: 2024-10-09 01:01:43.942 [INFO][4368] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="25561639e7d2bcca69641b20081cdfceb9970693f919094aaddbb443a311ecf1" HandleID="k8s-pod-network.25561639e7d2bcca69641b20081cdfceb9970693f919094aaddbb443a311ecf1" Workload="ci--4116.0.0--d--2a8a4ec573-k8s-calico--kube--controllers--5d99d68f9d--swq27-eth0" Oct 9 01:01:44.151603 containerd[1474]: 2024-10-09 01:01:44.059 [INFO][4368] ipam_plugin.go 270: Auto assigning IP ContainerID="25561639e7d2bcca69641b20081cdfceb9970693f919094aaddbb443a311ecf1" HandleID="k8s-pod-network.25561639e7d2bcca69641b20081cdfceb9970693f919094aaddbb443a311ecf1" Workload="ci--4116.0.0--d--2a8a4ec573-k8s-calico--kube--controllers--5d99d68f9d--swq27-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000114a10), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4116.0.0-d-2a8a4ec573", "pod":"calico-kube-controllers-5d99d68f9d-swq27", "timestamp":"2024-10-09 01:01:43.942069808 +0000 UTC"}, Hostname:"ci-4116.0.0-d-2a8a4ec573", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 01:01:44.151603 containerd[1474]: 2024-10-09 01:01:44.059 [INFO][4368] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:01:44.151603 containerd[1474]: 2024-10-09 01:01:44.060 [INFO][4368] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 01:01:44.151603 containerd[1474]: 2024-10-09 01:01:44.061 [INFO][4368] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4116.0.0-d-2a8a4ec573' Oct 9 01:01:44.151603 containerd[1474]: 2024-10-09 01:01:44.065 [INFO][4368] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.25561639e7d2bcca69641b20081cdfceb9970693f919094aaddbb443a311ecf1" host="ci-4116.0.0-d-2a8a4ec573" Oct 9 01:01:44.151603 containerd[1474]: 2024-10-09 01:01:44.072 [INFO][4368] ipam.go 372: Looking up existing affinities for host host="ci-4116.0.0-d-2a8a4ec573" Oct 9 01:01:44.151603 containerd[1474]: 2024-10-09 01:01:44.079 [INFO][4368] ipam.go 489: Trying affinity for 192.168.13.64/26 host="ci-4116.0.0-d-2a8a4ec573" Oct 9 01:01:44.151603 containerd[1474]: 2024-10-09 01:01:44.084 [INFO][4368] ipam.go 155: Attempting to load block cidr=192.168.13.64/26 host="ci-4116.0.0-d-2a8a4ec573" Oct 9 01:01:44.151603 containerd[1474]: 2024-10-09 01:01:44.088 [INFO][4368] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.13.64/26 host="ci-4116.0.0-d-2a8a4ec573" Oct 9 01:01:44.151603 containerd[1474]: 2024-10-09 01:01:44.088 [INFO][4368] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.13.64/26 handle="k8s-pod-network.25561639e7d2bcca69641b20081cdfceb9970693f919094aaddbb443a311ecf1" host="ci-4116.0.0-d-2a8a4ec573" Oct 9 01:01:44.151603 containerd[1474]: 2024-10-09 01:01:44.091 [INFO][4368] ipam.go 1685: Creating new handle: k8s-pod-network.25561639e7d2bcca69641b20081cdfceb9970693f919094aaddbb443a311ecf1 Oct 9 01:01:44.151603 containerd[1474]: 2024-10-09 01:01:44.096 [INFO][4368] ipam.go 1203: Writing block in order to claim IPs block=192.168.13.64/26 handle="k8s-pod-network.25561639e7d2bcca69641b20081cdfceb9970693f919094aaddbb443a311ecf1" host="ci-4116.0.0-d-2a8a4ec573" Oct 9 01:01:44.151603 containerd[1474]: 2024-10-09 01:01:44.105 [INFO][4368] ipam.go 1216: Successfully claimed IPs: [192.168.13.67/26] block=192.168.13.64/26 
handle="k8s-pod-network.25561639e7d2bcca69641b20081cdfceb9970693f919094aaddbb443a311ecf1" host="ci-4116.0.0-d-2a8a4ec573" Oct 9 01:01:44.151603 containerd[1474]: 2024-10-09 01:01:44.106 [INFO][4368] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.13.67/26] handle="k8s-pod-network.25561639e7d2bcca69641b20081cdfceb9970693f919094aaddbb443a311ecf1" host="ci-4116.0.0-d-2a8a4ec573" Oct 9 01:01:44.151603 containerd[1474]: 2024-10-09 01:01:44.106 [INFO][4368] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:01:44.151603 containerd[1474]: 2024-10-09 01:01:44.106 [INFO][4368] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.13.67/26] IPv6=[] ContainerID="25561639e7d2bcca69641b20081cdfceb9970693f919094aaddbb443a311ecf1" HandleID="k8s-pod-network.25561639e7d2bcca69641b20081cdfceb9970693f919094aaddbb443a311ecf1" Workload="ci--4116.0.0--d--2a8a4ec573-k8s-calico--kube--controllers--5d99d68f9d--swq27-eth0" Oct 9 01:01:44.158773 containerd[1474]: 2024-10-09 01:01:44.111 [INFO][4346] k8s.go 386: Populated endpoint ContainerID="25561639e7d2bcca69641b20081cdfceb9970693f919094aaddbb443a311ecf1" Namespace="calico-system" Pod="calico-kube-controllers-5d99d68f9d-swq27" WorkloadEndpoint="ci--4116.0.0--d--2a8a4ec573-k8s-calico--kube--controllers--5d99d68f9d--swq27-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116.0.0--d--2a8a4ec573-k8s-calico--kube--controllers--5d99d68f9d--swq27-eth0", GenerateName:"calico-kube-controllers-5d99d68f9d-", Namespace:"calico-system", SelfLink:"", UID:"c458a646-0669-47f2-97fc-a34bf29c9bc5", ResourceVersion:"742", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 1, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d99d68f9d", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116.0.0-d-2a8a4ec573", ContainerID:"", Pod:"calico-kube-controllers-5d99d68f9d-swq27", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.13.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5d57f287b15", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:01:44.158773 containerd[1474]: 2024-10-09 01:01:44.111 [INFO][4346] k8s.go 387: Calico CNI using IPs: [192.168.13.67/32] ContainerID="25561639e7d2bcca69641b20081cdfceb9970693f919094aaddbb443a311ecf1" Namespace="calico-system" Pod="calico-kube-controllers-5d99d68f9d-swq27" WorkloadEndpoint="ci--4116.0.0--d--2a8a4ec573-k8s-calico--kube--controllers--5d99d68f9d--swq27-eth0" Oct 9 01:01:44.158773 containerd[1474]: 2024-10-09 01:01:44.111 [INFO][4346] dataplane_linux.go 68: Setting the host side veth name to cali5d57f287b15 ContainerID="25561639e7d2bcca69641b20081cdfceb9970693f919094aaddbb443a311ecf1" Namespace="calico-system" Pod="calico-kube-controllers-5d99d68f9d-swq27" WorkloadEndpoint="ci--4116.0.0--d--2a8a4ec573-k8s-calico--kube--controllers--5d99d68f9d--swq27-eth0" Oct 9 01:01:44.158773 containerd[1474]: 2024-10-09 01:01:44.121 [INFO][4346] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="25561639e7d2bcca69641b20081cdfceb9970693f919094aaddbb443a311ecf1" Namespace="calico-system" Pod="calico-kube-controllers-5d99d68f9d-swq27" WorkloadEndpoint="ci--4116.0.0--d--2a8a4ec573-k8s-calico--kube--controllers--5d99d68f9d--swq27-eth0" Oct 9 
01:01:44.158773 containerd[1474]: 2024-10-09 01:01:44.121 [INFO][4346] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="25561639e7d2bcca69641b20081cdfceb9970693f919094aaddbb443a311ecf1" Namespace="calico-system" Pod="calico-kube-controllers-5d99d68f9d-swq27" WorkloadEndpoint="ci--4116.0.0--d--2a8a4ec573-k8s-calico--kube--controllers--5d99d68f9d--swq27-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116.0.0--d--2a8a4ec573-k8s-calico--kube--controllers--5d99d68f9d--swq27-eth0", GenerateName:"calico-kube-controllers-5d99d68f9d-", Namespace:"calico-system", SelfLink:"", UID:"c458a646-0669-47f2-97fc-a34bf29c9bc5", ResourceVersion:"742", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 1, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d99d68f9d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116.0.0-d-2a8a4ec573", ContainerID:"25561639e7d2bcca69641b20081cdfceb9970693f919094aaddbb443a311ecf1", Pod:"calico-kube-controllers-5d99d68f9d-swq27", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.13.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5d57f287b15", MAC:"ae:3d:58:22:e5:c8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 
01:01:44.158773 containerd[1474]: 2024-10-09 01:01:44.136 [INFO][4346] k8s.go 500: Wrote updated endpoint to datastore ContainerID="25561639e7d2bcca69641b20081cdfceb9970693f919094aaddbb443a311ecf1" Namespace="calico-system" Pod="calico-kube-controllers-5d99d68f9d-swq27" WorkloadEndpoint="ci--4116.0.0--d--2a8a4ec573-k8s-calico--kube--controllers--5d99d68f9d--swq27-eth0" Oct 9 01:01:44.219066 containerd[1474]: time="2024-10-09T01:01:44.218756566Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:01:44.219066 containerd[1474]: time="2024-10-09T01:01:44.218848196Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:01:44.219066 containerd[1474]: time="2024-10-09T01:01:44.218884648Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:01:44.221350 containerd[1474]: time="2024-10-09T01:01:44.219094804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:01:44.266776 systemd[1]: Started cri-containerd-25561639e7d2bcca69641b20081cdfceb9970693f919094aaddbb443a311ecf1.scope - libcontainer container 25561639e7d2bcca69641b20081cdfceb9970693f919094aaddbb443a311ecf1. 
Oct 9 01:01:44.277348 systemd-networkd[1378]: cali4f13aea3885: Link UP Oct 9 01:01:44.279824 systemd-networkd[1378]: cali4f13aea3885: Gained carrier Oct 9 01:01:44.313319 containerd[1474]: 2024-10-09 01:01:43.945 [INFO][4356] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4116.0.0--d--2a8a4ec573-k8s-coredns--6f6b679f8f--f6s92-eth0 coredns-6f6b679f8f- kube-system 79b0b543-5f0c-4dfb-9dc8-bfadb1e6489f 743 0 2024-10-09 01:01:03 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4116.0.0-d-2a8a4ec573 coredns-6f6b679f8f-f6s92 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4f13aea3885 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="ff60d3606aa404c276d7b1231d92db7a495ab6ec6cb2c480ea6808ca1b841f64" Namespace="kube-system" Pod="coredns-6f6b679f8f-f6s92" WorkloadEndpoint="ci--4116.0.0--d--2a8a4ec573-k8s-coredns--6f6b679f8f--f6s92-" Oct 9 01:01:44.313319 containerd[1474]: 2024-10-09 01:01:43.945 [INFO][4356] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ff60d3606aa404c276d7b1231d92db7a495ab6ec6cb2c480ea6808ca1b841f64" Namespace="kube-system" Pod="coredns-6f6b679f8f-f6s92" WorkloadEndpoint="ci--4116.0.0--d--2a8a4ec573-k8s-coredns--6f6b679f8f--f6s92-eth0" Oct 9 01:01:44.313319 containerd[1474]: 2024-10-09 01:01:44.002 [INFO][4374] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ff60d3606aa404c276d7b1231d92db7a495ab6ec6cb2c480ea6808ca1b841f64" HandleID="k8s-pod-network.ff60d3606aa404c276d7b1231d92db7a495ab6ec6cb2c480ea6808ca1b841f64" Workload="ci--4116.0.0--d--2a8a4ec573-k8s-coredns--6f6b679f8f--f6s92-eth0" Oct 9 01:01:44.313319 containerd[1474]: 2024-10-09 01:01:44.063 [INFO][4374] ipam_plugin.go 270: Auto assigning IP 
ContainerID="ff60d3606aa404c276d7b1231d92db7a495ab6ec6cb2c480ea6808ca1b841f64" HandleID="k8s-pod-network.ff60d3606aa404c276d7b1231d92db7a495ab6ec6cb2c480ea6808ca1b841f64" Workload="ci--4116.0.0--d--2a8a4ec573-k8s-coredns--6f6b679f8f--f6s92-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318300), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4116.0.0-d-2a8a4ec573", "pod":"coredns-6f6b679f8f-f6s92", "timestamp":"2024-10-09 01:01:44.002801822 +0000 UTC"}, Hostname:"ci-4116.0.0-d-2a8a4ec573", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 01:01:44.313319 containerd[1474]: 2024-10-09 01:01:44.063 [INFO][4374] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:01:44.313319 containerd[1474]: 2024-10-09 01:01:44.106 [INFO][4374] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 01:01:44.313319 containerd[1474]: 2024-10-09 01:01:44.106 [INFO][4374] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4116.0.0-d-2a8a4ec573' Oct 9 01:01:44.313319 containerd[1474]: 2024-10-09 01:01:44.170 [INFO][4374] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ff60d3606aa404c276d7b1231d92db7a495ab6ec6cb2c480ea6808ca1b841f64" host="ci-4116.0.0-d-2a8a4ec573" Oct 9 01:01:44.313319 containerd[1474]: 2024-10-09 01:01:44.181 [INFO][4374] ipam.go 372: Looking up existing affinities for host host="ci-4116.0.0-d-2a8a4ec573" Oct 9 01:01:44.313319 containerd[1474]: 2024-10-09 01:01:44.202 [INFO][4374] ipam.go 489: Trying affinity for 192.168.13.64/26 host="ci-4116.0.0-d-2a8a4ec573" Oct 9 01:01:44.313319 containerd[1474]: 2024-10-09 01:01:44.209 [INFO][4374] ipam.go 155: Attempting to load block cidr=192.168.13.64/26 host="ci-4116.0.0-d-2a8a4ec573" Oct 9 01:01:44.313319 containerd[1474]: 2024-10-09 01:01:44.217 [INFO][4374] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.13.64/26 host="ci-4116.0.0-d-2a8a4ec573" Oct 9 01:01:44.313319 containerd[1474]: 2024-10-09 01:01:44.217 [INFO][4374] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.13.64/26 handle="k8s-pod-network.ff60d3606aa404c276d7b1231d92db7a495ab6ec6cb2c480ea6808ca1b841f64" host="ci-4116.0.0-d-2a8a4ec573" Oct 9 01:01:44.313319 containerd[1474]: 2024-10-09 01:01:44.229 [INFO][4374] ipam.go 1685: Creating new handle: k8s-pod-network.ff60d3606aa404c276d7b1231d92db7a495ab6ec6cb2c480ea6808ca1b841f64 Oct 9 01:01:44.313319 containerd[1474]: 2024-10-09 01:01:44.245 [INFO][4374] ipam.go 1203: Writing block in order to claim IPs block=192.168.13.64/26 handle="k8s-pod-network.ff60d3606aa404c276d7b1231d92db7a495ab6ec6cb2c480ea6808ca1b841f64" host="ci-4116.0.0-d-2a8a4ec573" Oct 9 01:01:44.313319 containerd[1474]: 2024-10-09 01:01:44.260 [INFO][4374] ipam.go 1216: Successfully claimed IPs: [192.168.13.68/26] block=192.168.13.64/26 
handle="k8s-pod-network.ff60d3606aa404c276d7b1231d92db7a495ab6ec6cb2c480ea6808ca1b841f64" host="ci-4116.0.0-d-2a8a4ec573" Oct 9 01:01:44.313319 containerd[1474]: 2024-10-09 01:01:44.260 [INFO][4374] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.13.68/26] handle="k8s-pod-network.ff60d3606aa404c276d7b1231d92db7a495ab6ec6cb2c480ea6808ca1b841f64" host="ci-4116.0.0-d-2a8a4ec573" Oct 9 01:01:44.313319 containerd[1474]: 2024-10-09 01:01:44.260 [INFO][4374] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:01:44.313319 containerd[1474]: 2024-10-09 01:01:44.260 [INFO][4374] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.13.68/26] IPv6=[] ContainerID="ff60d3606aa404c276d7b1231d92db7a495ab6ec6cb2c480ea6808ca1b841f64" HandleID="k8s-pod-network.ff60d3606aa404c276d7b1231d92db7a495ab6ec6cb2c480ea6808ca1b841f64" Workload="ci--4116.0.0--d--2a8a4ec573-k8s-coredns--6f6b679f8f--f6s92-eth0" Oct 9 01:01:44.314975 containerd[1474]: 2024-10-09 01:01:44.269 [INFO][4356] k8s.go 386: Populated endpoint ContainerID="ff60d3606aa404c276d7b1231d92db7a495ab6ec6cb2c480ea6808ca1b841f64" Namespace="kube-system" Pod="coredns-6f6b679f8f-f6s92" WorkloadEndpoint="ci--4116.0.0--d--2a8a4ec573-k8s-coredns--6f6b679f8f--f6s92-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116.0.0--d--2a8a4ec573-k8s-coredns--6f6b679f8f--f6s92-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"79b0b543-5f0c-4dfb-9dc8-bfadb1e6489f", ResourceVersion:"743", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 1, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116.0.0-d-2a8a4ec573", ContainerID:"", Pod:"coredns-6f6b679f8f-f6s92", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.13.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4f13aea3885", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:01:44.314975 containerd[1474]: 2024-10-09 01:01:44.270 [INFO][4356] k8s.go 387: Calico CNI using IPs: [192.168.13.68/32] ContainerID="ff60d3606aa404c276d7b1231d92db7a495ab6ec6cb2c480ea6808ca1b841f64" Namespace="kube-system" Pod="coredns-6f6b679f8f-f6s92" WorkloadEndpoint="ci--4116.0.0--d--2a8a4ec573-k8s-coredns--6f6b679f8f--f6s92-eth0" Oct 9 01:01:44.314975 containerd[1474]: 2024-10-09 01:01:44.270 [INFO][4356] dataplane_linux.go 68: Setting the host side veth name to cali4f13aea3885 ContainerID="ff60d3606aa404c276d7b1231d92db7a495ab6ec6cb2c480ea6808ca1b841f64" Namespace="kube-system" Pod="coredns-6f6b679f8f-f6s92" WorkloadEndpoint="ci--4116.0.0--d--2a8a4ec573-k8s-coredns--6f6b679f8f--f6s92-eth0" Oct 9 01:01:44.314975 containerd[1474]: 2024-10-09 01:01:44.280 [INFO][4356] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="ff60d3606aa404c276d7b1231d92db7a495ab6ec6cb2c480ea6808ca1b841f64" Namespace="kube-system" Pod="coredns-6f6b679f8f-f6s92" 
WorkloadEndpoint="ci--4116.0.0--d--2a8a4ec573-k8s-coredns--6f6b679f8f--f6s92-eth0" Oct 9 01:01:44.314975 containerd[1474]: 2024-10-09 01:01:44.283 [INFO][4356] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ff60d3606aa404c276d7b1231d92db7a495ab6ec6cb2c480ea6808ca1b841f64" Namespace="kube-system" Pod="coredns-6f6b679f8f-f6s92" WorkloadEndpoint="ci--4116.0.0--d--2a8a4ec573-k8s-coredns--6f6b679f8f--f6s92-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116.0.0--d--2a8a4ec573-k8s-coredns--6f6b679f8f--f6s92-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"79b0b543-5f0c-4dfb-9dc8-bfadb1e6489f", ResourceVersion:"743", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 1, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116.0.0-d-2a8a4ec573", ContainerID:"ff60d3606aa404c276d7b1231d92db7a495ab6ec6cb2c480ea6808ca1b841f64", Pod:"coredns-6f6b679f8f-f6s92", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.13.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4f13aea3885", MAC:"3e:4e:44:f6:31:38", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:01:44.314975 containerd[1474]: 2024-10-09 01:01:44.306 [INFO][4356] k8s.go 500: Wrote updated endpoint to datastore ContainerID="ff60d3606aa404c276d7b1231d92db7a495ab6ec6cb2c480ea6808ca1b841f64" Namespace="kube-system" Pod="coredns-6f6b679f8f-f6s92" WorkloadEndpoint="ci--4116.0.0--d--2a8a4ec573-k8s-coredns--6f6b679f8f--f6s92-eth0" Oct 9 01:01:44.388009 containerd[1474]: time="2024-10-09T01:01:44.387473751Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:01:44.388009 containerd[1474]: time="2024-10-09T01:01:44.387548386Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:01:44.388009 containerd[1474]: time="2024-10-09T01:01:44.387562444Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:01:44.388009 containerd[1474]: time="2024-10-09T01:01:44.387672067Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:01:44.427371 systemd[1]: Started cri-containerd-ff60d3606aa404c276d7b1231d92db7a495ab6ec6cb2c480ea6808ca1b841f64.scope - libcontainer container ff60d3606aa404c276d7b1231d92db7a495ab6ec6cb2c480ea6808ca1b841f64. 
Oct 9 01:01:44.446039 containerd[1474]: time="2024-10-09T01:01:44.445855397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d99d68f9d-swq27,Uid:c458a646-0669-47f2-97fc-a34bf29c9bc5,Namespace:calico-system,Attempt:1,} returns sandbox id \"25561639e7d2bcca69641b20081cdfceb9970693f919094aaddbb443a311ecf1\"" Oct 9 01:01:44.488774 containerd[1474]: time="2024-10-09T01:01:44.487258513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-f6s92,Uid:79b0b543-5f0c-4dfb-9dc8-bfadb1e6489f,Namespace:kube-system,Attempt:1,} returns sandbox id \"ff60d3606aa404c276d7b1231d92db7a495ab6ec6cb2c480ea6808ca1b841f64\"" Oct 9 01:01:44.507710 kubelet[2577]: E1009 01:01:44.507654 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 01:01:44.510326 containerd[1474]: time="2024-10-09T01:01:44.510092417Z" level=info msg="CreateContainer within sandbox \"ff60d3606aa404c276d7b1231d92db7a495ab6ec6cb2c480ea6808ca1b841f64\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 9 01:01:44.534209 containerd[1474]: time="2024-10-09T01:01:44.534063901Z" level=info msg="CreateContainer within sandbox \"ff60d3606aa404c276d7b1231d92db7a495ab6ec6cb2c480ea6808ca1b841f64\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"eb1b60cf27c518af5116b3ae892f42a713cbaa1ffe608291765d69b45691edb5\"" Oct 9 01:01:44.535180 containerd[1474]: time="2024-10-09T01:01:44.534953104Z" level=info msg="StartContainer for \"eb1b60cf27c518af5116b3ae892f42a713cbaa1ffe608291765d69b45691edb5\"" Oct 9 01:01:44.588755 systemd[1]: Started cri-containerd-eb1b60cf27c518af5116b3ae892f42a713cbaa1ffe608291765d69b45691edb5.scope - libcontainer container eb1b60cf27c518af5116b3ae892f42a713cbaa1ffe608291765d69b45691edb5. 
Oct 9 01:01:44.637493 containerd[1474]: time="2024-10-09T01:01:44.637052119Z" level=info msg="StartContainer for \"eb1b60cf27c518af5116b3ae892f42a713cbaa1ffe608291765d69b45691edb5\" returns successfully" Oct 9 01:01:44.965003 kubelet[2577]: E1009 01:01:44.964960 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 01:01:44.966114 kubelet[2577]: E1009 01:01:44.966085 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 01:01:45.001270 containerd[1474]: time="2024-10-09T01:01:45.001205265Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:01:45.004902 containerd[1474]: time="2024-10-09T01:01:45.004826060Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1: active requests=0, bytes read=12907822" Oct 9 01:01:45.006504 containerd[1474]: time="2024-10-09T01:01:45.006417316Z" level=info msg="ImageCreate event name:\"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:01:45.014169 containerd[1474]: time="2024-10-09T01:01:45.014070626Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:01:45.020806 containerd[1474]: time="2024-10-09T01:01:45.020654027Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" with image id \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\", repo tag 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\", size \"14400175\" in 1.769557085s" Oct 9 01:01:45.020806 containerd[1474]: time="2024-10-09T01:01:45.020696408Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" returns image reference \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\"" Oct 9 01:01:45.024946 containerd[1474]: time="2024-10-09T01:01:45.023565737Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\"" Oct 9 01:01:45.028425 containerd[1474]: time="2024-10-09T01:01:45.028254594Z" level=info msg="CreateContainer within sandbox \"b55ce21a95139a92ec4a09f9708447497842afad790a81f73ddc92127596b15c\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Oct 9 01:01:45.052594 containerd[1474]: time="2024-10-09T01:01:45.052424458Z" level=info msg="CreateContainer within sandbox \"b55ce21a95139a92ec4a09f9708447497842afad790a81f73ddc92127596b15c\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"b943aff3682f50a572a33b81c0fbc530d2230273357eea28602eb6c11f53ef7a\"" Oct 9 01:01:45.055024 containerd[1474]: time="2024-10-09T01:01:45.053738140Z" level=info msg="StartContainer for \"b943aff3682f50a572a33b81c0fbc530d2230273357eea28602eb6c11f53ef7a\"" Oct 9 01:01:45.106153 systemd[1]: Started cri-containerd-b943aff3682f50a572a33b81c0fbc530d2230273357eea28602eb6c11f53ef7a.scope - libcontainer container b943aff3682f50a572a33b81c0fbc530d2230273357eea28602eb6c11f53ef7a. 
Oct 9 01:01:45.159276 containerd[1474]: time="2024-10-09T01:01:45.159072093Z" level=info msg="StartContainer for \"b943aff3682f50a572a33b81c0fbc530d2230273357eea28602eb6c11f53ef7a\" returns successfully" Oct 9 01:01:45.810830 kubelet[2577]: I1009 01:01:45.810772 2577 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Oct 9 01:01:45.813005 kubelet[2577]: I1009 01:01:45.812804 2577 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Oct 9 01:01:45.972308 kubelet[2577]: E1009 01:01:45.970370 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 01:01:45.998774 kubelet[2577]: I1009 01:01:45.998707 2577 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-f6s92" podStartSLOduration=42.998682803 podStartE2EDuration="42.998682803s" podCreationTimestamp="2024-10-09 01:01:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 01:01:44.98792018 +0000 UTC m=+46.521753707" watchObservedRunningTime="2024-10-09 01:01:45.998682803 +0000 UTC m=+47.532516330" Oct 9 01:01:46.000111 kubelet[2577]: I1009 01:01:45.999965 2577 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-p8pzw" podStartSLOduration=25.405991203 podStartE2EDuration="28.999937412s" podCreationTimestamp="2024-10-09 01:01:17 +0000 UTC" firstStartedPulling="2024-10-09 01:01:41.428175734 +0000 UTC m=+42.962009241" lastFinishedPulling="2024-10-09 01:01:45.022121931 +0000 UTC m=+46.555955450" observedRunningTime="2024-10-09 01:01:45.99865103 +0000 UTC m=+47.532484569" 
watchObservedRunningTime="2024-10-09 01:01:45.999937412 +0000 UTC m=+47.533770939" Oct 9 01:01:46.093730 systemd-networkd[1378]: cali5d57f287b15: Gained IPv6LL Oct 9 01:01:46.157826 systemd-networkd[1378]: cali4f13aea3885: Gained IPv6LL Oct 9 01:01:46.974988 kubelet[2577]: E1009 01:01:46.974507 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 01:01:47.283628 containerd[1474]: time="2024-10-09T01:01:47.282257009Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:01:47.285986 containerd[1474]: time="2024-10-09T01:01:47.285907254Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.1: active requests=0, bytes read=33507125" Oct 9 01:01:47.315676 containerd[1474]: time="2024-10-09T01:01:47.315543237Z" level=info msg="ImageCreate event name:\"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:01:47.337502 containerd[1474]: time="2024-10-09T01:01:47.337409788Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:01:47.340155 containerd[1474]: time="2024-10-09T01:01:47.339654392Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" with image id \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\", size \"34999494\" in 2.316040797s" Oct 9 01:01:47.340155 containerd[1474]: 
time="2024-10-09T01:01:47.339702785Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" returns image reference \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\"" Oct 9 01:01:47.383758 containerd[1474]: time="2024-10-09T01:01:47.383555244Z" level=info msg="CreateContainer within sandbox \"25561639e7d2bcca69641b20081cdfceb9970693f919094aaddbb443a311ecf1\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Oct 9 01:01:47.420077 containerd[1474]: time="2024-10-09T01:01:47.420017322Z" level=info msg="CreateContainer within sandbox \"25561639e7d2bcca69641b20081cdfceb9970693f919094aaddbb443a311ecf1\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"6ef94a37980f3514fcb24289a5310743b9ff95243d78be09aff74848bef684a7\"" Oct 9 01:01:47.422925 containerd[1474]: time="2024-10-09T01:01:47.422829971Z" level=info msg="StartContainer for \"6ef94a37980f3514fcb24289a5310743b9ff95243d78be09aff74848bef684a7\"" Oct 9 01:01:47.573142 systemd[1]: Started cri-containerd-6ef94a37980f3514fcb24289a5310743b9ff95243d78be09aff74848bef684a7.scope - libcontainer container 6ef94a37980f3514fcb24289a5310743b9ff95243d78be09aff74848bef684a7. 
Oct 9 01:01:47.689569 containerd[1474]: time="2024-10-09T01:01:47.688849890Z" level=info msg="StartContainer for \"6ef94a37980f3514fcb24289a5310743b9ff95243d78be09aff74848bef684a7\" returns successfully" Oct 9 01:01:47.985509 kubelet[2577]: E1009 01:01:47.985414 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 01:01:48.017211 kubelet[2577]: I1009 01:01:48.016695 2577 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5d99d68f9d-swq27" podStartSLOduration=28.124878532 podStartE2EDuration="31.016669606s" podCreationTimestamp="2024-10-09 01:01:17 +0000 UTC" firstStartedPulling="2024-10-09 01:01:44.448681755 +0000 UTC m=+45.982515266" lastFinishedPulling="2024-10-09 01:01:47.340472833 +0000 UTC m=+48.874306340" observedRunningTime="2024-10-09 01:01:48.01235533 +0000 UTC m=+49.546188882" watchObservedRunningTime="2024-10-09 01:01:48.016669606 +0000 UTC m=+49.550503132" Oct 9 01:01:48.270698 kubelet[2577]: E1009 01:01:48.270565 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 01:01:48.989733 kubelet[2577]: I1009 01:01:48.988906 2577 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 9 01:01:51.158411 systemd[1]: Started sshd@9-165.232.149.110:22-139.178.68.195:38742.service - OpenSSH per-connection server daemon (139.178.68.195:38742). Oct 9 01:01:51.273680 sshd[4658]: Accepted publickey for core from 139.178.68.195 port 38742 ssh2: RSA SHA256:rAraWF6dAhtbVQzAuCRwvYKxEoENakeAe95MuXIlOkk Oct 9 01:01:51.276039 sshd[4658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:01:51.282578 systemd-logind[1456]: New session 10 of user core. 
Oct 9 01:01:51.292848 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 9 01:01:51.872936 sshd[4658]: pam_unix(sshd:session): session closed for user core Oct 9 01:01:51.877721 systemd[1]: sshd@9-165.232.149.110:22-139.178.68.195:38742.service: Deactivated successfully. Oct 9 01:01:51.880196 systemd[1]: session-10.scope: Deactivated successfully. Oct 9 01:01:51.884098 systemd-logind[1456]: Session 10 logged out. Waiting for processes to exit. Oct 9 01:01:51.887325 systemd-logind[1456]: Removed session 10. Oct 9 01:01:55.881081 kubelet[2577]: I1009 01:01:55.880572 2577 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 9 01:01:56.889939 systemd[1]: Started sshd@10-165.232.149.110:22-139.178.68.195:38744.service - OpenSSH per-connection server daemon (139.178.68.195:38744). Oct 9 01:01:56.955840 sshd[4725]: Accepted publickey for core from 139.178.68.195 port 38744 ssh2: RSA SHA256:rAraWF6dAhtbVQzAuCRwvYKxEoENakeAe95MuXIlOkk Oct 9 01:01:56.958426 sshd[4725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:01:56.965382 systemd-logind[1456]: New session 11 of user core. Oct 9 01:01:56.969826 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 9 01:01:57.189784 sshd[4725]: pam_unix(sshd:session): session closed for user core Oct 9 01:01:57.194729 systemd-logind[1456]: Session 11 logged out. Waiting for processes to exit. Oct 9 01:01:57.195779 systemd[1]: sshd@10-165.232.149.110:22-139.178.68.195:38744.service: Deactivated successfully. Oct 9 01:01:57.198357 systemd[1]: session-11.scope: Deactivated successfully. Oct 9 01:01:57.200420 systemd-logind[1456]: Removed session 11. 
Oct 9 01:01:58.593903 containerd[1474]: time="2024-10-09T01:01:58.593855403Z" level=info msg="StopPodSandbox for \"fddcaff6240e61f23bf00c04f4fd4e016011a8807c84e804e6f11e18c3d4f74b\"" Oct 9 01:01:58.712518 containerd[1474]: 2024-10-09 01:01:58.655 [WARNING][4750] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="fddcaff6240e61f23bf00c04f4fd4e016011a8807c84e804e6f11e18c3d4f74b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116.0.0--d--2a8a4ec573-k8s-csi--node--driver--p8pzw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e8f89e29-cfb4-4b5e-a7cb-66ed0f9162bb", ResourceVersion:"772", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 1, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"779867c8f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116.0.0-d-2a8a4ec573", ContainerID:"b55ce21a95139a92ec4a09f9708447497842afad790a81f73ddc92127596b15c", Pod:"csi-node-driver-p8pzw", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.13.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali78ccef8953d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:01:58.712518 containerd[1474]: 
2024-10-09 01:01:58.656 [INFO][4750] k8s.go 608: Cleaning up netns ContainerID="fddcaff6240e61f23bf00c04f4fd4e016011a8807c84e804e6f11e18c3d4f74b" Oct 9 01:01:58.712518 containerd[1474]: 2024-10-09 01:01:58.656 [INFO][4750] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="fddcaff6240e61f23bf00c04f4fd4e016011a8807c84e804e6f11e18c3d4f74b" iface="eth0" netns="" Oct 9 01:01:58.712518 containerd[1474]: 2024-10-09 01:01:58.656 [INFO][4750] k8s.go 615: Releasing IP address(es) ContainerID="fddcaff6240e61f23bf00c04f4fd4e016011a8807c84e804e6f11e18c3d4f74b" Oct 9 01:01:58.712518 containerd[1474]: 2024-10-09 01:01:58.656 [INFO][4750] utils.go 188: Calico CNI releasing IP address ContainerID="fddcaff6240e61f23bf00c04f4fd4e016011a8807c84e804e6f11e18c3d4f74b" Oct 9 01:01:58.712518 containerd[1474]: 2024-10-09 01:01:58.692 [INFO][4758] ipam_plugin.go 417: Releasing address using handleID ContainerID="fddcaff6240e61f23bf00c04f4fd4e016011a8807c84e804e6f11e18c3d4f74b" HandleID="k8s-pod-network.fddcaff6240e61f23bf00c04f4fd4e016011a8807c84e804e6f11e18c3d4f74b" Workload="ci--4116.0.0--d--2a8a4ec573-k8s-csi--node--driver--p8pzw-eth0" Oct 9 01:01:58.712518 containerd[1474]: 2024-10-09 01:01:58.692 [INFO][4758] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:01:58.712518 containerd[1474]: 2024-10-09 01:01:58.692 [INFO][4758] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:01:58.712518 containerd[1474]: 2024-10-09 01:01:58.703 [WARNING][4758] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fddcaff6240e61f23bf00c04f4fd4e016011a8807c84e804e6f11e18c3d4f74b" HandleID="k8s-pod-network.fddcaff6240e61f23bf00c04f4fd4e016011a8807c84e804e6f11e18c3d4f74b" Workload="ci--4116.0.0--d--2a8a4ec573-k8s-csi--node--driver--p8pzw-eth0" Oct 9 01:01:58.712518 containerd[1474]: 2024-10-09 01:01:58.704 [INFO][4758] ipam_plugin.go 445: Releasing address using workloadID ContainerID="fddcaff6240e61f23bf00c04f4fd4e016011a8807c84e804e6f11e18c3d4f74b" HandleID="k8s-pod-network.fddcaff6240e61f23bf00c04f4fd4e016011a8807c84e804e6f11e18c3d4f74b" Workload="ci--4116.0.0--d--2a8a4ec573-k8s-csi--node--driver--p8pzw-eth0" Oct 9 01:01:58.712518 containerd[1474]: 2024-10-09 01:01:58.708 [INFO][4758] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:01:58.712518 containerd[1474]: 2024-10-09 01:01:58.710 [INFO][4750] k8s.go 621: Teardown processing complete. ContainerID="fddcaff6240e61f23bf00c04f4fd4e016011a8807c84e804e6f11e18c3d4f74b" Oct 9 01:01:58.713548 containerd[1474]: time="2024-10-09T01:01:58.712557509Z" level=info msg="TearDown network for sandbox \"fddcaff6240e61f23bf00c04f4fd4e016011a8807c84e804e6f11e18c3d4f74b\" successfully" Oct 9 01:01:58.713548 containerd[1474]: time="2024-10-09T01:01:58.712584169Z" level=info msg="StopPodSandbox for \"fddcaff6240e61f23bf00c04f4fd4e016011a8807c84e804e6f11e18c3d4f74b\" returns successfully" Oct 9 01:01:58.713548 containerd[1474]: time="2024-10-09T01:01:58.713140667Z" level=info msg="RemovePodSandbox for \"fddcaff6240e61f23bf00c04f4fd4e016011a8807c84e804e6f11e18c3d4f74b\"" Oct 9 01:01:58.721495 containerd[1474]: time="2024-10-09T01:01:58.721410425Z" level=info msg="Forcibly stopping sandbox \"fddcaff6240e61f23bf00c04f4fd4e016011a8807c84e804e6f11e18c3d4f74b\"" Oct 9 01:01:58.828832 containerd[1474]: 2024-10-09 01:01:58.782 [WARNING][4776] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fddcaff6240e61f23bf00c04f4fd4e016011a8807c84e804e6f11e18c3d4f74b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116.0.0--d--2a8a4ec573-k8s-csi--node--driver--p8pzw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e8f89e29-cfb4-4b5e-a7cb-66ed0f9162bb", ResourceVersion:"772", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 1, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"779867c8f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116.0.0-d-2a8a4ec573", ContainerID:"b55ce21a95139a92ec4a09f9708447497842afad790a81f73ddc92127596b15c", Pod:"csi-node-driver-p8pzw", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.13.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali78ccef8953d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:01:58.828832 containerd[1474]: 2024-10-09 01:01:58.782 [INFO][4776] k8s.go 608: Cleaning up netns ContainerID="fddcaff6240e61f23bf00c04f4fd4e016011a8807c84e804e6f11e18c3d4f74b" Oct 9 01:01:58.828832 containerd[1474]: 2024-10-09 01:01:58.782 [INFO][4776] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="fddcaff6240e61f23bf00c04f4fd4e016011a8807c84e804e6f11e18c3d4f74b" iface="eth0" netns="" Oct 9 01:01:58.828832 containerd[1474]: 2024-10-09 01:01:58.782 [INFO][4776] k8s.go 615: Releasing IP address(es) ContainerID="fddcaff6240e61f23bf00c04f4fd4e016011a8807c84e804e6f11e18c3d4f74b" Oct 9 01:01:58.828832 containerd[1474]: 2024-10-09 01:01:58.782 [INFO][4776] utils.go 188: Calico CNI releasing IP address ContainerID="fddcaff6240e61f23bf00c04f4fd4e016011a8807c84e804e6f11e18c3d4f74b" Oct 9 01:01:58.828832 containerd[1474]: 2024-10-09 01:01:58.814 [INFO][4783] ipam_plugin.go 417: Releasing address using handleID ContainerID="fddcaff6240e61f23bf00c04f4fd4e016011a8807c84e804e6f11e18c3d4f74b" HandleID="k8s-pod-network.fddcaff6240e61f23bf00c04f4fd4e016011a8807c84e804e6f11e18c3d4f74b" Workload="ci--4116.0.0--d--2a8a4ec573-k8s-csi--node--driver--p8pzw-eth0" Oct 9 01:01:58.828832 containerd[1474]: 2024-10-09 01:01:58.814 [INFO][4783] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:01:58.828832 containerd[1474]: 2024-10-09 01:01:58.814 [INFO][4783] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:01:58.828832 containerd[1474]: 2024-10-09 01:01:58.822 [WARNING][4783] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fddcaff6240e61f23bf00c04f4fd4e016011a8807c84e804e6f11e18c3d4f74b" HandleID="k8s-pod-network.fddcaff6240e61f23bf00c04f4fd4e016011a8807c84e804e6f11e18c3d4f74b" Workload="ci--4116.0.0--d--2a8a4ec573-k8s-csi--node--driver--p8pzw-eth0" Oct 9 01:01:58.828832 containerd[1474]: 2024-10-09 01:01:58.822 [INFO][4783] ipam_plugin.go 445: Releasing address using workloadID ContainerID="fddcaff6240e61f23bf00c04f4fd4e016011a8807c84e804e6f11e18c3d4f74b" HandleID="k8s-pod-network.fddcaff6240e61f23bf00c04f4fd4e016011a8807c84e804e6f11e18c3d4f74b" Workload="ci--4116.0.0--d--2a8a4ec573-k8s-csi--node--driver--p8pzw-eth0" Oct 9 01:01:58.828832 containerd[1474]: 2024-10-09 01:01:58.824 [INFO][4783] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:01:58.828832 containerd[1474]: 2024-10-09 01:01:58.826 [INFO][4776] k8s.go 621: Teardown processing complete. ContainerID="fddcaff6240e61f23bf00c04f4fd4e016011a8807c84e804e6f11e18c3d4f74b" Oct 9 01:01:58.829456 containerd[1474]: time="2024-10-09T01:01:58.828896519Z" level=info msg="TearDown network for sandbox \"fddcaff6240e61f23bf00c04f4fd4e016011a8807c84e804e6f11e18c3d4f74b\" successfully" Oct 9 01:01:58.841785 containerd[1474]: time="2024-10-09T01:01:58.841720323Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fddcaff6240e61f23bf00c04f4fd4e016011a8807c84e804e6f11e18c3d4f74b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 9 01:01:58.841966 containerd[1474]: time="2024-10-09T01:01:58.841851669Z" level=info msg="RemovePodSandbox \"fddcaff6240e61f23bf00c04f4fd4e016011a8807c84e804e6f11e18c3d4f74b\" returns successfully" Oct 9 01:01:58.842615 containerd[1474]: time="2024-10-09T01:01:58.842574636Z" level=info msg="StopPodSandbox for \"98b08d0e294dc569debc0a079cdd08bc4658684d5776dc9a5c7210ad6ddf7f31\"" Oct 9 01:01:58.954184 containerd[1474]: 2024-10-09 01:01:58.892 [WARNING][4801] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="98b08d0e294dc569debc0a079cdd08bc4658684d5776dc9a5c7210ad6ddf7f31" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116.0.0--d--2a8a4ec573-k8s-coredns--6f6b679f8f--f6s92-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"79b0b543-5f0c-4dfb-9dc8-bfadb1e6489f", ResourceVersion:"774", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 1, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116.0.0-d-2a8a4ec573", ContainerID:"ff60d3606aa404c276d7b1231d92db7a495ab6ec6cb2c480ea6808ca1b841f64", Pod:"coredns-6f6b679f8f-f6s92", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.13.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4f13aea3885", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:01:58.954184 containerd[1474]: 2024-10-09 01:01:58.893 [INFO][4801] k8s.go 608: Cleaning up netns ContainerID="98b08d0e294dc569debc0a079cdd08bc4658684d5776dc9a5c7210ad6ddf7f31" Oct 9 01:01:58.954184 containerd[1474]: 2024-10-09 01:01:58.893 [INFO][4801] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="98b08d0e294dc569debc0a079cdd08bc4658684d5776dc9a5c7210ad6ddf7f31" iface="eth0" netns="" Oct 9 01:01:58.954184 containerd[1474]: 2024-10-09 01:01:58.893 [INFO][4801] k8s.go 615: Releasing IP address(es) ContainerID="98b08d0e294dc569debc0a079cdd08bc4658684d5776dc9a5c7210ad6ddf7f31" Oct 9 01:01:58.954184 containerd[1474]: 2024-10-09 01:01:58.893 [INFO][4801] utils.go 188: Calico CNI releasing IP address ContainerID="98b08d0e294dc569debc0a079cdd08bc4658684d5776dc9a5c7210ad6ddf7f31" Oct 9 01:01:58.954184 containerd[1474]: 2024-10-09 01:01:58.939 [INFO][4808] ipam_plugin.go 417: Releasing address using handleID ContainerID="98b08d0e294dc569debc0a079cdd08bc4658684d5776dc9a5c7210ad6ddf7f31" HandleID="k8s-pod-network.98b08d0e294dc569debc0a079cdd08bc4658684d5776dc9a5c7210ad6ddf7f31" Workload="ci--4116.0.0--d--2a8a4ec573-k8s-coredns--6f6b679f8f--f6s92-eth0" Oct 9 01:01:58.954184 containerd[1474]: 2024-10-09 01:01:58.939 [INFO][4808] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:01:58.954184 containerd[1474]: 2024-10-09 01:01:58.939 [INFO][4808] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 01:01:58.954184 containerd[1474]: 2024-10-09 01:01:58.947 [WARNING][4808] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="98b08d0e294dc569debc0a079cdd08bc4658684d5776dc9a5c7210ad6ddf7f31" HandleID="k8s-pod-network.98b08d0e294dc569debc0a079cdd08bc4658684d5776dc9a5c7210ad6ddf7f31" Workload="ci--4116.0.0--d--2a8a4ec573-k8s-coredns--6f6b679f8f--f6s92-eth0" Oct 9 01:01:58.954184 containerd[1474]: 2024-10-09 01:01:58.947 [INFO][4808] ipam_plugin.go 445: Releasing address using workloadID ContainerID="98b08d0e294dc569debc0a079cdd08bc4658684d5776dc9a5c7210ad6ddf7f31" HandleID="k8s-pod-network.98b08d0e294dc569debc0a079cdd08bc4658684d5776dc9a5c7210ad6ddf7f31" Workload="ci--4116.0.0--d--2a8a4ec573-k8s-coredns--6f6b679f8f--f6s92-eth0" Oct 9 01:01:58.954184 containerd[1474]: 2024-10-09 01:01:58.950 [INFO][4808] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:01:58.954184 containerd[1474]: 2024-10-09 01:01:58.952 [INFO][4801] k8s.go 621: Teardown processing complete. 
ContainerID="98b08d0e294dc569debc0a079cdd08bc4658684d5776dc9a5c7210ad6ddf7f31" Oct 9 01:01:58.954184 containerd[1474]: time="2024-10-09T01:01:58.954096323Z" level=info msg="TearDown network for sandbox \"98b08d0e294dc569debc0a079cdd08bc4658684d5776dc9a5c7210ad6ddf7f31\" successfully" Oct 9 01:01:58.954184 containerd[1474]: time="2024-10-09T01:01:58.954122213Z" level=info msg="StopPodSandbox for \"98b08d0e294dc569debc0a079cdd08bc4658684d5776dc9a5c7210ad6ddf7f31\" returns successfully" Oct 9 01:01:58.955935 containerd[1474]: time="2024-10-09T01:01:58.955891351Z" level=info msg="RemovePodSandbox for \"98b08d0e294dc569debc0a079cdd08bc4658684d5776dc9a5c7210ad6ddf7f31\"" Oct 9 01:01:58.955935 containerd[1474]: time="2024-10-09T01:01:58.955938303Z" level=info msg="Forcibly stopping sandbox \"98b08d0e294dc569debc0a079cdd08bc4658684d5776dc9a5c7210ad6ddf7f31\"" Oct 9 01:01:59.061413 containerd[1474]: 2024-10-09 01:01:59.003 [WARNING][4826] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="98b08d0e294dc569debc0a079cdd08bc4658684d5776dc9a5c7210ad6ddf7f31" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116.0.0--d--2a8a4ec573-k8s-coredns--6f6b679f8f--f6s92-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"79b0b543-5f0c-4dfb-9dc8-bfadb1e6489f", ResourceVersion:"774", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 1, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116.0.0-d-2a8a4ec573", ContainerID:"ff60d3606aa404c276d7b1231d92db7a495ab6ec6cb2c480ea6808ca1b841f64", Pod:"coredns-6f6b679f8f-f6s92", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.13.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4f13aea3885", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:01:59.061413 containerd[1474]: 2024-10-09 01:01:59.004 [INFO][4826] k8s.go 608: 
Cleaning up netns ContainerID="98b08d0e294dc569debc0a079cdd08bc4658684d5776dc9a5c7210ad6ddf7f31" Oct 9 01:01:59.061413 containerd[1474]: 2024-10-09 01:01:59.004 [INFO][4826] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="98b08d0e294dc569debc0a079cdd08bc4658684d5776dc9a5c7210ad6ddf7f31" iface="eth0" netns="" Oct 9 01:01:59.061413 containerd[1474]: 2024-10-09 01:01:59.004 [INFO][4826] k8s.go 615: Releasing IP address(es) ContainerID="98b08d0e294dc569debc0a079cdd08bc4658684d5776dc9a5c7210ad6ddf7f31" Oct 9 01:01:59.061413 containerd[1474]: 2024-10-09 01:01:59.004 [INFO][4826] utils.go 188: Calico CNI releasing IP address ContainerID="98b08d0e294dc569debc0a079cdd08bc4658684d5776dc9a5c7210ad6ddf7f31" Oct 9 01:01:59.061413 containerd[1474]: 2024-10-09 01:01:59.034 [INFO][4832] ipam_plugin.go 417: Releasing address using handleID ContainerID="98b08d0e294dc569debc0a079cdd08bc4658684d5776dc9a5c7210ad6ddf7f31" HandleID="k8s-pod-network.98b08d0e294dc569debc0a079cdd08bc4658684d5776dc9a5c7210ad6ddf7f31" Workload="ci--4116.0.0--d--2a8a4ec573-k8s-coredns--6f6b679f8f--f6s92-eth0" Oct 9 01:01:59.061413 containerd[1474]: 2024-10-09 01:01:59.034 [INFO][4832] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:01:59.061413 containerd[1474]: 2024-10-09 01:01:59.034 [INFO][4832] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:01:59.061413 containerd[1474]: 2024-10-09 01:01:59.046 [WARNING][4832] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="98b08d0e294dc569debc0a079cdd08bc4658684d5776dc9a5c7210ad6ddf7f31" HandleID="k8s-pod-network.98b08d0e294dc569debc0a079cdd08bc4658684d5776dc9a5c7210ad6ddf7f31" Workload="ci--4116.0.0--d--2a8a4ec573-k8s-coredns--6f6b679f8f--f6s92-eth0" Oct 9 01:01:59.061413 containerd[1474]: 2024-10-09 01:01:59.046 [INFO][4832] ipam_plugin.go 445: Releasing address using workloadID ContainerID="98b08d0e294dc569debc0a079cdd08bc4658684d5776dc9a5c7210ad6ddf7f31" HandleID="k8s-pod-network.98b08d0e294dc569debc0a079cdd08bc4658684d5776dc9a5c7210ad6ddf7f31" Workload="ci--4116.0.0--d--2a8a4ec573-k8s-coredns--6f6b679f8f--f6s92-eth0" Oct 9 01:01:59.061413 containerd[1474]: 2024-10-09 01:01:59.050 [INFO][4832] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:01:59.061413 containerd[1474]: 2024-10-09 01:01:59.054 [INFO][4826] k8s.go 621: Teardown processing complete. ContainerID="98b08d0e294dc569debc0a079cdd08bc4658684d5776dc9a5c7210ad6ddf7f31" Oct 9 01:01:59.061413 containerd[1474]: time="2024-10-09T01:01:59.060035270Z" level=info msg="TearDown network for sandbox \"98b08d0e294dc569debc0a079cdd08bc4658684d5776dc9a5c7210ad6ddf7f31\" successfully" Oct 9 01:01:59.080846 containerd[1474]: time="2024-10-09T01:01:59.080779079Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"98b08d0e294dc569debc0a079cdd08bc4658684d5776dc9a5c7210ad6ddf7f31\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 9 01:01:59.080846 containerd[1474]: time="2024-10-09T01:01:59.080856224Z" level=info msg="RemovePodSandbox \"98b08d0e294dc569debc0a079cdd08bc4658684d5776dc9a5c7210ad6ddf7f31\" returns successfully" Oct 9 01:01:59.081622 containerd[1474]: time="2024-10-09T01:01:59.081587326Z" level=info msg="StopPodSandbox for \"6e7c3dec46c9feef948bdd48d45fb2c074d33970aa4adfa01666c39fcdf232b9\"" Oct 9 01:01:59.176067 containerd[1474]: 2024-10-09 01:01:59.131 [WARNING][4850] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6e7c3dec46c9feef948bdd48d45fb2c074d33970aa4adfa01666c39fcdf232b9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116.0.0--d--2a8a4ec573-k8s-coredns--6f6b679f8f--d95gg-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"104dbbd7-e31c-46d2-8ae4-2ce3a9ced8ae", ResourceVersion:"730", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 1, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116.0.0-d-2a8a4ec573", ContainerID:"7f9740f19e231205dc04574abcfd166a1429070af1fbc01c7d4032978d200ceb", Pod:"coredns-6f6b679f8f-d95gg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.13.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicf0d090d3d6", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:01:59.176067 containerd[1474]: 2024-10-09 01:01:59.131 [INFO][4850] k8s.go 608: Cleaning up netns ContainerID="6e7c3dec46c9feef948bdd48d45fb2c074d33970aa4adfa01666c39fcdf232b9" Oct 9 01:01:59.176067 containerd[1474]: 2024-10-09 01:01:59.131 [INFO][4850] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="6e7c3dec46c9feef948bdd48d45fb2c074d33970aa4adfa01666c39fcdf232b9" iface="eth0" netns="" Oct 9 01:01:59.176067 containerd[1474]: 2024-10-09 01:01:59.131 [INFO][4850] k8s.go 615: Releasing IP address(es) ContainerID="6e7c3dec46c9feef948bdd48d45fb2c074d33970aa4adfa01666c39fcdf232b9" Oct 9 01:01:59.176067 containerd[1474]: 2024-10-09 01:01:59.131 [INFO][4850] utils.go 188: Calico CNI releasing IP address ContainerID="6e7c3dec46c9feef948bdd48d45fb2c074d33970aa4adfa01666c39fcdf232b9" Oct 9 01:01:59.176067 containerd[1474]: 2024-10-09 01:01:59.159 [INFO][4856] ipam_plugin.go 417: Releasing address using handleID ContainerID="6e7c3dec46c9feef948bdd48d45fb2c074d33970aa4adfa01666c39fcdf232b9" HandleID="k8s-pod-network.6e7c3dec46c9feef948bdd48d45fb2c074d33970aa4adfa01666c39fcdf232b9" Workload="ci--4116.0.0--d--2a8a4ec573-k8s-coredns--6f6b679f8f--d95gg-eth0" Oct 9 01:01:59.176067 containerd[1474]: 2024-10-09 01:01:59.159 [INFO][4856] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:01:59.176067 containerd[1474]: 2024-10-09 01:01:59.159 [INFO][4856] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 01:01:59.176067 containerd[1474]: 2024-10-09 01:01:59.170 [WARNING][4856] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="6e7c3dec46c9feef948bdd48d45fb2c074d33970aa4adfa01666c39fcdf232b9" HandleID="k8s-pod-network.6e7c3dec46c9feef948bdd48d45fb2c074d33970aa4adfa01666c39fcdf232b9" Workload="ci--4116.0.0--d--2a8a4ec573-k8s-coredns--6f6b679f8f--d95gg-eth0" Oct 9 01:01:59.176067 containerd[1474]: 2024-10-09 01:01:59.170 [INFO][4856] ipam_plugin.go 445: Releasing address using workloadID ContainerID="6e7c3dec46c9feef948bdd48d45fb2c074d33970aa4adfa01666c39fcdf232b9" HandleID="k8s-pod-network.6e7c3dec46c9feef948bdd48d45fb2c074d33970aa4adfa01666c39fcdf232b9" Workload="ci--4116.0.0--d--2a8a4ec573-k8s-coredns--6f6b679f8f--d95gg-eth0" Oct 9 01:01:59.176067 containerd[1474]: 2024-10-09 01:01:59.172 [INFO][4856] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:01:59.176067 containerd[1474]: 2024-10-09 01:01:59.174 [INFO][4850] k8s.go 621: Teardown processing complete. 
ContainerID="6e7c3dec46c9feef948bdd48d45fb2c074d33970aa4adfa01666c39fcdf232b9" Oct 9 01:01:59.176931 containerd[1474]: time="2024-10-09T01:01:59.176584078Z" level=info msg="TearDown network for sandbox \"6e7c3dec46c9feef948bdd48d45fb2c074d33970aa4adfa01666c39fcdf232b9\" successfully" Oct 9 01:01:59.176931 containerd[1474]: time="2024-10-09T01:01:59.176642202Z" level=info msg="StopPodSandbox for \"6e7c3dec46c9feef948bdd48d45fb2c074d33970aa4adfa01666c39fcdf232b9\" returns successfully" Oct 9 01:01:59.177545 containerd[1474]: time="2024-10-09T01:01:59.177507144Z" level=info msg="RemovePodSandbox for \"6e7c3dec46c9feef948bdd48d45fb2c074d33970aa4adfa01666c39fcdf232b9\"" Oct 9 01:01:59.177545 containerd[1474]: time="2024-10-09T01:01:59.177548146Z" level=info msg="Forcibly stopping sandbox \"6e7c3dec46c9feef948bdd48d45fb2c074d33970aa4adfa01666c39fcdf232b9\"" Oct 9 01:01:59.282304 containerd[1474]: 2024-10-09 01:01:59.237 [WARNING][4874] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6e7c3dec46c9feef948bdd48d45fb2c074d33970aa4adfa01666c39fcdf232b9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116.0.0--d--2a8a4ec573-k8s-coredns--6f6b679f8f--d95gg-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"104dbbd7-e31c-46d2-8ae4-2ce3a9ced8ae", ResourceVersion:"730", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 1, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116.0.0-d-2a8a4ec573", ContainerID:"7f9740f19e231205dc04574abcfd166a1429070af1fbc01c7d4032978d200ceb", Pod:"coredns-6f6b679f8f-d95gg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.13.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicf0d090d3d6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:01:59.282304 containerd[1474]: 2024-10-09 01:01:59.237 [INFO][4874] k8s.go 608: 
Cleaning up netns ContainerID="6e7c3dec46c9feef948bdd48d45fb2c074d33970aa4adfa01666c39fcdf232b9" Oct 9 01:01:59.282304 containerd[1474]: 2024-10-09 01:01:59.237 [INFO][4874] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="6e7c3dec46c9feef948bdd48d45fb2c074d33970aa4adfa01666c39fcdf232b9" iface="eth0" netns="" Oct 9 01:01:59.282304 containerd[1474]: 2024-10-09 01:01:59.237 [INFO][4874] k8s.go 615: Releasing IP address(es) ContainerID="6e7c3dec46c9feef948bdd48d45fb2c074d33970aa4adfa01666c39fcdf232b9" Oct 9 01:01:59.282304 containerd[1474]: 2024-10-09 01:01:59.237 [INFO][4874] utils.go 188: Calico CNI releasing IP address ContainerID="6e7c3dec46c9feef948bdd48d45fb2c074d33970aa4adfa01666c39fcdf232b9" Oct 9 01:01:59.282304 containerd[1474]: 2024-10-09 01:01:59.266 [INFO][4880] ipam_plugin.go 417: Releasing address using handleID ContainerID="6e7c3dec46c9feef948bdd48d45fb2c074d33970aa4adfa01666c39fcdf232b9" HandleID="k8s-pod-network.6e7c3dec46c9feef948bdd48d45fb2c074d33970aa4adfa01666c39fcdf232b9" Workload="ci--4116.0.0--d--2a8a4ec573-k8s-coredns--6f6b679f8f--d95gg-eth0" Oct 9 01:01:59.282304 containerd[1474]: 2024-10-09 01:01:59.266 [INFO][4880] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:01:59.282304 containerd[1474]: 2024-10-09 01:01:59.266 [INFO][4880] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:01:59.282304 containerd[1474]: 2024-10-09 01:01:59.273 [WARNING][4880] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6e7c3dec46c9feef948bdd48d45fb2c074d33970aa4adfa01666c39fcdf232b9" HandleID="k8s-pod-network.6e7c3dec46c9feef948bdd48d45fb2c074d33970aa4adfa01666c39fcdf232b9" Workload="ci--4116.0.0--d--2a8a4ec573-k8s-coredns--6f6b679f8f--d95gg-eth0" Oct 9 01:01:59.282304 containerd[1474]: 2024-10-09 01:01:59.273 [INFO][4880] ipam_plugin.go 445: Releasing address using workloadID ContainerID="6e7c3dec46c9feef948bdd48d45fb2c074d33970aa4adfa01666c39fcdf232b9" HandleID="k8s-pod-network.6e7c3dec46c9feef948bdd48d45fb2c074d33970aa4adfa01666c39fcdf232b9" Workload="ci--4116.0.0--d--2a8a4ec573-k8s-coredns--6f6b679f8f--d95gg-eth0" Oct 9 01:01:59.282304 containerd[1474]: 2024-10-09 01:01:59.277 [INFO][4880] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:01:59.282304 containerd[1474]: 2024-10-09 01:01:59.280 [INFO][4874] k8s.go 621: Teardown processing complete. ContainerID="6e7c3dec46c9feef948bdd48d45fb2c074d33970aa4adfa01666c39fcdf232b9" Oct 9 01:01:59.282304 containerd[1474]: time="2024-10-09T01:01:59.282127344Z" level=info msg="TearDown network for sandbox \"6e7c3dec46c9feef948bdd48d45fb2c074d33970aa4adfa01666c39fcdf232b9\" successfully" Oct 9 01:01:59.287071 containerd[1474]: time="2024-10-09T01:01:59.286885620Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6e7c3dec46c9feef948bdd48d45fb2c074d33970aa4adfa01666c39fcdf232b9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 9 01:01:59.287071 containerd[1474]: time="2024-10-09T01:01:59.287068598Z" level=info msg="RemovePodSandbox \"6e7c3dec46c9feef948bdd48d45fb2c074d33970aa4adfa01666c39fcdf232b9\" returns successfully" Oct 9 01:01:59.288029 containerd[1474]: time="2024-10-09T01:01:59.287841074Z" level=info msg="StopPodSandbox for \"878f1c5b627ec68e440fed59fff778ab9002a0799f2ad44383463214bcb08686\"" Oct 9 01:01:59.382969 containerd[1474]: 2024-10-09 01:01:59.335 [WARNING][4898] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="878f1c5b627ec68e440fed59fff778ab9002a0799f2ad44383463214bcb08686" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116.0.0--d--2a8a4ec573-k8s-calico--kube--controllers--5d99d68f9d--swq27-eth0", GenerateName:"calico-kube-controllers-5d99d68f9d-", Namespace:"calico-system", SelfLink:"", UID:"c458a646-0669-47f2-97fc-a34bf29c9bc5", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 1, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d99d68f9d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116.0.0-d-2a8a4ec573", ContainerID:"25561639e7d2bcca69641b20081cdfceb9970693f919094aaddbb443a311ecf1", Pod:"calico-kube-controllers-5d99d68f9d-swq27", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.13.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5d57f287b15", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:01:59.382969 containerd[1474]: 2024-10-09 01:01:59.335 [INFO][4898] k8s.go 608: Cleaning up netns ContainerID="878f1c5b627ec68e440fed59fff778ab9002a0799f2ad44383463214bcb08686" Oct 9 01:01:59.382969 containerd[1474]: 2024-10-09 01:01:59.335 [INFO][4898] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="878f1c5b627ec68e440fed59fff778ab9002a0799f2ad44383463214bcb08686" iface="eth0" netns="" Oct 9 01:01:59.382969 containerd[1474]: 2024-10-09 01:01:59.335 [INFO][4898] k8s.go 615: Releasing IP address(es) ContainerID="878f1c5b627ec68e440fed59fff778ab9002a0799f2ad44383463214bcb08686" Oct 9 01:01:59.382969 containerd[1474]: 2024-10-09 01:01:59.335 [INFO][4898] utils.go 188: Calico CNI releasing IP address ContainerID="878f1c5b627ec68e440fed59fff778ab9002a0799f2ad44383463214bcb08686" Oct 9 01:01:59.382969 containerd[1474]: 2024-10-09 01:01:59.369 [INFO][4904] ipam_plugin.go 417: Releasing address using handleID ContainerID="878f1c5b627ec68e440fed59fff778ab9002a0799f2ad44383463214bcb08686" HandleID="k8s-pod-network.878f1c5b627ec68e440fed59fff778ab9002a0799f2ad44383463214bcb08686" Workload="ci--4116.0.0--d--2a8a4ec573-k8s-calico--kube--controllers--5d99d68f9d--swq27-eth0" Oct 9 01:01:59.382969 containerd[1474]: 2024-10-09 01:01:59.369 [INFO][4904] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:01:59.382969 containerd[1474]: 2024-10-09 01:01:59.369 [INFO][4904] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:01:59.382969 containerd[1474]: 2024-10-09 01:01:59.376 [WARNING][4904] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="878f1c5b627ec68e440fed59fff778ab9002a0799f2ad44383463214bcb08686" HandleID="k8s-pod-network.878f1c5b627ec68e440fed59fff778ab9002a0799f2ad44383463214bcb08686" Workload="ci--4116.0.0--d--2a8a4ec573-k8s-calico--kube--controllers--5d99d68f9d--swq27-eth0" Oct 9 01:01:59.382969 containerd[1474]: 2024-10-09 01:01:59.377 [INFO][4904] ipam_plugin.go 445: Releasing address using workloadID ContainerID="878f1c5b627ec68e440fed59fff778ab9002a0799f2ad44383463214bcb08686" HandleID="k8s-pod-network.878f1c5b627ec68e440fed59fff778ab9002a0799f2ad44383463214bcb08686" Workload="ci--4116.0.0--d--2a8a4ec573-k8s-calico--kube--controllers--5d99d68f9d--swq27-eth0" Oct 9 01:01:59.382969 containerd[1474]: 2024-10-09 01:01:59.379 [INFO][4904] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:01:59.382969 containerd[1474]: 2024-10-09 01:01:59.381 [INFO][4898] k8s.go 621: Teardown processing complete. ContainerID="878f1c5b627ec68e440fed59fff778ab9002a0799f2ad44383463214bcb08686" Oct 9 01:01:59.384542 containerd[1474]: time="2024-10-09T01:01:59.383072338Z" level=info msg="TearDown network for sandbox \"878f1c5b627ec68e440fed59fff778ab9002a0799f2ad44383463214bcb08686\" successfully" Oct 9 01:01:59.384542 containerd[1474]: time="2024-10-09T01:01:59.383103727Z" level=info msg="StopPodSandbox for \"878f1c5b627ec68e440fed59fff778ab9002a0799f2ad44383463214bcb08686\" returns successfully" Oct 9 01:01:59.384542 containerd[1474]: time="2024-10-09T01:01:59.384086068Z" level=info msg="RemovePodSandbox for \"878f1c5b627ec68e440fed59fff778ab9002a0799f2ad44383463214bcb08686\"" Oct 9 01:01:59.384542 containerd[1474]: time="2024-10-09T01:01:59.384192741Z" level=info msg="Forcibly stopping sandbox \"878f1c5b627ec68e440fed59fff778ab9002a0799f2ad44383463214bcb08686\"" Oct 9 01:01:59.475496 containerd[1474]: 2024-10-09 01:01:59.434 [WARNING][4922] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="878f1c5b627ec68e440fed59fff778ab9002a0799f2ad44383463214bcb08686" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116.0.0--d--2a8a4ec573-k8s-calico--kube--controllers--5d99d68f9d--swq27-eth0", GenerateName:"calico-kube-controllers-5d99d68f9d-", Namespace:"calico-system", SelfLink:"", UID:"c458a646-0669-47f2-97fc-a34bf29c9bc5", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 1, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d99d68f9d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116.0.0-d-2a8a4ec573", ContainerID:"25561639e7d2bcca69641b20081cdfceb9970693f919094aaddbb443a311ecf1", Pod:"calico-kube-controllers-5d99d68f9d-swq27", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.13.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5d57f287b15", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:01:59.475496 containerd[1474]: 2024-10-09 01:01:59.435 [INFO][4922] k8s.go 608: Cleaning up netns ContainerID="878f1c5b627ec68e440fed59fff778ab9002a0799f2ad44383463214bcb08686" Oct 9 01:01:59.475496 containerd[1474]: 2024-10-09 01:01:59.435 [INFO][4922] dataplane_linux.go 526: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="878f1c5b627ec68e440fed59fff778ab9002a0799f2ad44383463214bcb08686" iface="eth0" netns="" Oct 9 01:01:59.475496 containerd[1474]: 2024-10-09 01:01:59.435 [INFO][4922] k8s.go 615: Releasing IP address(es) ContainerID="878f1c5b627ec68e440fed59fff778ab9002a0799f2ad44383463214bcb08686" Oct 9 01:01:59.475496 containerd[1474]: 2024-10-09 01:01:59.435 [INFO][4922] utils.go 188: Calico CNI releasing IP address ContainerID="878f1c5b627ec68e440fed59fff778ab9002a0799f2ad44383463214bcb08686" Oct 9 01:01:59.475496 containerd[1474]: 2024-10-09 01:01:59.460 [INFO][4928] ipam_plugin.go 417: Releasing address using handleID ContainerID="878f1c5b627ec68e440fed59fff778ab9002a0799f2ad44383463214bcb08686" HandleID="k8s-pod-network.878f1c5b627ec68e440fed59fff778ab9002a0799f2ad44383463214bcb08686" Workload="ci--4116.0.0--d--2a8a4ec573-k8s-calico--kube--controllers--5d99d68f9d--swq27-eth0" Oct 9 01:01:59.475496 containerd[1474]: 2024-10-09 01:01:59.460 [INFO][4928] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:01:59.475496 containerd[1474]: 2024-10-09 01:01:59.460 [INFO][4928] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:01:59.475496 containerd[1474]: 2024-10-09 01:01:59.468 [WARNING][4928] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="878f1c5b627ec68e440fed59fff778ab9002a0799f2ad44383463214bcb08686" HandleID="k8s-pod-network.878f1c5b627ec68e440fed59fff778ab9002a0799f2ad44383463214bcb08686" Workload="ci--4116.0.0--d--2a8a4ec573-k8s-calico--kube--controllers--5d99d68f9d--swq27-eth0" Oct 9 01:01:59.475496 containerd[1474]: 2024-10-09 01:01:59.468 [INFO][4928] ipam_plugin.go 445: Releasing address using workloadID ContainerID="878f1c5b627ec68e440fed59fff778ab9002a0799f2ad44383463214bcb08686" HandleID="k8s-pod-network.878f1c5b627ec68e440fed59fff778ab9002a0799f2ad44383463214bcb08686" Workload="ci--4116.0.0--d--2a8a4ec573-k8s-calico--kube--controllers--5d99d68f9d--swq27-eth0" Oct 9 01:01:59.475496 containerd[1474]: 2024-10-09 01:01:59.471 [INFO][4928] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:01:59.475496 containerd[1474]: 2024-10-09 01:01:59.473 [INFO][4922] k8s.go 621: Teardown processing complete. ContainerID="878f1c5b627ec68e440fed59fff778ab9002a0799f2ad44383463214bcb08686" Oct 9 01:01:59.475496 containerd[1474]: time="2024-10-09T01:01:59.475213288Z" level=info msg="TearDown network for sandbox \"878f1c5b627ec68e440fed59fff778ab9002a0799f2ad44383463214bcb08686\" successfully" Oct 9 01:01:59.486951 containerd[1474]: time="2024-10-09T01:01:59.486897057Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"878f1c5b627ec68e440fed59fff778ab9002a0799f2ad44383463214bcb08686\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 9 01:01:59.486951 containerd[1474]: time="2024-10-09T01:01:59.487062930Z" level=info msg="RemovePodSandbox \"878f1c5b627ec68e440fed59fff778ab9002a0799f2ad44383463214bcb08686\" returns successfully" Oct 9 01:02:02.208960 systemd[1]: Started sshd@11-165.232.149.110:22-139.178.68.195:53180.service - OpenSSH per-connection server daemon (139.178.68.195:53180). 
Oct 9 01:02:02.327551 sshd[4935]: Accepted publickey for core from 139.178.68.195 port 53180 ssh2: RSA SHA256:rAraWF6dAhtbVQzAuCRwvYKxEoENakeAe95MuXIlOkk Oct 9 01:02:02.330788 sshd[4935]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:02:02.341048 systemd-logind[1456]: New session 12 of user core. Oct 9 01:02:02.355926 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 9 01:02:02.677259 sshd[4935]: pam_unix(sshd:session): session closed for user core Oct 9 01:02:02.692464 systemd-logind[1456]: Session 12 logged out. Waiting for processes to exit. Oct 9 01:02:02.696760 systemd[1]: sshd@11-165.232.149.110:22-139.178.68.195:53180.service: Deactivated successfully. Oct 9 01:02:02.701118 systemd[1]: session-12.scope: Deactivated successfully. Oct 9 01:02:02.703254 systemd-logind[1456]: Removed session 12. Oct 9 01:02:07.698618 systemd[1]: Started sshd@12-165.232.149.110:22-139.178.68.195:53194.service - OpenSSH per-connection server daemon (139.178.68.195:53194). Oct 9 01:02:07.760552 sshd[4963]: Accepted publickey for core from 139.178.68.195 port 53194 ssh2: RSA SHA256:rAraWF6dAhtbVQzAuCRwvYKxEoENakeAe95MuXIlOkk Oct 9 01:02:07.762818 sshd[4963]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:02:07.769987 systemd-logind[1456]: New session 13 of user core. Oct 9 01:02:07.777739 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 9 01:02:07.956978 sshd[4963]: pam_unix(sshd:session): session closed for user core Oct 9 01:02:07.974904 systemd[1]: sshd@12-165.232.149.110:22-139.178.68.195:53194.service: Deactivated successfully. Oct 9 01:02:07.978818 systemd[1]: session-13.scope: Deactivated successfully. Oct 9 01:02:07.981563 systemd-logind[1456]: Session 13 logged out. Waiting for processes to exit. Oct 9 01:02:07.996007 systemd[1]: Started sshd@13-165.232.149.110:22-139.178.68.195:53196.service - OpenSSH per-connection server daemon (139.178.68.195:53196). 
Oct 9 01:02:07.998272 systemd-logind[1456]: Removed session 13. Oct 9 01:02:08.061014 sshd[4977]: Accepted publickey for core from 139.178.68.195 port 53196 ssh2: RSA SHA256:rAraWF6dAhtbVQzAuCRwvYKxEoENakeAe95MuXIlOkk Oct 9 01:02:08.063461 sshd[4977]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:02:08.070203 systemd-logind[1456]: New session 14 of user core. Oct 9 01:02:08.079730 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 9 01:02:08.302687 sshd[4977]: pam_unix(sshd:session): session closed for user core Oct 9 01:02:08.317416 systemd[1]: sshd@13-165.232.149.110:22-139.178.68.195:53196.service: Deactivated successfully. Oct 9 01:02:08.324328 systemd[1]: session-14.scope: Deactivated successfully. Oct 9 01:02:08.327161 systemd-logind[1456]: Session 14 logged out. Waiting for processes to exit. Oct 9 01:02:08.340989 systemd[1]: Started sshd@14-165.232.149.110:22-139.178.68.195:53198.service - OpenSSH per-connection server daemon (139.178.68.195:53198). Oct 9 01:02:08.345155 systemd-logind[1456]: Removed session 14. Oct 9 01:02:08.418484 sshd[4988]: Accepted publickey for core from 139.178.68.195 port 53198 ssh2: RSA SHA256:rAraWF6dAhtbVQzAuCRwvYKxEoENakeAe95MuXIlOkk Oct 9 01:02:08.421216 sshd[4988]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:02:08.428528 systemd-logind[1456]: New session 15 of user core. Oct 9 01:02:08.432682 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 9 01:02:08.605892 sshd[4988]: pam_unix(sshd:session): session closed for user core Oct 9 01:02:08.613242 systemd-logind[1456]: Session 15 logged out. Waiting for processes to exit. Oct 9 01:02:08.615996 systemd[1]: sshd@14-165.232.149.110:22-139.178.68.195:53198.service: Deactivated successfully. Oct 9 01:02:08.621979 systemd[1]: session-15.scope: Deactivated successfully. Oct 9 01:02:08.624357 systemd-logind[1456]: Removed session 15. 
Oct 9 01:02:13.625994 systemd[1]: Started sshd@15-165.232.149.110:22-139.178.68.195:38298.service - OpenSSH per-connection server daemon (139.178.68.195:38298). Oct 9 01:02:13.687575 sshd[5012]: Accepted publickey for core from 139.178.68.195 port 38298 ssh2: RSA SHA256:rAraWF6dAhtbVQzAuCRwvYKxEoENakeAe95MuXIlOkk Oct 9 01:02:13.689842 sshd[5012]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:02:13.697358 systemd-logind[1456]: New session 16 of user core. Oct 9 01:02:13.705788 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 9 01:02:13.856772 sshd[5012]: pam_unix(sshd:session): session closed for user core Oct 9 01:02:13.861573 systemd-logind[1456]: Session 16 logged out. Waiting for processes to exit. Oct 9 01:02:13.861887 systemd[1]: sshd@15-165.232.149.110:22-139.178.68.195:38298.service: Deactivated successfully. Oct 9 01:02:13.865326 systemd[1]: session-16.scope: Deactivated successfully. Oct 9 01:02:13.870418 systemd-logind[1456]: Removed session 16. Oct 9 01:02:18.149188 systemd[1]: run-containerd-runc-k8s.io-acf278550e19c496c7b63aa6c42c233c7f83d39995ccb69bee83ec8bb01824ce-runc.G4C5k5.mount: Deactivated successfully. Oct 9 01:02:18.883604 systemd[1]: Started sshd@16-165.232.149.110:22-139.178.68.195:38310.service - OpenSSH per-connection server daemon (139.178.68.195:38310). Oct 9 01:02:18.994516 sshd[5051]: Accepted publickey for core from 139.178.68.195 port 38310 ssh2: RSA SHA256:rAraWF6dAhtbVQzAuCRwvYKxEoENakeAe95MuXIlOkk Oct 9 01:02:19.000865 sshd[5051]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:02:19.024833 systemd-logind[1456]: New session 17 of user core. Oct 9 01:02:19.032098 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 9 01:02:19.427349 sshd[5051]: pam_unix(sshd:session): session closed for user core Oct 9 01:02:19.435996 systemd[1]: sshd@16-165.232.149.110:22-139.178.68.195:38310.service: Deactivated successfully. 
Oct 9 01:02:19.442599 systemd[1]: session-17.scope: Deactivated successfully. Oct 9 01:02:19.448590 systemd-logind[1456]: Session 17 logged out. Waiting for processes to exit. Oct 9 01:02:19.451735 systemd-logind[1456]: Removed session 17. Oct 9 01:02:24.445088 systemd[1]: Started sshd@17-165.232.149.110:22-139.178.68.195:52646.service - OpenSSH per-connection server daemon (139.178.68.195:52646). Oct 9 01:02:24.528248 sshd[5098]: Accepted publickey for core from 139.178.68.195 port 52646 ssh2: RSA SHA256:rAraWF6dAhtbVQzAuCRwvYKxEoENakeAe95MuXIlOkk Oct 9 01:02:24.533695 sshd[5098]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:02:24.545898 systemd-logind[1456]: New session 18 of user core. Oct 9 01:02:24.552004 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 9 01:02:24.803812 sshd[5098]: pam_unix(sshd:session): session closed for user core Oct 9 01:02:24.811648 systemd[1]: sshd@17-165.232.149.110:22-139.178.68.195:52646.service: Deactivated successfully. Oct 9 01:02:24.816454 systemd[1]: session-18.scope: Deactivated successfully. Oct 9 01:02:24.819346 systemd-logind[1456]: Session 18 logged out. Waiting for processes to exit. Oct 9 01:02:24.820966 systemd-logind[1456]: Removed session 18. Oct 9 01:02:26.602027 kubelet[2577]: E1009 01:02:26.601663 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 01:02:26.819006 systemd[1]: Created slice kubepods-besteffort-podd412c36b_c99d_4516_bf0a_9d6d34460084.slice - libcontainer container kubepods-besteffort-podd412c36b_c99d_4516_bf0a_9d6d34460084.slice. 
Oct 9 01:02:26.877202 kubelet[2577]: I1009 01:02:26.876900 2577 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmh6j\" (UniqueName: \"kubernetes.io/projected/d412c36b-c99d-4516-bf0a-9d6d34460084-kube-api-access-wmh6j\") pod \"calico-apiserver-868dcb9594-99hql\" (UID: \"d412c36b-c99d-4516-bf0a-9d6d34460084\") " pod="calico-apiserver/calico-apiserver-868dcb9594-99hql" Oct 9 01:02:26.877202 kubelet[2577]: I1009 01:02:26.877106 2577 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d412c36b-c99d-4516-bf0a-9d6d34460084-calico-apiserver-certs\") pod \"calico-apiserver-868dcb9594-99hql\" (UID: \"d412c36b-c99d-4516-bf0a-9d6d34460084\") " pod="calico-apiserver/calico-apiserver-868dcb9594-99hql" Oct 9 01:02:26.981038 kubelet[2577]: E1009 01:02:26.980949 2577 secret.go:188] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Oct 9 01:02:27.002941 kubelet[2577]: E1009 01:02:27.002705 2577 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d412c36b-c99d-4516-bf0a-9d6d34460084-calico-apiserver-certs podName:d412c36b-c99d-4516-bf0a-9d6d34460084 nodeName:}" failed. No retries permitted until 2024-10-09 01:02:27.481118325 +0000 UTC m=+89.014951834 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/d412c36b-c99d-4516-bf0a-9d6d34460084-calico-apiserver-certs") pod "calico-apiserver-868dcb9594-99hql" (UID: "d412c36b-c99d-4516-bf0a-9d6d34460084") : secret "calico-apiserver-certs" not found Oct 9 01:02:27.729988 containerd[1474]: time="2024-10-09T01:02:27.729943656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-868dcb9594-99hql,Uid:d412c36b-c99d-4516-bf0a-9d6d34460084,Namespace:calico-apiserver,Attempt:0,}" Oct 9 01:02:27.983845 systemd-networkd[1378]: cali5ef05a42018: Link UP Oct 9 01:02:27.984938 systemd-networkd[1378]: cali5ef05a42018: Gained carrier Oct 9 01:02:28.012804 containerd[1474]: 2024-10-09 01:02:27.829 [INFO][5136] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4116.0.0--d--2a8a4ec573-k8s-calico--apiserver--868dcb9594--99hql-eth0 calico-apiserver-868dcb9594- calico-apiserver d412c36b-c99d-4516-bf0a-9d6d34460084 1068 0 2024-10-09 01:02:26 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:868dcb9594 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4116.0.0-d-2a8a4ec573 calico-apiserver-868dcb9594-99hql eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5ef05a42018 [] []}} ContainerID="424a635955575d010fed87e32ce96383916d45a2f3e21a376915f6d834d0cce6" Namespace="calico-apiserver" Pod="calico-apiserver-868dcb9594-99hql" WorkloadEndpoint="ci--4116.0.0--d--2a8a4ec573-k8s-calico--apiserver--868dcb9594--99hql-" Oct 9 01:02:28.012804 containerd[1474]: 2024-10-09 01:02:27.829 [INFO][5136] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="424a635955575d010fed87e32ce96383916d45a2f3e21a376915f6d834d0cce6" Namespace="calico-apiserver" 
Pod="calico-apiserver-868dcb9594-99hql" WorkloadEndpoint="ci--4116.0.0--d--2a8a4ec573-k8s-calico--apiserver--868dcb9594--99hql-eth0" Oct 9 01:02:28.012804 containerd[1474]: 2024-10-09 01:02:27.883 [INFO][5148] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="424a635955575d010fed87e32ce96383916d45a2f3e21a376915f6d834d0cce6" HandleID="k8s-pod-network.424a635955575d010fed87e32ce96383916d45a2f3e21a376915f6d834d0cce6" Workload="ci--4116.0.0--d--2a8a4ec573-k8s-calico--apiserver--868dcb9594--99hql-eth0" Oct 9 01:02:28.012804 containerd[1474]: 2024-10-09 01:02:27.900 [INFO][5148] ipam_plugin.go 270: Auto assigning IP ContainerID="424a635955575d010fed87e32ce96383916d45a2f3e21a376915f6d834d0cce6" HandleID="k8s-pod-network.424a635955575d010fed87e32ce96383916d45a2f3e21a376915f6d834d0cce6" Workload="ci--4116.0.0--d--2a8a4ec573-k8s-calico--apiserver--868dcb9594--99hql-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000201490), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4116.0.0-d-2a8a4ec573", "pod":"calico-apiserver-868dcb9594-99hql", "timestamp":"2024-10-09 01:02:27.883094885 +0000 UTC"}, Hostname:"ci-4116.0.0-d-2a8a4ec573", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 01:02:28.012804 containerd[1474]: 2024-10-09 01:02:27.900 [INFO][5148] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:02:28.012804 containerd[1474]: 2024-10-09 01:02:27.900 [INFO][5148] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 01:02:28.012804 containerd[1474]: 2024-10-09 01:02:27.900 [INFO][5148] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4116.0.0-d-2a8a4ec573' Oct 9 01:02:28.012804 containerd[1474]: 2024-10-09 01:02:27.913 [INFO][5148] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.424a635955575d010fed87e32ce96383916d45a2f3e21a376915f6d834d0cce6" host="ci-4116.0.0-d-2a8a4ec573" Oct 9 01:02:28.012804 containerd[1474]: 2024-10-09 01:02:27.923 [INFO][5148] ipam.go 372: Looking up existing affinities for host host="ci-4116.0.0-d-2a8a4ec573" Oct 9 01:02:28.012804 containerd[1474]: 2024-10-09 01:02:27.935 [INFO][5148] ipam.go 489: Trying affinity for 192.168.13.64/26 host="ci-4116.0.0-d-2a8a4ec573" Oct 9 01:02:28.012804 containerd[1474]: 2024-10-09 01:02:27.938 [INFO][5148] ipam.go 155: Attempting to load block cidr=192.168.13.64/26 host="ci-4116.0.0-d-2a8a4ec573" Oct 9 01:02:28.012804 containerd[1474]: 2024-10-09 01:02:27.944 [INFO][5148] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.13.64/26 host="ci-4116.0.0-d-2a8a4ec573" Oct 9 01:02:28.012804 containerd[1474]: 2024-10-09 01:02:27.944 [INFO][5148] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.13.64/26 handle="k8s-pod-network.424a635955575d010fed87e32ce96383916d45a2f3e21a376915f6d834d0cce6" host="ci-4116.0.0-d-2a8a4ec573" Oct 9 01:02:28.012804 containerd[1474]: 2024-10-09 01:02:27.952 [INFO][5148] ipam.go 1685: Creating new handle: k8s-pod-network.424a635955575d010fed87e32ce96383916d45a2f3e21a376915f6d834d0cce6 Oct 9 01:02:28.012804 containerd[1474]: 2024-10-09 01:02:27.959 [INFO][5148] ipam.go 1203: Writing block in order to claim IPs block=192.168.13.64/26 handle="k8s-pod-network.424a635955575d010fed87e32ce96383916d45a2f3e21a376915f6d834d0cce6" host="ci-4116.0.0-d-2a8a4ec573" Oct 9 01:02:28.012804 containerd[1474]: 2024-10-09 01:02:27.969 [INFO][5148] ipam.go 1216: Successfully claimed IPs: [192.168.13.69/26] block=192.168.13.64/26 
handle="k8s-pod-network.424a635955575d010fed87e32ce96383916d45a2f3e21a376915f6d834d0cce6" host="ci-4116.0.0-d-2a8a4ec573" Oct 9 01:02:28.012804 containerd[1474]: 2024-10-09 01:02:27.969 [INFO][5148] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.13.69/26] handle="k8s-pod-network.424a635955575d010fed87e32ce96383916d45a2f3e21a376915f6d834d0cce6" host="ci-4116.0.0-d-2a8a4ec573" Oct 9 01:02:28.012804 containerd[1474]: 2024-10-09 01:02:27.969 [INFO][5148] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:02:28.012804 containerd[1474]: 2024-10-09 01:02:27.969 [INFO][5148] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.13.69/26] IPv6=[] ContainerID="424a635955575d010fed87e32ce96383916d45a2f3e21a376915f6d834d0cce6" HandleID="k8s-pod-network.424a635955575d010fed87e32ce96383916d45a2f3e21a376915f6d834d0cce6" Workload="ci--4116.0.0--d--2a8a4ec573-k8s-calico--apiserver--868dcb9594--99hql-eth0" Oct 9 01:02:28.015596 containerd[1474]: 2024-10-09 01:02:27.974 [INFO][5136] k8s.go 386: Populated endpoint ContainerID="424a635955575d010fed87e32ce96383916d45a2f3e21a376915f6d834d0cce6" Namespace="calico-apiserver" Pod="calico-apiserver-868dcb9594-99hql" WorkloadEndpoint="ci--4116.0.0--d--2a8a4ec573-k8s-calico--apiserver--868dcb9594--99hql-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116.0.0--d--2a8a4ec573-k8s-calico--apiserver--868dcb9594--99hql-eth0", GenerateName:"calico-apiserver-868dcb9594-", Namespace:"calico-apiserver", SelfLink:"", UID:"d412c36b-c99d-4516-bf0a-9d6d34460084", ResourceVersion:"1068", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 2, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"868dcb9594", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116.0.0-d-2a8a4ec573", ContainerID:"", Pod:"calico-apiserver-868dcb9594-99hql", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.13.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5ef05a42018", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:02:28.015596 containerd[1474]: 2024-10-09 01:02:27.975 [INFO][5136] k8s.go 387: Calico CNI using IPs: [192.168.13.69/32] ContainerID="424a635955575d010fed87e32ce96383916d45a2f3e21a376915f6d834d0cce6" Namespace="calico-apiserver" Pod="calico-apiserver-868dcb9594-99hql" WorkloadEndpoint="ci--4116.0.0--d--2a8a4ec573-k8s-calico--apiserver--868dcb9594--99hql-eth0" Oct 9 01:02:28.015596 containerd[1474]: 2024-10-09 01:02:27.975 [INFO][5136] dataplane_linux.go 68: Setting the host side veth name to cali5ef05a42018 ContainerID="424a635955575d010fed87e32ce96383916d45a2f3e21a376915f6d834d0cce6" Namespace="calico-apiserver" Pod="calico-apiserver-868dcb9594-99hql" WorkloadEndpoint="ci--4116.0.0--d--2a8a4ec573-k8s-calico--apiserver--868dcb9594--99hql-eth0" Oct 9 01:02:28.015596 containerd[1474]: 2024-10-09 01:02:27.984 [INFO][5136] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="424a635955575d010fed87e32ce96383916d45a2f3e21a376915f6d834d0cce6" Namespace="calico-apiserver" Pod="calico-apiserver-868dcb9594-99hql" WorkloadEndpoint="ci--4116.0.0--d--2a8a4ec573-k8s-calico--apiserver--868dcb9594--99hql-eth0" Oct 9 01:02:28.015596 containerd[1474]: 2024-10-09 
01:02:27.985 [INFO][5136] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="424a635955575d010fed87e32ce96383916d45a2f3e21a376915f6d834d0cce6" Namespace="calico-apiserver" Pod="calico-apiserver-868dcb9594-99hql" WorkloadEndpoint="ci--4116.0.0--d--2a8a4ec573-k8s-calico--apiserver--868dcb9594--99hql-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116.0.0--d--2a8a4ec573-k8s-calico--apiserver--868dcb9594--99hql-eth0", GenerateName:"calico-apiserver-868dcb9594-", Namespace:"calico-apiserver", SelfLink:"", UID:"d412c36b-c99d-4516-bf0a-9d6d34460084", ResourceVersion:"1068", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 2, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"868dcb9594", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116.0.0-d-2a8a4ec573", ContainerID:"424a635955575d010fed87e32ce96383916d45a2f3e21a376915f6d834d0cce6", Pod:"calico-apiserver-868dcb9594-99hql", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.13.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5ef05a42018", MAC:"5e:90:2d:f0:5e:38", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:02:28.015596 containerd[1474]: 2024-10-09 01:02:28.001 [INFO][5136] k8s.go 500: 
Wrote updated endpoint to datastore ContainerID="424a635955575d010fed87e32ce96383916d45a2f3e21a376915f6d834d0cce6" Namespace="calico-apiserver" Pod="calico-apiserver-868dcb9594-99hql" WorkloadEndpoint="ci--4116.0.0--d--2a8a4ec573-k8s-calico--apiserver--868dcb9594--99hql-eth0" Oct 9 01:02:28.084063 containerd[1474]: time="2024-10-09T01:02:28.080867588Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:02:28.084063 containerd[1474]: time="2024-10-09T01:02:28.083565090Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:02:28.084063 containerd[1474]: time="2024-10-09T01:02:28.083602736Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:02:28.084063 containerd[1474]: time="2024-10-09T01:02:28.083805135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:02:28.164188 systemd[1]: Started cri-containerd-424a635955575d010fed87e32ce96383916d45a2f3e21a376915f6d834d0cce6.scope - libcontainer container 424a635955575d010fed87e32ce96383916d45a2f3e21a376915f6d834d0cce6. 
Oct 9 01:02:28.260834 containerd[1474]: time="2024-10-09T01:02:28.260660459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-868dcb9594-99hql,Uid:d412c36b-c99d-4516-bf0a-9d6d34460084,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"424a635955575d010fed87e32ce96383916d45a2f3e21a376915f6d834d0cce6\"" Oct 9 01:02:28.266157 containerd[1474]: time="2024-10-09T01:02:28.265877621Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\"" Oct 9 01:02:29.230093 systemd-networkd[1378]: cali5ef05a42018: Gained IPv6LL Oct 9 01:02:29.841962 systemd[1]: Started sshd@18-165.232.149.110:22-139.178.68.195:52650.service - OpenSSH per-connection server daemon (139.178.68.195:52650). Oct 9 01:02:30.026373 sshd[5211]: Accepted publickey for core from 139.178.68.195 port 52650 ssh2: RSA SHA256:rAraWF6dAhtbVQzAuCRwvYKxEoENakeAe95MuXIlOkk Oct 9 01:02:30.034054 sshd[5211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:02:30.045420 systemd-logind[1456]: New session 19 of user core. Oct 9 01:02:30.051765 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 9 01:02:30.518820 sshd[5211]: pam_unix(sshd:session): session closed for user core Oct 9 01:02:30.526501 systemd-logind[1456]: Session 19 logged out. Waiting for processes to exit. Oct 9 01:02:30.527496 systemd[1]: sshd@18-165.232.149.110:22-139.178.68.195:52650.service: Deactivated successfully. Oct 9 01:02:30.540003 systemd[1]: session-19.scope: Deactivated successfully. Oct 9 01:02:30.549058 systemd-logind[1456]: Removed session 19. 
Oct 9 01:02:30.617036 kubelet[2577]: E1009 01:02:30.616836 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 01:02:31.641987 containerd[1474]: time="2024-10-09T01:02:31.641138809Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:02:31.646414 containerd[1474]: time="2024-10-09T01:02:31.645968135Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=40419849" Oct 9 01:02:31.670492 containerd[1474]: time="2024-10-09T01:02:31.670258560Z" level=info msg="ImageCreate event name:\"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:02:31.703496 containerd[1474]: time="2024-10-09T01:02:31.701613113Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:02:31.703496 containerd[1474]: time="2024-10-09T01:02:31.703281750Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"41912266\" in 3.437114939s" Oct 9 01:02:31.703496 containerd[1474]: time="2024-10-09T01:02:31.703344361Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\"" Oct 9 01:02:31.709775 containerd[1474]: time="2024-10-09T01:02:31.709692405Z" level=info 
msg="CreateContainer within sandbox \"424a635955575d010fed87e32ce96383916d45a2f3e21a376915f6d834d0cce6\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 9 01:02:31.747043 containerd[1474]: time="2024-10-09T01:02:31.746968962Z" level=info msg="CreateContainer within sandbox \"424a635955575d010fed87e32ce96383916d45a2f3e21a376915f6d834d0cce6\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"fb8dbb36c6a06b10fb6e2617f3d6dbf955dfbef69b3e88c01b6b72ca8088b9d7\"" Oct 9 01:02:31.753547 containerd[1474]: time="2024-10-09T01:02:31.749371934Z" level=info msg="StartContainer for \"fb8dbb36c6a06b10fb6e2617f3d6dbf955dfbef69b3e88c01b6b72ca8088b9d7\"" Oct 9 01:02:31.863037 systemd[1]: Started cri-containerd-fb8dbb36c6a06b10fb6e2617f3d6dbf955dfbef69b3e88c01b6b72ca8088b9d7.scope - libcontainer container fb8dbb36c6a06b10fb6e2617f3d6dbf955dfbef69b3e88c01b6b72ca8088b9d7. Oct 9 01:02:32.008642 containerd[1474]: time="2024-10-09T01:02:32.008349760Z" level=info msg="StartContainer for \"fb8dbb36c6a06b10fb6e2617f3d6dbf955dfbef69b3e88c01b6b72ca8088b9d7\" returns successfully" Oct 9 01:02:32.904480 kubelet[2577]: I1009 01:02:32.901771 2577 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-868dcb9594-99hql" podStartSLOduration=3.460731041 podStartE2EDuration="6.901736553s" podCreationTimestamp="2024-10-09 01:02:26 +0000 UTC" firstStartedPulling="2024-10-09 01:02:28.26451843 +0000 UTC m=+89.798351937" lastFinishedPulling="2024-10-09 01:02:31.705523925 +0000 UTC m=+93.239357449" observedRunningTime="2024-10-09 01:02:32.255088944 +0000 UTC m=+93.788922474" watchObservedRunningTime="2024-10-09 01:02:32.901736553 +0000 UTC m=+94.435570091" Oct 9 01:02:33.600966 kubelet[2577]: E1009 01:02:33.600892 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 
01:02:33.602866 kubelet[2577]: E1009 01:02:33.602791 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 01:02:35.552421 systemd[1]: Started sshd@19-165.232.149.110:22-139.178.68.195:53956.service - OpenSSH per-connection server daemon (139.178.68.195:53956). Oct 9 01:02:35.705301 sshd[5288]: Accepted publickey for core from 139.178.68.195 port 53956 ssh2: RSA SHA256:rAraWF6dAhtbVQzAuCRwvYKxEoENakeAe95MuXIlOkk Oct 9 01:02:35.708084 sshd[5288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:02:35.722076 systemd-logind[1456]: New session 20 of user core. Oct 9 01:02:35.726977 systemd[1]: Started session-20.scope - Session 20 of User core. Oct 9 01:02:36.511605 sshd[5288]: pam_unix(sshd:session): session closed for user core Oct 9 01:02:36.527342 systemd[1]: sshd@19-165.232.149.110:22-139.178.68.195:53956.service: Deactivated successfully. Oct 9 01:02:36.531514 systemd[1]: session-20.scope: Deactivated successfully. Oct 9 01:02:36.536092 systemd-logind[1456]: Session 20 logged out. Waiting for processes to exit. Oct 9 01:02:36.549651 systemd[1]: Started sshd@20-165.232.149.110:22-139.178.68.195:53972.service - OpenSSH per-connection server daemon (139.178.68.195:53972). Oct 9 01:02:36.553589 systemd-logind[1456]: Removed session 20. Oct 9 01:02:36.627127 sshd[5301]: Accepted publickey for core from 139.178.68.195 port 53972 ssh2: RSA SHA256:rAraWF6dAhtbVQzAuCRwvYKxEoENakeAe95MuXIlOkk Oct 9 01:02:36.631458 sshd[5301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:02:36.640296 systemd-logind[1456]: New session 21 of user core. Oct 9 01:02:36.649813 systemd[1]: Started session-21.scope - Session 21 of User core. 
Oct 9 01:02:37.152100 sshd[5301]: pam_unix(sshd:session): session closed for user core Oct 9 01:02:37.166685 systemd[1]: sshd@20-165.232.149.110:22-139.178.68.195:53972.service: Deactivated successfully. Oct 9 01:02:37.171366 systemd[1]: session-21.scope: Deactivated successfully. Oct 9 01:02:37.172848 systemd-logind[1456]: Session 21 logged out. Waiting for processes to exit. Oct 9 01:02:37.185621 systemd[1]: Started sshd@21-165.232.149.110:22-139.178.68.195:53982.service - OpenSSH per-connection server daemon (139.178.68.195:53982). Oct 9 01:02:37.192189 systemd-logind[1456]: Removed session 21. Oct 9 01:02:37.272781 sshd[5312]: Accepted publickey for core from 139.178.68.195 port 53982 ssh2: RSA SHA256:rAraWF6dAhtbVQzAuCRwvYKxEoENakeAe95MuXIlOkk Oct 9 01:02:37.276006 sshd[5312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:02:37.284529 systemd-logind[1456]: New session 22 of user core. Oct 9 01:02:37.293800 systemd[1]: Started session-22.scope - Session 22 of User core. Oct 9 01:02:39.871060 sshd[5312]: pam_unix(sshd:session): session closed for user core Oct 9 01:02:39.891185 systemd[1]: Started sshd@22-165.232.149.110:22-139.178.68.195:53984.service - OpenSSH per-connection server daemon (139.178.68.195:53984). Oct 9 01:02:39.892787 systemd[1]: sshd@21-165.232.149.110:22-139.178.68.195:53982.service: Deactivated successfully. Oct 9 01:02:39.900616 systemd[1]: session-22.scope: Deactivated successfully. Oct 9 01:02:39.905596 systemd-logind[1456]: Session 22 logged out. Waiting for processes to exit. Oct 9 01:02:39.912890 systemd-logind[1456]: Removed session 22. Oct 9 01:02:40.016371 sshd[5330]: Accepted publickey for core from 139.178.68.195 port 53984 ssh2: RSA SHA256:rAraWF6dAhtbVQzAuCRwvYKxEoENakeAe95MuXIlOkk Oct 9 01:02:40.017791 sshd[5330]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:02:40.025042 systemd-logind[1456]: New session 23 of user core. 
Oct 9 01:02:40.032782 systemd[1]: Started session-23.scope - Session 23 of User core. Oct 9 01:02:40.784696 sshd[5330]: pam_unix(sshd:session): session closed for user core Oct 9 01:02:40.799964 systemd[1]: sshd@22-165.232.149.110:22-139.178.68.195:53984.service: Deactivated successfully. Oct 9 01:02:40.804123 systemd[1]: session-23.scope: Deactivated successfully. Oct 9 01:02:40.806724 systemd-logind[1456]: Session 23 logged out. Waiting for processes to exit. Oct 9 01:02:40.817075 systemd[1]: Started sshd@23-165.232.149.110:22-139.178.68.195:56654.service - OpenSSH per-connection server daemon (139.178.68.195:56654). Oct 9 01:02:40.824184 systemd-logind[1456]: Removed session 23. Oct 9 01:02:40.880534 sshd[5344]: Accepted publickey for core from 139.178.68.195 port 56654 ssh2: RSA SHA256:rAraWF6dAhtbVQzAuCRwvYKxEoENakeAe95MuXIlOkk Oct 9 01:02:40.882154 sshd[5344]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:02:40.891620 systemd-logind[1456]: New session 24 of user core. Oct 9 01:02:40.899751 systemd[1]: Started session-24.scope - Session 24 of User core. Oct 9 01:02:41.082295 sshd[5344]: pam_unix(sshd:session): session closed for user core Oct 9 01:02:41.091214 systemd[1]: sshd@23-165.232.149.110:22-139.178.68.195:56654.service: Deactivated successfully. Oct 9 01:02:41.094985 systemd[1]: session-24.scope: Deactivated successfully. Oct 9 01:02:41.097001 systemd-logind[1456]: Session 24 logged out. Waiting for processes to exit. Oct 9 01:02:41.099187 systemd-logind[1456]: Removed session 24. Oct 9 01:02:46.103953 systemd[1]: Started sshd@24-165.232.149.110:22-139.178.68.195:56664.service - OpenSSH per-connection server daemon (139.178.68.195:56664). 
Oct 9 01:02:46.159450 sshd[5362]: Accepted publickey for core from 139.178.68.195 port 56664 ssh2: RSA SHA256:rAraWF6dAhtbVQzAuCRwvYKxEoENakeAe95MuXIlOkk
Oct 9 01:02:46.161700 sshd[5362]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:02:46.169295 systemd-logind[1456]: New session 25 of user core.
Oct 9 01:02:46.178784 systemd[1]: Started session-25.scope - Session 25 of User core.
Oct 9 01:02:46.347379 sshd[5362]: pam_unix(sshd:session): session closed for user core
Oct 9 01:02:46.363971 systemd[1]: sshd@24-165.232.149.110:22-139.178.68.195:56664.service: Deactivated successfully.
Oct 9 01:02:46.367804 systemd[1]: session-25.scope: Deactivated successfully.
Oct 9 01:02:46.369508 systemd-logind[1456]: Session 25 logged out. Waiting for processes to exit.
Oct 9 01:02:46.371694 systemd-logind[1456]: Removed session 25.
Oct 9 01:02:51.369187 systemd[1]: Started sshd@25-165.232.149.110:22-139.178.68.195:35384.service - OpenSSH per-connection server daemon (139.178.68.195:35384).
Oct 9 01:02:51.484374 sshd[5405]: Accepted publickey for core from 139.178.68.195 port 35384 ssh2: RSA SHA256:rAraWF6dAhtbVQzAuCRwvYKxEoENakeAe95MuXIlOkk
Oct 9 01:02:51.486690 sshd[5405]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:02:51.496102 systemd-logind[1456]: New session 26 of user core.
Oct 9 01:02:51.501708 systemd[1]: Started session-26.scope - Session 26 of User core.
Oct 9 01:02:51.827902 sshd[5405]: pam_unix(sshd:session): session closed for user core
Oct 9 01:02:51.839799 systemd[1]: sshd@25-165.232.149.110:22-139.178.68.195:35384.service: Deactivated successfully.
Oct 9 01:02:51.846906 systemd[1]: session-26.scope: Deactivated successfully.
Oct 9 01:02:51.849779 systemd-logind[1456]: Session 26 logged out. Waiting for processes to exit.
Oct 9 01:02:51.852093 systemd-logind[1456]: Removed session 26.
Oct 9 01:02:52.601933 kubelet[2577]: E1009 01:02:52.600810 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Oct 9 01:02:55.601138 kubelet[2577]: E1009 01:02:55.601057 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Oct 9 01:02:55.937541 systemd[1]: run-containerd-runc-k8s.io-6ef94a37980f3514fcb24289a5310743b9ff95243d78be09aff74848bef684a7-runc.CYebo2.mount: Deactivated successfully.
Oct 9 01:02:56.858883 systemd[1]: Started sshd@26-165.232.149.110:22-139.178.68.195:35396.service - OpenSSH per-connection server daemon (139.178.68.195:35396).
Oct 9 01:02:56.927516 sshd[5443]: Accepted publickey for core from 139.178.68.195 port 35396 ssh2: RSA SHA256:rAraWF6dAhtbVQzAuCRwvYKxEoENakeAe95MuXIlOkk
Oct 9 01:02:56.930887 sshd[5443]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:02:56.944912 systemd-logind[1456]: New session 27 of user core.
Oct 9 01:02:56.949825 systemd[1]: Started session-27.scope - Session 27 of User core.
Oct 9 01:02:57.156168 sshd[5443]: pam_unix(sshd:session): session closed for user core
Oct 9 01:02:57.163960 systemd-logind[1456]: Session 27 logged out. Waiting for processes to exit.
Oct 9 01:02:57.166788 systemd[1]: sshd@26-165.232.149.110:22-139.178.68.195:35396.service: Deactivated successfully.
Oct 9 01:02:57.173517 systemd[1]: session-27.scope: Deactivated successfully.
Oct 9 01:02:57.176785 systemd-logind[1456]: Removed session 27.
Oct 9 01:03:02.178000 systemd[1]: Started sshd@27-165.232.149.110:22-139.178.68.195:49314.service - OpenSSH per-connection server daemon (139.178.68.195:49314).
Oct 9 01:03:02.259002 sshd[5459]: Accepted publickey for core from 139.178.68.195 port 49314 ssh2: RSA SHA256:rAraWF6dAhtbVQzAuCRwvYKxEoENakeAe95MuXIlOkk
Oct 9 01:03:02.261577 sshd[5459]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:03:02.270530 systemd-logind[1456]: New session 28 of user core.
Oct 9 01:03:02.280814 systemd[1]: Started session-28.scope - Session 28 of User core.
Oct 9 01:03:02.488611 sshd[5459]: pam_unix(sshd:session): session closed for user core
Oct 9 01:03:02.495265 systemd[1]: sshd@27-165.232.149.110:22-139.178.68.195:49314.service: Deactivated successfully.
Oct 9 01:03:02.499703 systemd[1]: session-28.scope: Deactivated successfully.
Oct 9 01:03:02.501510 systemd-logind[1456]: Session 28 logged out. Waiting for processes to exit.
Oct 9 01:03:02.502996 systemd-logind[1456]: Removed session 28.